
The 2010 Ganzfeld Meta-Analysis Explained: 30 Studies, p < 10⁻⁹

Alejandro del Palacio · 5 min read

Two competent groups of statisticians looked at the same parapsychology dataset and reached opposite conclusions. The disagreement is technical, it remains unresolved as of 2026, and it is the cleanest available case study of what live methodological debate in psi research actually looks like.

This article walks through the 2010 Storm-Tressoldi-Di Risio meta-analysis, the 2011 Wagenmakers Bayesian re-analysis, the 2013 Storm rebuttal, and what each side concedes and contests.

What is the Ganzfeld procedure?

The Ganzfeld procedure is a sensory-attenuation telepathy protocol developed in the 1970s. A "receiver" subject sits in a comfortable chair with halved ping-pong balls taped over their closed eyes and red light directed at them, while white noise plays through headphones. The protocol creates a uniform sensory field that subjects describe as similar to floating.

In a distant, sound-isolated room, a "sender" subject concentrates on one of four randomly selected target images for approximately 30 minutes.

After the session, the receiver is shown all four target images and asked to identify which one the sender was looking at. Pure chance: 25% (1 in 4).

The protocol's appeal is methodological: target randomization is mechanical, the receiver and sender are physically separated, the judging is blinded, and the "hit" criterion is unambiguous.
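The 25% chance baseline is easy to sanity-check in simulation. A minimal sketch (not code from any of the studies) of sessions under the null hypothesis, where the receiver is purely guessing:

```python
import random

def simulate_sessions(n_sessions, rng):
    # Under the null hypothesis the receiver has no information about the
    # target: with one target among four decoy-matched images, a "hit"
    # (guess == target) occurs with probability 0.25.
    hits = 0
    for _ in range(n_sessions):
        target = rng.randrange(4)   # mechanically randomized target
        guess = rng.randrange(4)    # receiver's blind choice
        hits += (guess == target)
    return hits / n_sessions

rng = random.Random(42)
rate = simulate_sessions(100_000, rng)
# rate lands close to 0.25; sustained deviations above it across many
# trials are what the meta-analyses argue about
```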

What did Storm 2010 find?

Lance Storm, Patrizio Tressoldi, and Lorenzo Di Risio published a meta-analysis in Psychological Bulletin in 2010, aggregating 30 Ganzfeld studies conducted between 1997 and 2008. The total combined sample across studies was 1,498 trials.

The combined hit rate: 32.2%.

The chance baseline: 25%.

The cumulative statistical significance: p < 10⁻⁹. If chance alone were operating, a combined result at least this extreme would be expected less than one time in a billion.
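The order of magnitude is easy to reproduce with a one-sided binomial tail probability. The sketch below assumes a hit count of roughly 0.322 × 1,498 ≈ 482 (the exact count is not stated in this article), computed in log space to avoid floating-point underflow:

```python
import math

def log_binom_pmf(k, n, theta):
    # log of the binomial probability mass function, via lgamma so that
    # probabilities around 1e-11 do not underflow to zero mid-calculation
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(theta) + (n - k) * math.log(1 - theta))

def binom_sf(k, n, theta):
    # one-sided tail: P(X >= k) for X ~ Binomial(n, theta)
    return sum(math.exp(log_binom_pmf(x, n, theta)) for x in range(k, n + 1))

n_trials, hits = 1498, 482   # assumed hit count, ~32.2% of 1,498 trials
p_value = binom_sf(hits, n_trials, 0.25)
# p_value comes out on the order of 1e-10, consistent with p < 10⁻⁹
```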

The authors concluded: "These results — taken together — suggest that anomalous information transfer is real."

[Storm L, Tressoldi PE, Di Risio L. (2010). Meta-Analysis of Free-Response Studies, 1992-2008: Assessing the Noise Reduction Model in Parapsychology. Psychological Bulletin, 136(4), 471-485.]

What did Wagenmakers 2011 find?

Eric-Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, and Han van der Maas published a counter-analysis in 2011 in Journal of Personality and Social Psychology. They took the same dataset and re-analyzed it using Bayesian methods rather than the frequentist statistics Storm used.

Their conclusion: no evidence of psi.

The reason for the divergence: Bayesian statistics requires the analyst to specify prior probability assumptions — how likely psi is before we look at the data. Wagenmakers used a prior consistent with "psi is highly unlikely a priori." Under that prior, even the 32% hit rate becomes weak evidence because the prior is doing most of the heavy lifting.
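The arithmetic behind "the prior is doing most of the heavy lifting" is short. Posterior odds equal the Bayes factor times the prior odds, so a sufficiently skeptical prior survives even strong data. The numbers below are illustrative assumptions, not the published figures:

```python
# Posterior odds = Bayes factor × prior odds.
bf_data = 1e6                 # assumed strength of evidence from the data
prior_odds_skeptic = 1e-20    # "psi is highly unlikely a priori"
prior_odds_neutral = 1.0      # agnostic starting point

posterior_skeptic = bf_data * prior_odds_skeptic
posterior_neutral = bf_data * prior_odds_neutral
# posterior_skeptic = 1e-14: still overwhelmingly against psi
# posterior_neutral = 1e6: strongly for psi
```

The same evidence moves an agnostic to near-certainty and barely dents a committed skeptic, which is exactly the divergence the two papers exhibit.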

[Wagenmakers EJ, Wetzels R, Borsboom D, van der Maas HLJ. (2011). Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi. Journal of Personality and Social Psychology, 100(3), 426-432.]

What did Storm 2013 say in rebuttal?

Storm published a rebuttal pointing out that the Wagenmakers re-analysis used a prior so skeptical that even strong empirical evidence could not overcome it. He showed that with a more neutral prior — one that does not pre-specify psi as impossible — the same dataset supports psi at substantial Bayes factor levels.

The technical question: what prior probability should be used to evaluate a phenomenon whose mechanism is unknown? Neither side has a knock-down answer.
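One way to see the prior sensitivity concretely is to compute a Bayes factor for the pooled counts under two different priors on the hit rate. This is an illustrative sketch, not a reconstruction of either published analysis; the counts (482 of 1,498) and both prior shapes are assumptions:

```python
import math

def log_binom_pmf(k, n, theta):
    # log binomial pmf via lgamma, to avoid floating-point underflow
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(theta) + (n - k) * math.log(1 - theta))

def bayes_factor(k, n, prior):
    # BF10: marginal likelihood under H1 (hit rate theta drawn from `prior`,
    # evaluated on a grid over (0.25, 0.50]) divided by the likelihood
    # under H0 (theta fixed at exactly 0.25, the chance rate).
    grid = [0.2501 + i * (0.5 - 0.2501) / 2000 for i in range(2001)]
    weights = [prior(t) for t in grid]
    total = sum(weights)
    marginal = sum(w / total * math.exp(log_binom_pmf(k, n, t))
                   for t, w in zip(grid, weights))
    return marginal / math.exp(log_binom_pmf(k, n, 0.25))

n_trials, hits = 1498, 482   # assumed pooled counts, ~32.2% hit rate

# "Neutral" prior: flat over hit rates from 25% to 50%.
bf_neutral = bayes_factor(hits, n_trials, lambda t: 1.0)
# "Skeptical" prior: mass concentrated within about 1% of the chance rate.
bf_skeptic = bayes_factor(hits, n_trials,
                          lambda t: math.exp(-((t - 0.25) / 0.005) ** 2))
# The same counts yield Bayes factors differing by orders of magnitude
```

The point of the sketch is not the specific numbers but the shape of the dispute: identical data, defensible priors, wildly different evidential strength.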

[Storm L, Tressoldi PE, Di Risio L. (2013). Meta-analysis of ESP studies, 1987-2010. Frontiers in Psychology, 4, 209.]

What can we conclude from the unresolved debate?

Three things, all uncomfortable for partisans of either side:

1. The data exists and is significant under one reasonable analysis. Storm 2010's frequentist analysis produced p < 10⁻⁹. At that level, dismissing the result as a Type I error is not credible. The 32.2% hit rate across 1,498 trials is well above chance.

2. The data does not constitute "proof" under another reasonable analysis. Wagenmakers 2011's Bayesian analysis, using a skeptical prior, dissolves the result. Not because the data changes, but because the prior is doing the work.

3. The choice of prior is where the actual disagreement lives. If you believe psi is mechanistically impossible (Wagenmakers's implicit position), then no hit rate of this size can shift your posterior. If you believe psi is mechanistically possible but rare (Storm's implicit position), the result is strong evidence.

Both positions are defensible. The data alone cannot decide between them. This is a feature of contested empirical claims in fundamental science, not a bug.

Why does this matter for the broader pillar?

The Storm-Wagenmakers debate is the live methodological frontier for psi research in 2026. It's also the cleanest example of how the data is the data but the interpretation is theory-laden. Anyone who tells you the field is uniformly debunked is misrepresenting the literature; anyone who tells you it is uniformly proven is doing the same.

For the full picture of where Ganzfeld fits in the psi evidence base, see the Psi Research Evidence pillar.


Published 2026-05-11