Psi Research Criticisms: The Strongest Counter-Arguments

An honest investigator-lane treatment of psi research has to engage seriously with the strongest skeptical arguments. Dismissing them is dishonest; capitulating to them is also dishonest.
This article walks through the three strongest counter-arguments to psi-research evidence, what they get right, and what they elide. The goal: leave you with a defensible position whether you started skeptical or curious.
Wagenmakers's Bayesian prior critique
In 2011, Eric-Jan Wagenmakers and colleagues published a re-analysis of the Storm 2010 Ganzfeld meta-analysis using Bayesian rather than frequentist statistics. Their conclusion: no evidence of psi.
The argument: Bayesian inference requires the analyst to specify a prior probability — how likely the phenomenon is before looking at data. Wagenmakers used a prior consistent with "psi is mechanistically impossible." Under that prior, the same 32% hit rate that Storm's frequentist analysis flagged as p < 10⁻⁹ produces a Bayes factor that fails to overcome the prior.
What it gets right: Bayesian analysis is the appropriate framework for evaluating extraordinary claims, and prior probability matters. If you have strong prior reasons to disbelieve psi, the data must be correspondingly strong to update you.
What it elides: the choice of prior is not data; it is a starting assumption. Wagenmakers's prior assumes psi is impossible to a degree that no realistic empirical evidence could overcome. Critics correctly note that this collapses Bayesian inference into an unfalsifiability argument. Storm's 2013 rebuttal shows that more neutral priors recover the original evidence for the effect.
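The arithmetic of the dispute is simple to sketch. In Bayesian terms, posterior odds = Bayes factor × prior odds, so an extreme enough prior neutralizes any finite amount of evidence. The Bayes factor below is an illustrative placeholder, not Wagenmakers's or Storm's actual figure:

```python
# Posterior odds = Bayes factor * prior odds.
# The Bayes factor here is an illustrative placeholder, not a published value.
bayes_factor = 1e6  # strength of evidence the data provides for an effect

# Neutral, strongly skeptical, and "mechanistically impossible" prior odds:
for prior_odds in (1.0, 1e-3, 1e-20):
    posterior_odds = bayes_factor * prior_odds
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"prior odds {prior_odds:g} -> P(effect | data) = {posterior_prob:.3g}")
```

Under the neutral and skeptical priors, a Bayes factor of a million is decisive; under the 10⁻²⁰ prior, the posterior probability stays vanishingly small. That is the structure of the disagreement: same data, different starting assumptions.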
[Wagenmakers EJ et al. (2011). Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi. Journal of Personality and Social Psychology, 100(3), 426-432.]
Stanovich and the publication-bias argument
Keith Stanovich, in his textbook How to Think Straight About Psychology, argues that small effects across many studies are particularly vulnerable to publication bias — the "file drawer" problem. Researchers and journals preferentially publish significant results, leaving an unknown number of null results in file drawers.
If publication bias is real and severe, then meta-analyses aggregating only published studies overstate true effect sizes.
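The inflation mechanism can be demonstrated with a small simulation: generate studies of a truly null effect, "publish" only those that clear a significance-style cutoff, and average the published effects. The study count, sample size, and cutoff are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(7)

# Simulate a literature of true-null studies (real effect = 0), then "publish"
# only the ones whose mean clears a one-sided significance-style cutoff.
published = []
for _ in range(2000):
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    if mean / se > 1.96:  # crosses the cutoff -> lands in a journal
        published.append(mean)

print(f"published {len(published)} of 2000 null studies")
print(f"mean 'effect' among published studies: {statistics.fmean(published):.2f}")
```

Only a few percent of the null studies get published, but their average effect is well above zero. A meta-analysis that sees only the published subset would report a real-looking effect where none exists; that is the file-drawer problem in miniature.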
What it gets right: publication bias is a documented problem in psychology, not exclusive to parapsychology. The 2011-2016 replication crisis demonstrated that even mainstream findings are vulnerable.
What it elides: parapsychology has been one of the few fields with explicit journal policies for publishing null results. The Bösch, Steinkamp, and Boller 2006 meta-analysis in Psychological Bulletin explicitly modeled publication bias and concluded that even after correction, a small but significant cumulative effect remained.
[Bösch H, Steinkamp F, Boller E. (2006). Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators. Psychological Bulletin, 132(4), 497-523.]
[Stanovich KE. How to Think Straight About Psychology (12th edition, 2019). Pearson.]
Hyman and the protocol-failure analysis
Ray Hyman, a CSICOP-affiliated psychologist, has spent decades auditing parapsychology protocols. His most-cited critique focuses on early Ganzfeld studies (1970s-1980s) with documented sensory-leakage problems: the receiver could sometimes hear the sender through inadequate sound isolation, or pick up cues from experimenters who knew the target.
These protocol failures inflated effect sizes in the early literature.
What it gets right: the early Ganzfeld studies were methodologically weaker than later replications. Hyman's audit identified real problems, and the parapsychology community responded by implementing stricter protocols.
What it elides: the post-Hyman literature, the 1997-2008 studies aggregated in the Storm 2010 meta-analysis, used the automated "auto-Ganzfeld" protocol built to the methodological standards of the 1986 Hyman-Honorton joint communiqué, standards Hyman himself co-authored. These studies were specifically designed to address his criticisms. They still produced p < 10⁻⁹ cumulative significance.
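To see how a modest hit rate compounds into an extreme p-value, an exact binomial tail suffices. The trial count below is an assumed round number for illustration, not the exact total of the Storm meta-analysis; Ganzfeld trials are four-choice, so chance is 25%:

```python
from math import exp, lgamma, log

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed in log space to avoid overflow."""
    def log_pmf(i):
        return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                + i * log(p) + (n - i) * log(1 - p))
    return sum(exp(log_pmf(i)) for i in range(k, n + 1))

# Assumed trial count for illustration, not the meta-analysis's exact total.
n, chance = 1500, 0.25
hits = round(0.32 * n)  # a 32% hit rate
print(f"{hits}/{n} hits vs. 25% chance: one-sided p = {binom_tail(hits, n, chance):.1e}")
```

With roughly 1,500 trials, a 7-point excess over chance already pushes the one-sided p below 10⁻⁹, which is why the dispute centers on priors and protocols rather than on the arithmetic.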
[Hyman R. (1985). The Ganzfeld Psi Experiment: A Critical Appraisal. Journal of Parapsychology, 49(1), 3-49.]
What the strongest critics concede
Reading the skeptical literature carefully reveals what even the most rigorous critics concede:
- The cumulative statistics across PEAR (1.7M trials) and Storm 2010 (30 studies) depart from chance; the effect estimates are small but not zero
- The methodological audits identify real concerns but do not eliminate the cumulative significance
- The disagreement between proponents and skeptics is largely about interpretation under different priors, not about the data itself
This is the actual state of the field in 2026. It is more complicated than "settled debunked pseudoscience" and more complicated than "proven mind-over-matter."
What's the honest takeaway?
Three points worth holding simultaneously:
1. The skeptical critiques are serious and partially correct. Bayesian-prior choice matters. Publication bias is a real concern. Early protocol failures were real. Anyone defending psi research must engage with these critiques rather than dismissing them.
2. The cumulative data survives the audit. After correcting for publication bias (Bösch 2006), using neutral Bayesian priors (Storm 2013), and applying post-Hyman protocols (Storm 2010), the cumulative significance remains.
3. The interpretation is the open question. "Is psi real?" is a metaphysical commitment. "What is the cumulative data measuring?" is the actual empirical question, and it is unresolved.
For the broader evidence base, see the Psi Research Evidence pillar. For one person's n=1 test of the question, see the Alignment Protocol preview.
Sources
- Wagenmakers EJ et al. (2011). Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi. Journal of Personality and Social Psychology, 100(3), 426-432.
- Storm L, Tressoldi PE, Di Risio L. (2013). Meta-analysis of ESP studies, 1987-2010. Frontiers in Psychology.
- Bösch H, Steinkamp F, Boller E. (2006). Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators. Psychological Bulletin, 132(4), 497-523.
- Hyman R. (1985). The Ganzfeld Psi Experiment: A Critical Appraisal. Journal of Parapsychology, 49(1), 3-49.
- Stanovich KE. (2019). How to Think Straight About Psychology (12th ed.). Pearson.