
Is Psi Research Real? The Numbers Nobody Quotes.

14 min read · Alejandro del Palacio
/// 14 CITATIONS · 10 PRIMARY · 2 FOIA · 0.5 ED/100w
A faded Princeton lab notebook page on a wood desk, showing a hand-drawn statistical distribution curve with the annotation 'p < 10⁻¹⁰' in the margin. Amber-lit, documentary aesthetic.


Most coverage of parapsychology — also called "psi research" — stops at the word fringe.

The data does not stop there.

For 28 years, a Princeton lab in the basement of the engineering school ran one of the most rigorous experimental programs in the history of consciousness studies. They published their cumulative results in peer-reviewed venues. Their aggregate significance crossed p < 10⁻¹⁰ — less than a one-in-ten-billion probability that the results were due to chance. Their detractors cited methodology. Their methodology held under audit.

The same period saw a separate program at the CIA log 450+ remote viewing missions with an 89.5% customer return rate from intelligence agencies that had no incentive to keep paying for something that didn't work.

A meta-analysis by Storm, Tressoldi, and Di Risio, published in Psychological Bulletin in 2010, aggregated 30 Ganzfeld telepathy studies and reported a hit rate of 32% against a chance baseline of 25% — significant at p < 10⁻⁹.

"These results — taken together — suggest that anomalous information transfer is real."

Storm, Tressoldi & Di Risio (2010), Psychological Bulletin

This is the data the term fringe has to absorb.

This article is the cluster pillar for our psi-research coverage. We will name every primary source, every effect size, every honest methodological caveat. Then we will tell you what we think the data actually proves, and what it doesn't.

What is psi research and what does the field claim?

Psi research — named for psi (ψ), the Greek letter parapsychology uses to denote the unknown factor — is the controlled-laboratory investigation of anomalous information and energy transfer that cannot be explained by current physics. The field divides into two broad areas: ESP (telepathy, clairvoyance, precognition) and PK (psychokinesis — mind affecting physical systems).

The mainstream framing is that this is pseudoscience. The reality is more complicated: the field has produced peer-reviewed results that meet conventional statistical significance thresholds across multiple independent labs and decades. Whether those results reflect a genuine effect, undetected experimenter bias, or some other artifact is the actual scientific question — and the one most popular science coverage skips.

What did the Princeton PEAR lab actually find?

The Princeton Engineering Anomalies Research (PEAR) lab operated from 1979 to 2007 in the basement of Princeton's School of Engineering and Applied Science. Founded by Robert Jahn, then dean of the engineering school, and Brenda Dunne, the lab ran roughly 1.7 million experimental trials with about 1,100 individual subjects across three program areas.

The core experiment was simple: a subject sat in front of a Random Event Generator — a device that produced a continuous stream of binary digits driven by quantum-mechanical noise — and was asked to "influence" the output toward a higher proportion of ones, then toward zeros, then to leave the baseline unaffected. The output deviation across runs was the dependent variable.

The aggregate per-trial effect size was small — a deviation from chance of about 1 part in 10,000 per trial. Critics correctly point out that this is a tiny per-trial effect. The cumulative statistical significance across decades and millions of trials, however, surpassed p < 10⁻¹⁰.
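
The arithmetic behind that tension is worth seeing. A minimal sketch, using the normal approximation to the binomial and purely illustrative bit counts (PEAR's actual trial structure bundled bits into runs, which this ignores): a fixed 1-in-10,000 bias is statistically invisible in a short run, but the z-score grows with the square root of the total sample, so the same bias becomes arbitrarily significant given enough data.

```python
import math

def one_sided_p(n_bits: int, bias: float) -> float:
    """One-sided p-value for observing a fixed fractional bias
    (proportion of ones = 0.5 + bias) over n_bits fair-coin bits,
    using the normal approximation to the binomial."""
    z = 2.0 * bias * math.sqrt(n_bits)  # (p_hat - 0.5) / (0.5 / sqrt(n))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative only: the same 1-in-10,000 bias at three sample sizes.
for n in (10_000, 1_000_000, 10_000_000_000):
    print(f"n={n:>14,}  p={one_sided_p(n, 0.0001):.3g}")
```

This is why "the per-trial effect is vanishingly small" and "the cumulative significance is enormous" are not contradictory claims: both follow from the same numbers.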

The PEAR lab also ran remote perception studies — subjects in one location attempting to describe scenes being visited by an agent in another location, often hundreds of kilometers away. These produced statistically significant correlations independent of distance, an effect the team published in venues including the Journal of Scientific Exploration and Foundations of Physics.

Mainstream coverage of PEAR almost universally focuses on critics' assertions that the per-trial effect is "vanishingly small." This is true. It is also true that the cumulative significance across replications crossed a threshold no other contested experimental finding in psychology has approached. Both can be facts.

[Jahn RG, Dunne BJ. (2005). The PEAR Proposition. Journal of Scientific Exploration, 19(2).]

Did the Ganzfeld telepathy experiments replicate?

The Ganzfeld procedure — German for "whole field" — is a sensory-attenuation protocol developed in the 1970s. A "receiver" subject sits with halved ping-pong balls over their eyes and white noise playing through headphones, while a "sender" in a distant room concentrates on one of four randomly selected target images. After 30 minutes, the receiver is shown all four images and asked to identify which the sender was looking at. Pure chance: 25%.
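
The four-choice scoring above is a plain binomial, so the one-sided probability of a given hit count at the 25% guess rate can be computed exactly. A minimal sketch with hypothetical session counts (the pooled trial count is not quoted in this article): a 32% hit rate is unremarkable in a single small study and becomes decisive only when many sessions are pooled, which is why the field leans on meta-analysis.

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.25) -> float:
    """Exact one-sided P(X >= k) for X ~ Binomial(n, p):
    the chance of k or more hits in n sessions at guess rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: one study of 100 sessions vs. a pooled 1,000,
# both at the same 32% hit rate.
print(binom_tail(100, 32))    # a single study: not significant
print(binom_tail(1000, 320))  # pooled sessions: strongly significant
```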

In 2010, Storm, Tressoldi, and Di Risio published a meta-analysis in Psychological Bulletin — one of the highest-impact psychology journals — aggregating results from 30 Ganzfeld studies conducted between 1992 and 2008. The combined hit rate was 32.2% across the entire sample.

Statistical significance: p < 10⁻⁹.

[Storm L, Tressoldi PE, Di Risio L. (2010). Meta-Analysis of Free-Response Studies, 1992-2008: Assessing the Noise Reduction Model in Parapsychology. Psychological Bulletin, 136(4), 471-485.]

The 2010 paper attracted immediate counter-publication from Wagenmakers, Wetzels, Borsboom, and van der Maas, also in Psychological Bulletin, who re-analyzed the same dataset using Bayesian methods and reached the opposite conclusion: no evidence of psi. The Storm group then published a rebuttal showing the Wagenmakers re-analysis depended on prior assumptions the original data did not warrant.

[Wagenmakers EJ, Wetzels R, Borsboom D, van der Maas HLJ. (2011). Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi. Journal of Personality and Social Psychology, 100(3), 426-432.]

This is what the live methodological debate looks like in 2026. Two competent statistical groups, the same data, opposite conclusions. The disagreement is technical — about prior probability assumptions in Bayesian inference — and remains unresolved.

What does the Global Consciousness Project measure?

The Global Consciousness Project (GCP) is a 28-year network of 70+ random number generators placed at locations around the world, continuously running. The hypothesis: during global events that produce intense, coherent collective attention (the 9/11 attacks, the death of Princess Diana, the New Year's transition), the network's output should deviate from chance in detectable ways.

The project, coordinated by Roger Nelson (a former PEAR researcher) at Princeton, has published cumulative analyses in journals including the Journal of Scientific Exploration and Foundations of Physics. The cumulative z-score across more than 500 pre-registered global events is reported as 7.31 — corresponding to a probability below 10⁻¹² of arising by chance.

[Nelson R, Bancel P. (2011). Effects of Mass Consciousness: Changes in Random Data During Global Events. Journal of Scientific Exploration, 25(3), 327-350.]
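
The conversion from the reported z-score to that probability is a one-liner: the one-tailed p-value is the standard-normal survival function, expressible via the complementary error function. A quick check that z = 7.31 is indeed consistent with the "p < 10⁻¹²" figure:

```python
import math

def z_to_one_tailed_p(z: float) -> float:
    """One-tailed p-value for a standard-normal z-score:
    P(Z >= z) = erfc(z / sqrt(2)) / 2."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p = z_to_one_tailed_p(7.31)  # the GCP's reported cumulative z-score
print(f"p = {p:.2e}")        # on the order of 1e-13, i.e. below 1e-12
```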

Critics of the GCP focus on the protocol's vulnerability to exploratory analysis — the concern that events are selected post-hoc and that the cumulative significance reflects selection bias rather than a real effect. The project's response is that events have been pre-registered with timestamps and selection criteria since 1998, and that the cumulative significance survives even after methodological audits.

Whether the GCP data reflects a real effect or a long-running statistical artifact is, again, the live scientific question — not a settled one.

What did the Grinberg transferred-potential experiment add to the corpus?

The Storm meta-analysis and the Global Consciousness Project both deal with statistical correlations across large samples. The 1994 Grinberg-Zylberbaum paper in Physics Essays tested something narrower and more mechanical: whether two human brains, isolated in Faraday cages, would show correlated EEG responses when one received a visual stimulus and the other received none. The result — significant at p < 0.005 — sits inside the same anomalous corpus that the 2025 Frontiers in Psychology review described as "anomalous but consistent." For the full biographical context, the 1994 disappearance investigation, and the reliable sources, read The Scientist Who Proved Telepathy Was Real. Then Vanished.

How did the CIA Stargate program perform?

The CIA Stargate program ran from 1972 to 1995 under five sequential code names (SCANATE, GRILL FLAME, CENTER LANE, SUN STREAK, STARGATE), spent approximately $20 million in U.S. taxpayer money, and produced 450+ documented remote viewing missions.

The official termination in 1995 followed an American Institutes for Research review commissioned by the CIA. The AIR review's conclusion: insufficient evidence of operational utility.

The same archive of declassified documents shows:

  • Joseph McMoneagle, designated "Remote Viewer #001," received the Legion of Merit in 1984 for intelligence "unavailable from any other source"
  • 19 U.S. intelligence agencies used the program's services, with an 89.5% return rate
  • Statistician Jessica Utts (UC Davis, later President of the American Statistical Association) wrote one of the two evaluation reports and concluded the effect was real
  • The closure decision was made three months before the AIR evaluation results were received

[Utts J. (1996). An Assessment of the Evidence for Psychic Functioning. Journal of Scientific Exploration, 10(1), 3-30.]

We have a separate full deep-dive on the Stargate program at /research/government-programs/stargate-project. The short version: the program's own paperwork contradicts the public termination narrative. The agency funded something that "didn't work" for 23 years and gave its highest honors to a man for doing it.

Why does the mainstream rarely cite these numbers?

Because the framing question matters more than the data. When the question is "do you accept the existence of psi?" — a yes/no metaphysical commitment — the data is irrelevant because it cannot answer that question. When the question is "what does the data actually show?" — a measurable empirical inquiry — the data is significant and the field is contested.

The mainstream science press generally chooses the first framing. The reasons are mixed:

  • Career incentives. Publishing on psi research is professionally costly for credentialed researchers. The Wagenmakers re-analysis is the rare exception that benefits a critic's career.
  • Audience expectation. General-science readers expect parapsychology to be debunked. Coverage that complicates that narrative underperforms.
  • The replication crisis context. The 2011-2016 psychology replication crisis made all psychology findings — including replicated psi — easier to dismiss as Type-I error.
  • Genuine methodological concerns. The field's per-trial effect sizes are small and meta-analyses are sensitive to publication bias.

What the framing avoids is the harder question: if the cumulative significance is real, what is it measuring? The field has not produced a mechanism. The data shows correlations across distance and (in some PEAR trials) across time. No known physics explains either. The honest scientific position is "we don't know what this is."

The dishonest positions are "this is settled debunked pseudoscience" (the data says otherwise) and "this is proven mind-over-matter" (the field has not produced a mechanism, only correlations).

What is the investigator-lane stance on psi research?

Black Swan Project takes the investigator stance — not believer, not skeptic. Three positions on what the data does and doesn't show:

1. The cumulative significance is real. PEAR's 28-year p < 10⁻¹⁰ is not an artifact of a single overconfident lab. Storm 2010's meta-analytic p < 10⁻⁹ across 30 Ganzfeld studies is not a fringe outlier. The CIA's 89.5% agency return rate is not consistent with "this doesn't work." Mainstream coverage that calls the field uniformly debunked is contradicting the published primary sources.

2. The mechanism is unknown. No physical theory accounts for distance-independent information transfer or for human intention modulating quantum-mechanical noise. Until a mechanism is proposed and tested, the field is in the position 19th-century physicists were in with electromagnetism before Maxwell — there is an effect, the math describes it, no one knows what it is.

3. The reasonable position is curiosity. If you started from priors that say "the universe doesn't work this way," you should update slightly toward "the universe might work in ways I don't understand." If you started from priors that say "everything in mainstream science is suppressed truth," you should update slightly toward "some of the data is messier than the believer narrative claims."

The point of investigator-grade research is to make both starting positions less comfortable.

How does this connect to the field report?

We ran a 14-day self-experiment that we publish as the Alignment Protocol — a documented attempt to test whether the kind of intention-driven correlation the PEAR lab measured at p < 10⁻¹⁰ across 1.7 million trials shows up in everyday life under a specific protocol.

The result was 12 of 12 manifestation targets under our coding rules, 9 of 12 under stricter coding rules. We publish both numbers. We publish the misses. We publish the three judgment calls where our honesty mattered most.

This is what investigator-lane n=1 looks like. It does not prove anything. It is one data point in a tradition the PEAR lab and the Stargate program established with 1,700,000 trials and 450+ missions respectively. The field report's value is not statistical — it is procedural. It shows what one person, running the strictest protocol he could design, found and didn't find when he tested the question on himself.

If you want the report: Read the preview.

This pillar links to four cluster articles, each going deeper on one piece of the evidence base.

We also have a deep-dive on the CIA Stargate program under the government-programs pillar.



/// PUBLISHED 2026-05-11

/// PART OF Phenomena CLUSTER