
What the PEAR Lab Actually Found: 28 Years of REG Data

Alejandro del Palacio · 6 min read

Robert Jahn was Dean of the School of Engineering and Applied Science at Princeton when he founded what became the most rigorous parapsychology research program in U.S. academic history.

For 28 years, his lab operated in the basement of Princeton's engineering school. They ran 1.7 million experimental trials. They published cumulative analyses in peer-reviewed journals. Their critics audited their methodology. Their cumulative statistical significance crossed p < 10⁻¹⁰.

This article is the deep dive on what they actually did, what they actually found, and what the audit literature says about both.

What is a Random Event Generator?

A Random Event Generator (REG) is a hardware device that produces a continuous stream of binary digits (1s and 0s) driven by an unpredictable physical source. The PEAR lab's REGs used electronic noise in a solid-state component, fluctuations that are quantum-mechanically indeterminate, as the entropy source. Unlike software pseudo-random generators, an REG's output cannot be predicted from any prior state.

The expected baseline: exactly 50% ones, 50% zeros, with statistical fluctuations governed by the binomial distribution.
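In a single 200-bit sample (the unit PEAR used, described below), that means an expected 100 ones with a standard deviation of √(200 × 0.25) ≈ 7.1, so individual samples routinely land several counts away from 100 by chance alone.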

The experimental question: can a human subject, sitting in front of an REG with an instruction to "intend" higher or lower output, produce a measurable deviation from the expected baseline?
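To make that question concrete, here is a minimal simulation sketch (my own illustration, assuming nothing about PEAR's actual hardware or software): generate samples the way a fair REG would, then ask whether the count of 1s deviates from chance by more than binomial fluctuation allows.

```python
import numpy as np

rng = np.random.default_rng(42)

def reg_run(n_samples=1000, bits_per_sample=200, p_one=0.5):
    """Simulate one REG run: n_samples draws of bits_per_sample bits each.
    p_one = 0.5 models the null; nudge it upward to model a hypothetical effect."""
    counts = rng.binomial(bits_per_sample, p_one, size=n_samples)
    total_bits = n_samples * bits_per_sample
    deviation = counts.sum() - total_bits * 0.5        # excess 1s over chance
    z = deviation / np.sqrt(total_bits * 0.25)         # binomial SD = sqrt(N*p*(1-p))
    return deviation, z

print(reg_run())                # null run: z hovers near 0
print(reg_run(p_one=0.5001))    # a 1-in-10,000 shift, the magnitude PEAR reported
```

Note what the second line of output shows: at this run length a shift of 1 in 10,000 is statistically invisible, which is the crux of the effect-size discussion below.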

What was the PEAR experimental protocol?

The protocol was rigorously controlled. Each session consisted of three conditions in randomized order:

  1. HIGH — subject attempts to influence output toward more 1s
  2. LOW — subject attempts to influence output toward more 0s
  3. BASELINE — subject does not attempt to influence anything

The subject sat in a comfortable chair facing a display screen showing the running output. The REG produced 200-bit samples once per second, and the cumulative deviation was tracked across many samples per run.

A typical session involved thousands of samples per condition, and every subject's data went into a single database that eventually contained 1.7 million trials from 1,100 subjects.

[Jahn RG, Dunne BJ. (2005). The PEAR Proposition. Journal of Scientific Exploration, 19(2), 195-245.]
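A toy version of that session structure, again a sketch of mine rather than PEAR's software; the run length is illustrative and no effect is injected, so all three conditions should wander around zero.

```python
import numpy as np

rng = np.random.default_rng(7)

def tripolar_session(n_samples=1000, bits=200):
    """Sketch of one session: HIGH, LOW, and BASELINE runs on a simulated
    fair REG, tracking the cumulative deviation of 1s from chance."""
    final_deviation = {}
    for condition in ("HIGH", "LOW", "BASELINE"):
        counts = rng.binomial(bits, 0.5, size=n_samples)   # one 200-bit sample per tick
        cumulative = np.cumsum(counts - bits / 2)          # running deviation from chance
        final_deviation[condition] = cumulative[-1]
    return final_deviation

print(tripolar_session())
```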

What was the per-trial effect size?

Small.

The average per-trial deviation from chance was approximately 1 part in 10,000 — meaning that across all sessions, the HIGH condition produced about 0.01% more 1s than chance would predict, and the LOW condition produced about 0.01% more 0s.

This is the criticism that mainstream coverage correctly emphasizes: a 0.01% effect is barely measurable in a single session, and enormous sample sizes are needed to detect it at all. The PEAR lab argued, correctly, that this is exactly why they collected 1.7 million trials.
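To see why the sample sizes had to be enormous, here is a rough back-of-envelope power calculation (my arithmetic, not PEAR's published analysis): the expected z-score of a constant per-bit shift δ over N bits is roughly δ·√N / 0.5, so solving for N gives the number of bits needed to reach a given z.

```python
delta = 1e-4      # hypothesized per-bit shift: 1 part in 10,000
sigma_bit = 0.5   # standard deviation of a single fair bit

for z_target in (2.0, 3.0, 5.0):
    n_bits = (z_target * sigma_bit / delta) ** 2
    n_samples = n_bits / 200                     # 200-bit samples
    print(f"z = {z_target}: ~{n_bits:.1e} bits (~{n_samples:,.0f} 200-bit samples)")
# roughly 0.5 million, 1.1 million, and 3.1 million 200-bit samples respectively
```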

What was the cumulative significance?

Across 28 years of replicated trials, the cumulative statistical significance exceeded p < 10⁻¹⁰.

That figure means the probability of seeing a deviation at least this large from chance alone is below one in ten billion. By way of comparison, the discovery of the Higgs boson at CERN used a 5-sigma threshold, corresponding to a one-tailed p of roughly 3 × 10⁻⁷. PEAR's cumulative result exceeds that threshold by more than three orders of magnitude.
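For readers who prefer the sigma language of particle physics, a quick conversion sketch (2.87 × 10⁻⁷ is the standard one-tailed tail probability for 5 sigma):

```python
from scipy.stats import norm

# Translate one-tailed p-values into Gaussian sigma equivalents.
for label, p in [("PEAR cumulative", 1e-10), ("5-sigma convention", 2.87e-7)]:
    print(f"{label}: p = {p:.2e} -> {norm.isf(p):.2f} sigma")
# PEAR cumulative: p = 1.00e-10 -> 6.36 sigma
# 5-sigma convention: p = 2.87e-07 -> 5.00 sigma
```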

[Jahn RG, Dunne BJ, Nelson RD, Dobyns YH, Bradish GJ. (1997). Correlations of Random Binary Sequences with Pre-stated Operator Intention. Journal of Scientific Exploration, 11(3), 345-367.]

What did the methodological audits find?

The PEAR lab opened its protocol to external review and published methodological papers explicitly responding to criticism. The principal audit findings:

1. Cumulative significance is robust to reasonable analytical choices. Auditors re-ran the statistics with different baselines, different chunking of the data, and different exclusion criteria. The cumulative significance remained well below 10⁻⁴ across these variations.

2. Operator effects exist. Different subjects produced different effect sizes, and a small number of "high-yield" operators produced disproportionate contributions to the cumulative effect. This is a fact that PEAR explicitly reported and that critics use to question whether the effect is replicable across populations.

3. Optional stopping. Critics raised the possibility that sessions could have been cut short to lock in favorable results; a short simulation of why that would matter follows this list. The lab's response was that the protocol pre-specified fixed sample sizes per condition, and that the published analyses included all collected data.

4. The "file drawer" problem. Some critics argued that unpublished null results could erase the cumulative significance. PEAR's response was that they published their cumulative database including null results, and the cumulative significance survives inclusion of all reported data.

[Bösch H, Steinkamp F, Boller E. (2006). Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators. Psychological Bulletin, 132(4), 497-523.]

The Bösch, Steinkamp, and Boller 2006 meta-analysis in Psychological Bulletin — the most rigorous independent audit of REG psychokinesis research — concluded that the cumulative effect was real but smaller than PEAR's own analyses suggested, and that publication bias could plausibly account for a substantial portion of the effect.

Can we say PEAR proved psychokinesis?

No. PEAR proved a cumulative statistical anomaly. The interpretation of that anomaly is the open scientific question.

What we can say:

  • A statistically significant deviation from chance occurred across 1.7 million trials
  • The deviation correlated with operator intention condition
  • Standard methodological audits did not eliminate the deviation
  • The deviation has been partially replicated by other groups (Bösch et al. 2006)

What we cannot say:

  • That the mechanism is "consciousness affecting physical systems"
  • That the result rules out subtle experimenter effects we have not yet identified
  • That the effect would scale outside the lab to practical applications

The honest investigator-lane position: PEAR produced an anomaly. The anomaly survives audit. The interpretation of the anomaly is the actual scientific question — and remains contested.

How does this connect to the broader pillar?

PEAR is one piece of the psi research evidence base. The 2010 Storm et al. Ganzfeld meta-analysis (aggregating 30 telepathy studies) reached similar cumulative significance via a different protocol. The Global Consciousness Project (GCP) extended the REG approach to a worldwide network. The CIA Stargate program used remote viewing under operational conditions.

All four programs converge on the same uncomfortable conclusion: something is happening at statistical significance levels that cannot be dismissed as noise, and no accepted physical mechanism explains what that something is.

For the full evidence base, see the Psi Research Evidence pillar. For the related pillar covering the field experiment I ran on myself, see the Alignment Protocol preview.



/// PUBLISHED 2026-05-11

/// PART OF Phenomena CLUSTER