PEAR yet again: the theory behind paranormal gibberish (repost from blogger)

This is a repost from GM/BM's old home; the original article appeared
[here][old]. I’m reposting because someone is attempting to respond to this
article, and I’d rather keep all of the ongoing discussions in one place. I also
think it’s a pretty good article, which some of the newer readers here may not
have seen. As usual for my reposts, I’ve fixed the formatting and made a few
minor changes. This article was originally posted on May 29.
I’ve been looking at PEAR again. I know it may seem sort of like beating a dead
horse, but PEAR is, I think, something special in its way: it’s a group of
people who pretend to use science and mathematics in order to support all sorts
of altie-woo gibberish. This makes them, to me, particularly important targets
for skeptics: if they were legit, and they were getting the kinds of results
that they present, they’d be demonstrating something fascinating and important.
But they’re not: they’re trying to use the appearance of science to undermine
science. And they’re incredibly popular among various kinds of crackpottery:
what led me back to them this time is the fact that I found them cited as a
supporting reference in numerous places:
1. Two different “UFOlogy” websites;
2. Eric Julien’s dream-prophecy of a disastrous comet impact on earth (which was supposed to have happened back in May; he’s since taken credit for *averting* said comet strike by raising consciousness);
3. Three different websites where psychics take money in exchange for psychic predictions or psychic healing;
4. Two homeopathy information sites;
5. The house of thoth, a general clearinghouse site for everything wacky.
Anyway, while looking at the stuff that all of these wacko sites cited from
PEAR, I came across some PEAR work which isn’t just a rehash of the random
number generator nonsense, but instead an attempt to define, in mathematical
terms, what “paranormal” events are, and what they mean.
It’s quite different from their other junk; and it’s a really great example of
one of the common ways that pseudo-scientists misuse math. The paper is called
“M*: Vector Representation of the Subliminal Seed Regime of M5”, and you can
find it [here][pear-thoth].
The abstract gives you a pretty good idea of what’s coming:
>A supplement to the M5 model of mind/matter interactions is proposed
>wherein the subliminal seed space that undergirds tangible reality and
>conscious experience is characterized by an array of complex vectors whose
>components embody the pre-objective and pre-subjective aspects of their
>interactions. Elementary algebraic arguments then predict that the degree of
>anomalous correlation between the emergent conscious experiences and the
>corresponding tangible events depends only on the alignment of these
>interacting vectors, i.e., on the correspondence of the ratios of their
>individual “hard” and “soft” coordinates. This in turn suggests a
>subconscious alignment strategy based on strong need, desire, or shared purpose
>that is consistent with empirical experience. More sophisticated versions of
>the model could readily be pursued, but the essence of the correlation process
>seems rudimentary.
So, if we strip out the obfuscation, what does this actually say?
Umm… “*babble babble* complex vectors *babble babble babble* algebra *babble babble* ratios *babble babble* correlation *babble babble*.”
Seriously: that’s a pretty good paraphrase. That entire paragraph is *meaningless*. It’s a bunch of nonsense mixed in with a couple of pseudo-mathematical terms in order to make it sound scientific. There is *no* actual content in that abstract. It reads like a computer-generated paper from
[SCIgen][scigen].
(For contrast, here’s a SCIgen-generated abstract: “The simulation of randomized algorithms has deployed model checking, and current trends suggest that the evaluation of SMPs will soon emerge. In fact, few statisticians would disagree with the refinement of Byzantine fault tolerance. We confirm that although multicast systems [16] can be made homogeneous, omniscient, and autonomous, the acclaimed low-energy algorithm for the improvement of DHCP [34] is recursively enumerable.”)
Ok, so the abstract is the pits. To be honest, a *lot* of decent technical papers have really lousy abstracts. So let’s dive in, and look at the actual body of the paper, and see if it improves at all.
They start by trying to explain just what their basic conceptual model is. According to the authors, the world is fundamentally built on consciousness; most events start in a pre-conscious realm of ideas called the “*seed region*”; and as they emerge from the seed region into experienced reality, they manifest in two different ways: as “events” in the material domain, and as “experiences” or “perceptions” in the mental domain. They then claim that in order for something from the seed region to manifest, it requires an interaction of at least two seeds.
Now, they try to start using pseudo-math to justify their gibberish.
Suppose we have two of these seed beasties, S1 and S2. Now, suppose we have a mathematical representation of them as “vectors”. They write that as [S].
A “normal” event, according to them, is one where the events combine in what they call a “linear” way (scare-quotes theirs): [S1] + [S2] = [S1 + S2]. On the other hand, events that are perceived as anomalous are events for which that’s not true: [S1] + [S2] ≠ [S1 + S2].
We’re already well into the land of pretend mathematics here. We have two non-quantifiable “seeds”, but somehow we can add them together. We’re pulling in group-theory-style concepts and notation, and applying them to things that have none of the prerequisites for those concepts to be meaningful.
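Just for reference (this is standard material, not anything from the PEAR paper): before notation like [S1] + [S2] carries any weight, the “+” has to satisfy something like the usual vector-space axioms, and nothing of the sort is ever established for these “seeds”:

```latex
% Minimal requirements for "+" on a set V to behave like vector addition.
% None of these are ever defined, let alone verified, for PEAR's "seeds".
\begin{align*}
&\text{Closure:}       && u + v \in V \quad \text{for all } u, v \in V\\
&\text{Associativity:} && (u + v) + w = u + (v + w)\\
&\text{Commutativity:} && u + v = v + u\\
&\text{Identity:}      && \text{there is a } 0 \in V \text{ with } v + 0 = v\\
&\text{Inverses:}      && \text{for every } v \text{ there is a } {-v} \text{ with } v + (-v) = 0
\end{align*}
```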
But let’s skip past that for a moment, because it gets infinitely sillier shortly.
They draw a Cartesian graph with four quadrants, and label them (going clockwise from the first quadrant): T (for tangible), I (for intangible – aka, not observable in tangible reality), U (for unconscious), and C (for conscious). So the upper half is what they consider to be observable, and the bottom half is non-observable; the left side is mind and the right side is matter. Further, they have a notion of “hard” and “soft”: objective is hard, and subjective is soft. They proceed to give a list of ridiculous pairs of words which they claim are different ways of expressing the fundamental “hard/soft” distinction, including “masculine/feminine”, “particulate/wavelike”, “words/music”, and “yang/yin”.
Once they’ve gotten here, they get to my all-time favorite PEAR statement; one which is actually astonishingly obvious about what they’re really up to:
>It is then presumed that if we appropriate and pursue some established
>mathematical formalism for representing such components and their interactions,
>the analytical results may retain some metaphoric relevance for the emergence
>of anomalous mind/matter manifestations.
I love the amount of hedging involved in that sentence! And the admission that
they’re just “appropriating” a mathematical formalism for no other purpose than
to “retain some metaphoric relevance”. I think that an honest translation of
that sentence into non-obfuscatory English is: “If we wrap this all up in
mathematical symbols, we can make it look as if this might be real science”.
So, they then proceed to say that they can represent the seeds as complex numbers: S = s + iσ. But “s” and “σ” can’t just be simply “pre-material” and “pre-mental”, because that would be too simple. Instead, they’re “hard” and “soft”; even though we’ve just gone through the definition which categorized hard/soft as a better characterization of material and mental. Oh, and they have to make sure that this looks sufficiently mathematical, so instead of just saying that it’s a complex number, they present it in *both* rectangular and polar coordinates, with the equation for converting between the two notations written out inside the same definition area. No good reason for that, other than to have something more impressive looking.
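For the record, the conversion they pad that definition with is nothing more than the standard rectangular/polar relationship for any complex number (the magnitude and angle symbols here are the usual ones, not necessarily the paper’s):

```latex
% Rectangular and polar forms of the same complex "seed".
S = s + i\sigma = |S|\,e^{i\theta},
\qquad |S| = \sqrt{s^2 + \sigma^2},
\qquad \theta = \arctan\!\left(\frac{\sigma}{s}\right)
```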
Then they want to define how these “seeds” can propagate up from the very lowest reaches of their non-observable region into actual observable events, and for no particular reason, they decide to use the conjugate product equation randomly selected from quantum physics. So they take a random pair of seeds (remember that they claim that events proceed from a combination of at least two seeds), and add them up. They claim that the combined seed is just the normal vector addition (which they proceed to expand in the most complex-looking way possible); and they also take the “conjugate products” and add them up (again in the most verbose and obfuscatory way possible); and then take the difference between the two sums. At this point, they reveal that for some reason, they think that the simple vector addition corresponds to “[S1] + [S2]” from earlier; and the conjugate is “[S1+S2]”. No reason for this correspondence is given; no reason is given for why these should be equal for “non-anomalous” events; it’s just obviously the right thing to do according to them. And then, of course, they repeat the whole thing in polar notation.
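If I’m reading their construction correctly, the entire “discrepancy” they build the model around is the elementary cross term you get when you compare the sum of the squared magnitudes with the squared magnitude of the sum (my notation, reconstructed from their description, with Sk = sk + iσk):

```latex
% Sum of the conjugate products vs. conjugate product of the sum.
|S_1|^2 + |S_2|^2 \;\neq\; |S_1 + S_2|^2
  = |S_1|^2 + |S_2|^2 + 2\,(s_1 s_2 + \sigma_1 \sigma_2)
  = |S_1|^2 + |S_2|^2 + 2\,|S_1|\,|S_2|\cos(\theta_1 - \theta_2)
```

Everything that follows hangs on that cross term, which is just twice the dot product of the two vectors, written two ways.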
It just keeps going like this: randomly pulling equations out of a hat for no particular reason, using them in bizarrely verbose and drawn-out forms, repeating things in different ways for no reason. After babbling onward about these sums, they say that “Also to be questioned is whether other interaction recipes beyond the simple addition S1,2 = S1 + S2 could profitably be explored.” They suggest multiplication, but decide against it just because it doesn’t produce the results that they want. Seriously! In their words, “but we show that this doesn’t generate similar non-linearities”: that is, they want to see “non-linearities” in the randomly assembled equations, and since multiplying doesn’t have that, it’s no good to them.
Finally, we’re winding down and getting to the end: the “summary”. (I was taught that when you write a technical paper, the summary or conclusion section should be short and sweet. For them, it’s two full pages of tight text.) They proceed to restate things, complete with repeating the gibberish equations in yet another, slightly different form. And then they really piss me off. Statement six of their summary says “Elementary complex algebra then predicts babble babble babble”. Elementary complex algebra “predicts” no such thing. There is no real algebra here, and nothing about algebra would remotely suggest anything like what they’re claiming. It’s just that this is a key step in their reasoning chain, and they absolutely cannot support it in any meaningful way. So they mask it up in pseudo-mathematical babble, and claim that the mathematics provides the link that they want, even though it doesn’t. They’re trying to use the credibility and robustness of mathematics to keep their nonsense above water, even though there’s nothing remotely mathematical about it.
They keep going with the nonsense math: they claim that the key to larger anomalous effects resides in “better alignment” of the interacting seed vectors (because the closer the two vectors are, in their framework, the larger the discrepancy between their two ways of “adding” vectors); and that alignments are driven by “personal need or desire”. And it goes downhill from there.
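To see how thin that “alignment” claim is, here’s a toy sketch of the arithmetic it rests on. The magnitudes, angles, and function name below are invented purely for illustration, since the paper never ties its “seeds” to anything measurable:

```python
import cmath
import math

def discrepancy(s1: complex, s2: complex) -> float:
    """Difference between |s1 + s2|^2 and |s1|^2 + |s2|^2.

    This cross term is the entirety of the "anomalous" non-linearity.
    """
    return abs(s1 + s2) ** 2 - (abs(s1) ** 2 + abs(s2) ** 2)

# Hold the magnitudes fixed at 1 and vary only the angle between the two "seeds".
s1 = cmath.rect(1.0, 0.0)  # unit vector along the "hard" axis
for degrees in (0, 45, 90, 135, 180):
    s2 = cmath.rect(1.0, math.radians(degrees))
    print(f"angle {degrees:3d} deg: discrepancy = {discrepancy(s1, s2):+.3f}")

# Prints +2.000 at 0 degrees (perfect "alignment"), 0.000 at 90, -2.000 at 180:
# the grand "prediction" is nothing but 2*cos(theta1 - theta2).
```

That’s the whole engine: two arrows whose coordinates mean nothing, and the cosine of the angle between them.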
This is really wretched stuff. To me, it’s definitely the most offensive of the PEAR papers. The other PEAR stuff I’ve seen is abused statistics from experiments. This is much more fundamental – instead of just using sampling errors to support their outcome (which is, potentially, explainable as incompetence on the part of the researchers), this is clear, deliberate, and fundamental misuse of mathematics in order to lend credibility to nonsense.
[old]: http://goodmath.blogspot.com/2006/05/pear-yet-again-theory-behind.html
[pear-thoth]: http://goodmath.blogspot.com/2006/05/pear-yet-again-theory-behind.html
[scigen]: http://pdos.csail.mit.edu/scigen/

Comments on “PEAR yet again: the theory behind paranormal gibberish (repost from blogger)”

  1. gk

    It’s funny. PEAR seems to be trying to quantify Kant’s analytic-synthetic/a priori-a posteriori distinction. Specifically, that we have a fundamental a priori knowledge of spacetime (PEAR’s seed region). And perceiving is an active process where we “form” the observables into our spacetime. Therefore, we need both the object and spacetime together for any type of perception (their seeds).
    In addition, along with this a priori knowledge comes algebra and geometry (which Kant believes is the synthetic a priori), which PEAR is using to describe what does and does not fit into their spacetime. It’s like using a word to define itself.
    Further, Kant does write that there are things that lie beyond the region of our comprehension, i.e., beyond our spacetime, that we do recognize, namely G-d (Kant was a religious man). I equate these to PEAR’s mind/matter anomalies. BUT (a big BUT), as they lie beyond spacetime, they cannot be quantified by definition. Chumps.

  2. Michael

    I visited PEAR once or twice. I saw those charts. So it is just BS? I thought they were affiliated with a good University.
    I know if I was running an experiment that seemed to show some kind of paranormal effects, I wouldn’t trust my results and I would try very hard to discount my initial results by devising different experiments and hoping that I get an expected result: no paranormal effects.
    So in your opinion they didn’t do this? In your opinion they have a badly designed experiment with sampling error and are calling it a real effect?

  3. Mark C. Chu-Carroll

    Michael:
    Yes, it’s just BS. Just read the articles, look at their own descriptions of the experiments, look at their own admissions about the analysis of their data.
    They’re at a good university, yes. And you know what that means? Nothing. In research, the data is everything, and their data doesn’t stand up. And they *know* it. Take a look at their “12 year retrospective” paper. They *admit* that the data is not statistically significant – they just try to bury that admission deep enough that they hope no one will notice it.
    WRT the GCP stuff I discuss in this article, just go back to their paper and look at the method. They pick events; then they pick time windows associated with those events; then they look for samples of the data within that time window that appear to be anomalous. Once you’ve absorbed that, do a quick calculation of the probability of finding a sample, within a time window of the size they use, that displays the degree of skew they consider anomalous. You’ll find that it would be *surprising* if they *didn’t* find anomalies of the sort that they’re looking for. If no “anomalies” of that sort existed, it would in fact be an indication that the random source wasn’t really random.

  4. Anonymous

    I think Chomsky’s “Colorless green ideas sleep furiously”, the canonical example of proper syntax that’s meaningless, is easier to give meaning to.

