rmiller wrote:


At 01:46 PM 6/3/2005, rmiller wrote:
(snip)

What do you mean by "the qualia approach"? Do you mean a sort of dualistic view of the relationship between mind and matter? From the discussion at http://www.fourmilab.ch/rpkp/rhett.html it seems that Sarfatti suggests combining Bohm's interpretation of QM (where particles are guided by a 'pilot wave') with the idea of adding a nonlinear term to the Schrodinger equation (contradicting the existing 'QM math', which is entirely linear). He identifies the pilot wave with "the mind" and has some hand-wavey notion that life involves a self-organizing feedback loop between the pilot wave and the configuration of particles (normally Bohm's interpretation says the configuration of particles has no effect on the pilot wave, but that's where the nonlinear term comes in, I guess). Since Bohm's interpretation is wholly deterministic, I'd think Sarfatti's altered version would be too; the nonlinear term shouldn't change this.


Seems to me you've described the "qualia approach" pretty well.

But why do you call it that? Interpreting the pilot wave as "mind" and the particles guided by it as "matter" seems like just a philosophical add-on. Even if Sarfatti's nonlinear QM theory were correct, and the idea that life depends on a self-organizing feedback loop between the pilot wave and particles could get beyond the purely hand-wavey stage (both of which seem very unlikely), there'd be no obligation to interpret the pilot wave in terms of mind/qualia.




while on the other side we have those like Roger Penrose who (I think) take a mechanical view (microtubules in the brain harbor Bose-Einstein condensates).

Penrose's proposal has nothing to do with consciousness collapsing the wavefunction, he just proposes that when a system in superposition crosses a certain threshold of *mass* (probably the Planck mass), then it collapses automatically. The microtubule idea is more speculative, but he's just suggesting that the brain somehow takes advantage of not-yet-understood quantum gravity effects to go beyond what computers can do, but the collapse of superposed states in the brain would still be gravitationally-induced.

Penrose has a *lot* of things to say about QM---and his new book has the best description of fibre bundles I've seen in quite a while---but no, I didn't mean to suggest his entire argument was based on BECs in the microtubules. I suggested Penrose because his approach seems diametrically opposed to the qualia guys.

But you brought him up in the context of the "consciousness plays a critical role in understanding QM" idea, when Penrose doesn't fall into this camp at all.




  All this model-building (and discussion) is fine, of course, but there are a number of psychological experiments out there that consistently return counterintuitive and heretofore unexplained results. Among them is Helmut Schmidt's "retro-PK" experiment, which consistently returns odd results. The PEAR lab at Princeton has some startling "remote viewing" results, and of course there's Rupert Sheldrake's work. As far as I know, Sheldrake is the only one who has tried to create a model ("morphic resonance"), and most QM folks typically avoid discussing the experiments--except to deride them as nonscientific. I think it may be time to revisit some of these "ESP" experiments to see if the results are telling us something in terms of QM, i.e. decoherence. Changing our assumptions about decoherence and then applying the model to those strange experiments may clarify things.

RM

Here's a skeptical evaluation of some of the ESP experiments you mention:

http://web.archive.org/web/20040603153145/www.btinternet.com/~neuronaut/webtwo_features_psi_two.htm

Anyway, if it were possible for the mind to induce even a slight statistical bias in the probability of a bit flipping 1 or 0, then simply by picking a large enough number of trials it would be possible to very reliably ensure that the majority came up as the number the person was focusing on. So by doing multiple sets with some sufficiently large number N of trials in each set, it would be possible to send something like a 10-digit bit string (for example, if the majority of digits in the first N trials came up 1, the first digit of your 10-digit string would be a 1)--something which would not require a lot of tricky statistical analysis to see was very unlikely to occur by chance. If the "retro-PK" effect you mentioned were real, this could even be used to reliably send information into the past!
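To make the point concrete, here's a quick simulation sketch (the bias value and trial counts are made-up numbers for illustration, not anyone's measured effect size). Even a per-flip bias of 1% becomes a many-sigma excess over 100,000 trials, so a simple majority vote per block decodes the message essentially without error:

```python
import random

def send_bit(bit, n_trials, bias, rng):
    """Simulate n_trials coin flips nudged slightly toward `bit`,
    then report the majority outcome of the block."""
    p_one = 0.5 + bias if bit == 1 else 0.5 - bias
    ones = sum(rng.random() < p_one for _ in range(n_trials))
    return 1 if ones > n_trials / 2 else 0

def send_string(bits, n_trials=100_000, bias=0.01, seed=0):
    """Send a whole bit string, one majority-vote block per bit.
    With n_trials=100_000 and bias=0.01, the expected excess is
    1000 flips against a standard deviation of ~158 (about 6 sigma),
    so each bit comes through reliably."""
    rng = random.Random(seed)
    return [send_bit(b, n_trials, bias, rng) for b in bits]
```

The same arithmetic cuts the other way: with no bias at all, each block is a fair coin, and getting all 10 bits of a chosen string right by luck has probability 2^-10, so a successful transmission is hard to dismiss as chance.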

I spoke with Schmidt in '96. He told me that it is very unlikely that causation can be reversed, but rather that the retro-PK results suggest many worlds.

But that is presumably just his personal intuition, not something that's based on any experimental data (like getting a message from a possible future or alternate world, for example).



When these ESP researchers are able to do a straightforward demonstration like this, that's when I'll start taking these claims seriously; until then, "extraordinary claims require extraordinary evidence".

The "extraordinary claims require extraordinary evidence" rule is good practical guidance, but it's crummy science. Why should new results require an astronomical Z score, when "proven" results need only a Z of 1.96? Think about the poor fellow who discovered that ulcers were caused by Helicobacter pylori---it took him ten years for "science" to take him seriously, and then only after he drank a vial of H. pylori broth himself. Then there's the fellow at U of I (Ames) who believed that Earth is being pummeled by snowballs--as big as houses--from space. He was thoroughly derided (some demanded he be fired) for ten years or so---until a UV satellite saw the snowballs smack into the atmosphere. And not too long ago, there was the cult of anthropologists who believed the New World was populated just 12,000 yrs ago (give or take a thousand or two)--even as the evidence poured in refuting that view.

On the other side, we have guys like Harvard epidemiologist Ken Rothman, who claims disease clusters are not worthy of study (keynote address: CDC "Cluster Buster" conference, Atlanta, 1989). There are other examples, but to insist on some ridiculously high standard of proof for "new" results---extraordinary or otherwise---is merely being conservative to the point of inertia. In Jefferson's time, no reputable scientist believed rocks fell from the sky---they would believe it (maybe) only when one fell into their outstretched hand. The odds against that happening by chance would involve a Z score of about 20 or so.

Those who demand extraordinary evidence should have the courage of their convictions---and announce their criteria for belief in terms of a Z score---or probability. For your convenience, here's a table (calculated using Systat 10; n>500). If you want the entire table, you can find it here <<http://www.amazon.com/exec/obidos/tg/detail/-/1881043193/qid=1117826236/sr=8-4/ref=sr_8_xs_ap_i4_xgl14/002-7145871-9444831?v=glance&s=books&n=507846>>
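(Since the table itself didn't come through, here's a rough way to do the Z-to-probability conversion directly from the standard normal distribution--this is just the textbook formula, not the Systat output:)

```python
from math import erfc, sqrt

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal Z score:
    p = erfc(|z| / sqrt(2))."""
    return erfc(abs(z) / sqrt(2))
```

For Z = 1.96 this gives the familiar p of about 0.05, while a Z of 20 corresponds to a p on the order of 10^-89--which gives a sense of just how far apart the two evidential standards being compared really are.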


The issue is not the Z score in isolation; it's (1) whether we trust that the correct statistical analysis has been done on the data to obtain that Z score (whether reporting bias has been eliminated, for example)--that's why I suggested the test of trying to transmit a 10-digit number using ESP, which would be a lot more transparent--and (2) whether we trust that the possibility of cheating has been kept small enough, which, as the article I linked to suggested, may not have been the case in the PEAR results:




"Suspicions have hardened as sceptics have looked more closely at the fine detail of Jahn's results. Attention has focused on the fact that one of the experimental subjects - believed actually to be a member of the PEAR lab staff - is almost single-handedly responsible for the significant results of the studies. It was noted as long ago as 1985, in a report to the US Army by a fellow parapsychologist, John Palmer of Durham University, North Carolina, that one subject - known as operator 10 - was by far the best performer. This trend has continued. On the most recently available figures, operator 10 has been involved in 15 percent of the 14 million trials yet contributed a full half of the total excess hits. If this person's figures are taken out of the data pool, scoring in the "low intention" condition falls to chance while "high intention" scoring drops close to the .05 boundary considered weakly significant in scientific results.

"Sceptics like James Alcock and Ray Hyman say naturally it is a serious concern that PEAR lab staff have been acting as guinea pigs in their own experiments. But it becomes positively alarming if one of the staff - with intimate knowledge of the data recording and processing procedures - is getting most of the hits.

"Adding fuel to the controversy, sceptics have pointed to the strange behaviour of the baseline condition results. Theoretically, the baseline condition should show the same gently wandering pattern as the calibration trials with occasional excursions into areas of apparent significance. Given the number of baseline trial that have been run, the scoring should have broken through this statistical envelope at least half a dozen times by now. Instead, the baseline result has stuck unnaturally close to a zero deviation from chance.

"In noting these results, Jahn himself has remarked that what makes the situation even odder is that when the baseline statistics and the high and low scores are all added together, the abnormally wide variance of one nicely cancels out the abnormally narrow variance of the other to create a well-behaved Gaussian distribution. It is almost as if the extra hits found in the high and low scores had been taken from what would otherwise have been outliers of the baseline condition.

"Alcock says this is exactly the sort of pattern that might be expected if some sort of data sorting had been going on. If just a handful of extreme baseline trials had been wrongly identified as high or low trials - or, alternatively, a few middling high and low trials had been reassigned to the baseline pool - then it would be easy to create an apparently significant result. Given an effect size of just one in a thousand, it would not take many such swaps to distort Jahn's results. "




Of course, both these concerns would be present in any statistical test, even one involving something like the causes of ulcers as in the quote you posted above. But here I would use a Bayesian approach and say that we should start out with some set of prior probabilities, then update them based on the data. Let's say that in both the tests for ulcer causes and the tests for ESP our estimate of the prior probability of either flawed statistical analysis or cheating on the part of the experimenters is about the same. But based on what we currently know about the way the world works, I'd say the prior probability of ESP existing should be far, far lower than the prior probability that ulcers are caused by bacteria. It would be extremely difficult to integrate ESP into what we currently know about the laws of physics and neurobiology. If someone could propose a reasonable theory of how it might work without throwing everything else we know out the window, that could cause us to revise these priors and see ESP as less of an "extraordinary claim", but I don't know of any good proposals. (Sarfatti's seems totally vague on the precise nature of the feedback loop between the pilot wave and particles, for example, and on how this would relate to ESP phenomena... if he could provide a mathematical model or simulation showing how a simple brain-like system could influence the outcome of random quantum events in the context of his theory, then it'd be a different story.)
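The asymmetry is easy to see with a toy Bayes calculation (the specific prior and likelihood-ratio numbers below are invented purely for illustration). Give both hypotheses evidence of identical strength--say a likelihood ratio of 100 in favor--and the posteriors still end up wildly different because the priors differ by orders of magnitude:

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability of hypothesis H given evidence E,
    where likelihood_ratio = P(E|H) / P(E|not H).
    Works in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

# Same evidence strength, very different starting points (made-up numbers):
p_ulcers = posterior(0.1, 100)    # modest prior -> posterior above 0.9
p_esp = posterior(1e-9, 100)      # tiny prior -> posterior still ~1e-7
```

So the same data that makes "bacteria cause ulcers" nearly certain barely moves the needle on ESP; to make ESP credible you need either evidence with an enormously larger likelihood ratio or a theory that raises the prior.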

Jesse

