RE: A paper by Bas C. van Fraassen

2010-10-23 Thread rmiller
Good article and, as I see it, a barely-concealed challenge to actually come
up with an experiment that will prove or disprove MWI.  I’ve seen a few on
the Los Alamos site from time to time, but nothing that wraps it up.  And
Young’s experiment shouldn’t count.

 

From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Colin Hales
Sent: Saturday, October 23, 2010 4:37 PM
To: everything-list@googlegroups.com
Subject: Re: A paper by Bas C. van Fraassen

 

I am pretty sure that there is a profound misinterpretation and/or
unrecognized presupposition deeply embedded in the kind of discussion that
Van F's paper, your reply, and Bruno's all fit into.  It is so embedded that
there appears to be no way respondents can type words from a perspective in
which the offered view might be wrong, or a sidebar in a bigger but
unrecognised picture.  It is very hard to write anything to combat view X
when the only words that ever get written are those presuming X, and X
assumes a position of explaining everything, yet doesn't.

In the long run I predict that:

1) The 'many worlds' do not exist and are a product of presuppositions about
scientific description not yet understood by the proponents of MWI.
2) QM will be recognized as merely an appearance of the world, not the world
as it is.
3) The universe that exists now is the only universe that exists at the
moment. Despite this, the many worlds are explorable, physically, by
'virtual matter' behaving as if they existed (by an appropriate entity made
of the stuff of our single universe).
4) The MWI has arisen as a result of a human need to make certain
mathematics right, not the need to explain the natural world. This, in the
longer term, will be recognised as a form of religiosity which will be seen
to imbue the physicists of this era, who are preselected by the education
system for prowess in manipulating symbols. The difference between this
behaviour and explaining the natural world is not understood by the
physicists/mathematicians of this era.
(In contrast, I regard myself as a scientist, an explainer of
things-natural, which I claim is different from being a
physicist/mathematician in this strange era we inhabit.)
5) COMP is false: a computer instantiation of rules of how a world
appears to be, and a world, are not the same thing.
6) COMP is false: a computer instantiation of rules of how a brain
appears to be is not a brain.
7) Corollary: a scientific description of how the world appears and a
description of what the world is made of are not the same description,
_and_ a computer instantiation of either is not a world.
8) The issue that causes scientific descriptions (like QM) to be confused
with actual reality is a cultural problem in science, not a technical
problem with what science has/has not discovered.
9) That most of the readers of this list will stare at this list of
statements and be as mystified about how I can possibly think they are right
as I am about those readers' view that they can't be right.

BTW I have a paper coming out in Jan 2011 in 'Journal of Machine
Consciousness' in which I think I may have proved COMP false as a 'law of
nature' ... here in this universe, (or any _actual_ universe, really). At
the least I think the argument is very close, and I have provided the
toolkit for its final demise, which someone else might use to clinch the
deal.

This leads to my final observation:

10) I think the realization of the difference between 'wild-type'
computation (actual natural entities interacting) and 'artificial
computation' (a computer made of the actual entities interacting, waving its
components around in accordance with rules/symbols defined by a third
party) will become mainstream in the long run.
-
It's quite possible that the COMP of the Bruno kind is actually right, but
presented in the wrong epistemic domain and not understood as such. Time
will tell. The way the Bruno-style COMP can be right is for it to make
testable predictions of the outward appearance of the mechanism for delivery
of phenomenal consciousness in brain material.

The distinction between NC (natural computation) and AC (artificial
computation) is the crucial one. I don't think the QM/MWI proponent can
conceive of that distinction. Perhaps it might be helpful if those readers
try to conceive of such a situation, just as an exercise.

cheers
colin hales





Bruno Marchal wrote: 

HI Stephen, 

 

Just a short reply to your post to Colin, and indirectly to your last posts.

 

 

On 22 Oct 2010, at 10:53, Stephen Paul King wrote:





Dear Colin,

 

Let me put you at ease: van Fraassen has sympathies with the
frustrations that you have mentioned here, and I share them as well, but
let's look closely at the point that you make here, as I think it goes
to the heart of several problems related to the notion of an observer.
OTOH, it seems to me that you are suggesting that the objective view is just
a form of consensus 

RE: Many-worlds vs. Many-Minds

2010-02-22 Thread rmiller


-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Charles
Sent: Monday, February 22, 2010 2:20 PM
To: Everything List
Subject: Re: Many-worlds vs. Many-Minds

On Feb 22, 8:12 pm, rmiller rmil...@legis.com wrote:
 From: everything-list@googlegroups.com
 [mailto:everything-l...@googlegroups.com] On Behalf Of Jason Resch
 Sent: Sunday, February 21, 2010 11:38 PM
 To: everything-list@googlegroups.com
 Subject: Re: Many-worlds vs. Many-Minds


Huw Price suggests that our view of causality is strongly influenced
by the way we're embedded / oriented in space-time. He points out in
Time's Arrow and Archimedes' Point that the laws of physics are
almost entirely time-symmetric, with the result that (for example) you
can't tell which way up a Feynman diagram is - either time-orientation
is equally valid. 

Perhaps, but it seems to me that thermodynamics and entropy are the critical
factors.

If we accept what the laws of physics appear to say,
that nature is for the most part indifferent to the direction of time,
this implies that quite a few things are a lot less strange than we
think. Delayed-choice and EPR experiments become trivial to explain,
for example, once we stop thinking of the particles involved as
similar to macroscopic objects with a clear arrow of time, and assume
their state is equally constrained by past and future boundary
conditions (e.g. the emitter and detector). This view is similar to
Cramer's Transactional Interpretation and Wheeler-Feynman Absorber
Theory, but makes them both look unnecessarily complicated, since it
doesn't require any new physics, it merely suggests we take the
existing physics at face value (as Hugh Everett III once did, with
similarly interesting results).

Agree in part. It seems as though the same processes that result in the
laws of thermodynamics/entropy may operate similarly across MW.


Price's view allows us to focus on the real mystery of time, which is
not why it appears to flow in one direction, but why the region of
space-time near the Big Bang was in a state of very low entropy. I
have a suspicion that the answer is something to do with the shape of
space-time (but I haven't yet been able to get my head around how this
connects with breaking eggs and melting ice...) Admittedly that only
pushes the why back a step but that is still progress: rather than
attempting to explain a non-existent preference for one time direction
that we thought was embedded somehow in the laws of physics, we now
need to explain why the universe has a particular boundary condition.
(Possibly Tegmark's MUH comes in here?)

Max Tegmark is one of the big names in this--for good reason. But the guys
who may have first opened the hatch were Univ. of Ariz. astronomer Bill Tifft
(http://en.wikipedia.org/wiki/William_G._Tifft), who discovered evidence
for redshift quantization, and Helsinki physicist Ari Lehto, who first
proposed the concept of 3D time. I think we'll look back on their work as
seminal and as far-reaching as that of the Hunter College guy who (in 1972)
first proposed that the Big Bang started from a vacuum-fluctuation zero event.



Helmut Schmidt's experiments appear to (purportedly) involve
psychokinesis; I have a feeling that I've read various attempts to
debunk these claims in the Skeptical Inquirer, but unfortunately my
subscription lapsed some years ago and I can't recall the details.

Schmidt took a lot of heat for his tendency to frame the experiment in the
worst possible terms. But unlike many others, his experiments can--and
have--been replicated. Problem is, no one is sure what it means to
influence the outcome of an experiment after the fact. 

 It does sound like an extraordinary claim that requires extraordinary
evidence to back it up. The website I looked at was a mass of
statistics that I didn't really follow, unfortunately.

My own rules of thumb: 
1. Follow Fisher: if p < 0.05 (the chance of a random result is 1 in 20), then it's good. And,
2. Avoid meta-analysis.
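
A minimal sketch of rule 1 in Python, with made-up numbers (the hit count, the
number of trials and the chance level are all illustrative, not taken from any
actual run):

    # Exact one-sided binomial check against chance, standard library only.
    from math import comb

    def binom_tail(k, n, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Hypothetical experiment: 120 hits in 200 fifty-fifty guesses.
    pval = binom_tail(120, 200)
    print("good by rule 1" if pval < 0.05 else "not good by rule 1", pval)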


 As for the role of consciousness in all of this, I believe some answers have
 already been found--back in 1978, when Stanford clinical psychologist Ernest
 R. Hilgard discovered the Hidden Observer phenomenon.  Seems there's an
 executive function in each of us that comes to the fore only under
 extremely deep (60+) hypnosis.  His book on the subject, Divided
 Consciousness, is fascinating reading.  Someone familiar with Many Worlds
 theory will come away with the impression that it evolved as a mechanism
 to keep track of the local many-world space we inhabit.

This is a fascinating idea, although Hidden Observer theory is still
controversial (since the experiments involved deep hypnosis,
presumably the results may have been the product of suggestion by the
experimenters?).

There's always that possibility, but much of this apparently has been
double-blinded.
If you can find a Finnish translator, I suggest you look into the work by
the (rather

RE: Many-worlds vs. Many-Minds

2010-02-22 Thread rmiller


-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Charles
Sent: Monday, February 22, 2010 11:43 PM
To: Everything List
Subject: Re: Many-worlds vs. Many-Minds


 Good point, but among the many fates there is always the optimal path.
 Perhaps evolution resulted in a mechanism able to visualize all of the
 possible (MW) paths and choose the most advantageous one? There's
certainly
 enough evidence to suggest that in moments of crisis, some of us are
 afforded advice from an elevated perspective. Maybe what some describe
as
 guardian angels are merely our hidden observers, directing us in a path
 through the multiworlds?  Unfortunately, given the walls between physics,
 philosophy and psychology--it's unlikely that we're going to see any
 unifying theories any time soon.

This is something I'd really like to believe! (I'm trying to write a
story which is based on this sort of premise, as it happens :-) A
colleague of mine in a previous job believed he'd had experiences that
illustrated this principle, and he certainly sounded convincing,
although it was only anecdotal, of course. I certainly think we still have a
lot to learn about the mind and consciousness (always assuming it's
possible to do so).

I think there is the possibility that one can experimentally test whether
consciousness includes links between the real world and the possible
parallel ones: set up a double-blind experiment where 100 subjects are given
5 tries to predict the appearance of any of 20 possible figures. However,
the machine is rigged to show only (say) ten--the rest are actually
impossible to show (and there's no repeat). Run the test, then score how
many subjects predicted the possible figures vs. how many predicted the
figures that weren't possible. The hypothesis: the 'predicted possible'
scores will dominate over the 'impossible' ones--thus suggesting knowledge
of worlds where the potential objects did show up on the screen.
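
A minimal sketch, in Python, of how such a test could be scored (every number,
and the null model of independent guesses against an even split of displayable
vs. non-displayable figures, is an illustrative assumption rather than part of
the proposal above):

    # Score how many predictions named figures the rigged machine could display,
    # then compare against pure guessing with an exact one-sided binomial tail.
    import random
    from math import comb

    N_SUBJECTS, N_TRIES = 100, 5
    FIGURES = range(20)
    POSSIBLE = set(range(10))          # the ten figures the machine can show

    random.seed(0)                     # null-hypothesis subjects: pure guessing
    guesses = [random.choice(FIGURES) for _ in range(N_SUBJECTS * N_TRIES)]
    hits = sum(g in POSSIBLE for g in guesses)
    total = len(guesses)

    p_value = sum(comb(total, k) * 0.5**total for k in range(hits, total + 1))
    print(f"{hits}/{total} predictions named displayable figures, p = {p_value:.3f}")

Replacing the simulated guesses with real subjects' responses, and getting a
low p-value, would be the 'dominate' outcome the hypothesis predicts.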

RM





RE: Many-worlds vs. Many-Minds

2010-02-21 Thread rmiller
To me, the Many-Minds interpretation requires significant changes in frames
of reference.  Suppose you view a particular world out of many as a
2-dimensional surface.  Layers of surfaces comprise the local environment of
a particular section of Many Worlds.  Now think of a behavior pattern as a
set of elements and interactions between elements.  Each of the many worlds
is associated with a snapshot of your individual behavior pattern unique
to that world.  But suppose there are similarities between your behavior
patterns in worlds A, B and C--that set of similar configurations forms what
can be described as a fibre bundle through multiple surfaces.  If so, this
may suggest that at some level consciousness experiences more than one world
surface at a time.

If the ratio of interactions to elements decreases (you enter a darkened
room), then the similarities in the behavior-system config should result in
an increase in the depth of the many world surfaces.  Increase the ratio of
interactions to elements and the complexity of your behavior set
increases--linking you to a particular world surface.  It would seem that,
like relativity, the frame of reference is not absolute--and in fact changes
as rapidly as perception changes.  From others inhabiting the single world
surface, it would appear that the behavior system is changing without cause;
but if we could somehow view the entire group of world surfaces associated
with the core group of a particular behavioral-system configuration, then we
would be more likely to understand the reasons for the behavior.
Unfortunately, any single nervous system has any number of configurations
associated with multiple world layers---and anyone attempting to perceive it
has their own particular sets of configurations (and world layers).

The best we can do is arrive at a general consensus of what is perceived and
agree to label that the local shared reality.  The Copenhagen theorists
infamously suggested that nothing exists unless it is perceived
(measured)--and as far as it goes, that would be absolutely true.  One cannot
perceive what doesn't exist in that world layer.  But if the perception
process naturally involved multiple world layers, then the Copenhagen
Interpretation would be true, but trivially so (as Hawking said about Many
Worlds).  David Deutsch claims we all inhabit multiple worlds, but can't
communicate between the worlds.  I think Many Minds, fibre-bundle topology,
and Neodissociationist (Hilgardian) psychology will prove him wrong.

 

RM 

 

 

From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Jason Resch
Sent: Sunday, February 21, 2010 6:28 PM
To: Everything List
Subject: Many-worlds vs. Many-Minds

 

On the many-worlds FAQ:
http://www.anthropic-principle.com/preprints/manyworlds.html

It states that many-worlds implies that worlds split rather than multiple,
identical, pre-existing worlds differentiate:

Q19 Do worlds differentiate or split?
-
Can we regard the separate worlds that result from a measurement-like
interaction (See What is a measurement?) as having previous existed
distinctly and merely differentiated, rather than the interaction as
having split one world into many? This is definitely not permissable
in many-worlds or any theory of quantum theory consistent with
experiment. Worlds do not exist in a quantum superposition
independently of each other before they decohere or split. The
splitting is a physical process, grounded in the dynamical evolution of
the wave vector, not a matter of philosophical, linguistic or mental
convenience (see Why do worlds split? and When do worlds split?) 
If you try to treat the worlds as pre-existing and separate then the
maths and probabilistic behaviour all comes out wrong.

However, just below, in the Many-minds question:

Q20 What is many-minds?
--
Many-minds proposes, as an extra fundamental axiom, that an infinity of
separate minds or mental states be associated with each single brain
state. When the single physical brain state is split into a quantum
superposition by a measurement (See What is a measurement?) the
associated infinity of minds are thought of as differentiating rather
than splitting. The motivation for this brain-mind dichotomy seems
purely to avoid talk of minds splitting and talk instead about the
differentiation of pre-existing separate mental states.


Based on the answers provided in this FAQ, it sounds as though many-minds
permits differentiation of pre-existing observers whereas many-worlds does
not permit differentiation.  The many-minds interpretation also sounds much
more similar to computationalism as described by Bruno.  Computationalism +
arithmetical realism supposes that all possible computations exist, and
yield all possible observers.  Therefore, the consciousness of these
observers would differentiate, rather than split, since they all existed
beforehand.  What are others' thoughts on 

RE: Why I am I?

2009-12-05 Thread rmiller
 

 

From: John Mikes [mailto:jami...@gmail.com] 
Sent: Saturday, December 05, 2009 10:00 AM
To: everything-list@googlegroups.com
Subject: Re: Why I am I?

 

I admire this list.

 

Somebody asks a silly question and 'we' write hourlong wisdom(s) upon it.
After my deep liking of Stathis's what difference does it make? (or
something to that meaning) - 

my question went a step deeper:

for: How do I know I am I? - (rather: How (Why?) do I think I am I?) 

I ask:  DO I?  (then comes Stathis).  

*

Bruno's 'firmly knowable' arithmetic truth is a true exception: WE (=the
ways humans think) made up what we call 'arithmetic' - the way that WE may
accept it as 'truth'. 

(I am still with David Bohm's 'numbers are a human invention' - I did not
read acceptable (for me) arguments on the numbers-originated everything - in
the wider sense. But this is not this thread).

 

John Mikes 

 

PS now - it seems - I joined the choir. JM


All. . .

Good quote on hourlong wisdoms.  But it's also starting to look like a
lead-in to a documentary on pop songs with a philosophic bent.  The 'who am
I' thing probably applies to a good number of teen songs today, and to a few
of them back in the 70's.  Matter of fact, there seems to be a 30-40-year
cycle to 'who am I?' and philosophy-centered songs, with a few of them
turning up in the thirties.  'What a Difference a Day Makes,' 'Night and
Day,' Days of Future Passed, etc.

 

No WONDER John joined the choir.  Heh.

 

R. Miller

 

 


 

On Sat, Dec 5, 2009 at 9:07 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 

On 05 Dec 2009, at 01:30, Brent Meeker wrote:

 

 


It is also infinitely ignorant and so long as it remains that way it's 
nothing to me. 

 

We are all infinitely ignorant (if only with respect to arithmetical truth).

The universal machine or numbers are not nothing.

 





 This is just another form of the everything universal 
acid.  Just postulate an everything and then we know the something we're 
interested in must be in there somewhere.

 

The everything of comp is just elementary arithmetic. 

It predicts the existence of a level (of isolation or independence,
really) such that many computations interfere, as QM confirms
(retrospectively). It predicts symmetry and a quantum logic of conditionals,
etc.

 

And a cute arithmetical, and testable, interpretation of
Pythagoras-Plato-Plotinus, plus a vast range of mystics and free thinkers.

 

I distinctly and clearly do not follow Tegmark or Bayesian Anthropism on this
point. The physical *laws* have a reason, and we can find them from the
digital hypothesis.

 

Frankly, Monsieur is hard to please ;-)

 









It is not necessary for the reasoning, but there are sequences of thought
experiments which can help you to figure out what it is like to lose all
memories.


I wasn't talking about losing all memories, but about not having 
memory, i.e. not only losing old memories, but also not forming any new 
memories.  A computer without memory can't compute.

 

The computer, or the relative universal machine (relative to another
probable universal machine), only makes higher the relative probability that
the internal consciousness flux will make itself manifest relative to that
probable universal machine/number.

It makes it possible for a universal machine to say hello to itself, or to
another universal machine.

 

 

 

 

Some would say that the point consists in losing, for a short period,  

that human kind of consciousness.

 


But without memory how would one know it had been lost or not?

 

 

That is again the point. There we don't know that.

 

But with salvia divinorum, when you control the dosage and timing well, or
smoke only the leaves, you don't need the amnesia; you can just dissociate
that universal you from your contingent terrestrial you, like taking a big
distance from the contingencies. It is a disappropriation.

 

 

To judge the presence of consciousness is difficult. Recently in France,
after having been considered to be in an unconscious comatose state for 23
years, a woman, with the help of her family, succeeded in convincing her
doctors that she was as conscious as you and me. She was just highly
paralyzed.

 


You mean Rom Houben (a man)?

http://article.wn.com/view/2009/11/25/Is_coma_man_Rom_Houben_REALLY_talking_Mystery_as_critics_sla/

 

 

Well, not really. It was a French woman. In Belgium they have considered her
as fully conscious, and it has been confirmed in the USA. I heard this on the
radio, and a friend confirms it. I will try to find the information. In any
case, I allude to the cases where, by decision, the consciousness is not
considered controversial, like the Ingberg case in France.  Usually, it
means, I think, that the patient can communicate through different speech
therapists. 

 

From the video, I would say Houben seems fully conscious to me.

 

 

 






Experts are casting doubt on claims that a man who doctors 

Re: language, cloning and thought experiments

2009-03-06 Thread rmiller
At 07:31 AM 3/6/2009, Stathis Papaioannou wrote:

2009/3/6 Jack Mallah jackmal...@yahoo.com wrote:

  If you're not worried about the fair trade, 
 then to be consistent you shouldn't be worried 
 about the unfair trade either. In the fair 
 trade, one version of you A disappears 
 overnight, and a new version of you B is 
 created elsewhere in the morning. The unfair 
 trade is the same, except that there is an 
 extra version of you A' which disappears 
 overnight. Now why should the *addition* of 
 another version make you nervous when you wouldn't have been nervous 
 otherwise?
 
 It's not the addition of the other copy that's the problem; it's the loss of it. Losing people is bad.

How would the addition then loss of the extra copy be bad for the
original, or for that matter for the disappearing extra copy, given
that neither copy has any greater claim to being resurrected in the
morning as B?

  That Riker's measure increased is not the 
 important thing here: it is that the two Rikers 
 differentiated. Killing one of them after they 
 had differentiated would be wrong, but killing 
 one of them before they had differentiated would be OK.
 
 That would be equivalent to U = Sum_i Q_i, in which no changes in the wavefunction matter at all, since M_i > 0 for all i no matter what.  I don't think you thought that one through.
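
In symbols (a sketch of how I read the exchange, with M_i the absolute measure
of copy/branch i and Q_i the quality of its experience; the measure-weighted
form is the one the quoted passage appears to defend):

$$ U_{\text{weighted}} = \sum_i M_i \, Q_i \qquad \text{versus} \qquad U_{\text{flat}} = \sum_i Q_i $$

With the flat sum, rescaling every M_i by any positive factor leaves U
unchanged, which is exactly the objection quoted above.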

I don't agree with the way you calculate utility at all. If I got $5
every time I pressed a button which decreased my absolute measure in
the multiverse a millionfold I would happily press the button all day.
It would be easy money and I'd feel exactly the same afterwards, just
$5 richer. On the other hand, if pressing the button decreased the
measure of those versions of me having good experiences by 1% relative
to the versions of me having bad experiences, then I wouldn't press
it, and certainly not repeatedly.


--
Stathis Papaioannou


I've been following this discussion and have a 
comment re absolute measure in the 
multiverse.  The assumption is the same one David 
Deutsch has expressed: other than the 
interference observed in Young's experiment, 
there can be no contact between the parallel universes.

However, suppose our consciousness were essentially a topological object---a
fibre bundle through a manifold of similar universes?  The universes where
things are remarkably different would be ignored by the observer in favor of
the probabilistic picture of reality associated with the median experience
bundle.  Focusing on the volume section of such a distribution might be the
function of an entity such as Hilgard's hidden observer
(http://en.wikipedia.org/wiki/Ernest_Hilgard).

In this model, the platform for consciousness is 
simply a manifold formed by equivalent behavioral 
elements across the multiverse (no pun 
intended.)  Eliminating them one by one would 
result in a commensurate decrease in overall consciousness.

Richard Miller







Re: Probability

2008-11-06 Thread rmiller

At 10:54 AM 11/6/2008, Bruno Marchal wrote:


On 06 Nov 2008, at 02:37, Thomas Laursen wrote:

 
  Hi everyone, I am a complete layman but still got the illusion that
  maybe one day I would be able to understand the probability part of MW
  if explained in a simple way. I know it's the most controversal part
  of MW and that there are several competing understandings of
  probability in MW, but still: none of them make sense to me! If every
  line of history is realized then how can any line of history be more
  probable than any other?

Wolf's answer is probably correct, but certainly incomplete. If you
take QM (without collapse), a probability distribution and measure can be
extracted from Gleason's theorem. The Born rule can be deduced from
first-person indeterminacy, or, in a more politically correct variant,
through decision theory (like Deutsch and Wallace). It is a whole field. My
point on this list consists in showing that if you assume the mechanist
thesis (like Everett did), then even if Deutsch's proposal works, it is not
enough to justify the probabilities. There is a big work which remains
to be done, but it has the advantage of taking into account the
non-communicable part of the experiments (usually known as the
experience). But there are more aberrant histories to evacuate (like
infinities in field theories).
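
For reference, the standard statements behind that sentence (a sketch, not a
reproduction of the derivation being alluded to): Gleason's theorem says that
for a Hilbert space H with dim H >= 3, every countably additive probability
measure mu on the projections P has the form

$$ \mu(P) = \mathrm{Tr}(\rho P) $$

for some density operator rho, and the Born rule it underwrites reads

$$ \Pr(a_i \mid \psi) = |\langle a_i | \psi \rangle|^2 . $$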

Anna Wolf's answer can be wrong in case physics is eventually purely
discrete, in which case probabilities should arise from pure relative
proportion, based on discrete relative partitioning of the multiverse.
I think the comp hyp excludes this, though, as I think M-theory does too,
as far as I grasp anything there. Loop gravity, if literally true,
could lead to such an ultimate discretization, or provide models.

For each position of an electron in your brain there is a (quantum)
computational history going through that state, and probabilities are
eventually all related to self-indiscernibility relations (if that is
English).

Bruno Marchal

http://iridia.ulb.ac.be/~marchal/


First of all, Bruno, that answer seemed Palenesque in the extreme, 
even for someone whose job it is to know this stuff.  The 
correspondent indicated his was a layman's perspective.  How about 
another go at it, without shortcut references to Born, David Deutsch, 
Wallace (who?) et al.?  I'm a firm believer in the adage that one who 
really knows a subject should be able to explain it in such a way 
that a bright ten-year-old can understand the concept.

R Miller












Re: Technical paper on 3-dimensional time

2006-01-26 Thread rmiller

At 01:23 PM 1/23/2006, Johnathan Corgan wrote:

Marc Geddes wrote:
 This is very recent (late 2005):

 http://arxiv.org/abs/quant-ph/0510010

I've read this and the author's prior two papers on multi-dimensional time.

(snip)

All,
Finnish physicist Ari Lehto wrote about 3D time way back in 1990.  I used it
while researching my sci-fi novel Dreamer.  You can download Ari's paper
here---http://psroc.phys.ntu.edu.tw/cjp/v28/215.pdf.  If memory serves, it
was also published in a Spanish physics journal (Madrid).


R. Miller 





Re: Lobian Machine

2005-12-29 Thread rmiller

At 10:33 PM 12/29/2005, George Levy wrote:

Bruno Marchal wrote:

Godel's result, known as Godel's second incompleteness theorem,  is 
that no consistent machine can prove its own consistency:


IF M is consistent then M cannot prove its consistency
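
In standard notation (a sketch; M is any consistent, recursively axiomatizable
theory containing enough arithmetic, and Con(M) abbreviates the arithmetized
consistency statement):

$$ M \nvdash \bot \;\Longrightarrow\; M \nvdash \mathrm{Con}(M), \qquad \mathrm{Con}(M) \equiv \neg\,\mathrm{Prov}_M(\ulcorner 0 = 1 \urcorner). $$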



Bruno,

After I read your email, we had a gathering of family and friends, and my
head was full of the subject of this post. I wanted to test the idea of
Godel's second incompleteness theorem on average people, just to see how
they would respond. I found the right place in the discussion to insert the
paraphrase:


If I am sane, it is impossible to know for sure that I am sane.

This provoked some hilarity, especially with my kids (young adults), who
probably view me as some kind of nutty professor. While this statement is
mathematically true, it was not considered serious by the people I was
talking with. I guess the average human has no doubt about his own sanity.
(But my kids had some doubts about mine.) One way to prove that you are
crazy is to assert that you are sane. This means that the average human is
crazy! :-)


George

Hm. . .


Godel was discussing sharply defined mathematical constructs;
specifically, proof of N requires knowledge of non-N. As I'm sure you
know, sanity is a *legal*, rather than a mathematical, term.  While
this sort of logical fuzziness is probably in keeping with these
times, I doubt that it really applies to Godel's theorem.



RMiller 





Re: contention: theories are incompatible

2005-11-16 Thread rmiller

At 10:14 PM 11/16/2005, James N Rose wrote:

An open hypothesis to list members:

Conservation as a 'fundamental rule of condition'
is incompatible with, and antithetical to, any notions
of many worlds.

Either explicitly excludes and precludes the other;
can't have both and retain a consistent existentialism.

J Rose




I haven't kept up with this thread or that idea, but there is no logical 
reason that a particular attribute such as conservation should be 
universal across a many-world manifold.  First of all, conservation is 
ill-defined; but if precisely defined, it assumes a standard, which implies a 
teleological approach.  And that is one step away from 
scholasticism.  Before you know it, you're quoting Plato.  Mathematically, 
conservation could be defined in terms of least-distance between points, 
but if the individual worlds are constructed with their own unique 
space-time topology (sort of by definition--otherwise each world would be 
the same as the next one) then the term conservation would apply only 
locally.  So, strike two.  In fact, one could describe each world as a 
unique slice intersecting and *forming* the surface of the many-world 
manifold---and each slice could be characterized by its own unique 
matrix.  Postulating the individual world matrix as a set of elements and 
interactions between elements, one could arrive at an ideal (Plato 
again!) in which each individual world is confined to a minimum number of 
elements/interactions.  Fine.  But it would result in each world being 
congruent (homologous) to every other world.  The result would be no 
difference between worlds, but there is not a shred of evidence that the 
configuration works that way at all levels.  For example, you coffee may 
have cooled according to the observations setting forth the laws of 
thermodynamics---and thus predictable, but you sir, probably drove your 
automobile in a very inefficient manner today, going places that you 
shouldn't have gone (you didn't know the queue would be so long, or the 
store would be closed, etc).  Now, if you had known that the store would be 
closed, etc, you would have been a little more efficient, but that would 
require a prescience that you presumably don't have.   Maybe that's why, we 
can never precisely predict where the electron will be, because to do so 
would identify it's proper place---and from there we could then define 
it's ideal position.  That we cannot (as yet) do that suggests that this 
inability to do so is an inherent part of a dynamic system---and is present 
within all intersects of the many world manifold.


Short answer: Conservation is a procedure that produces mental constructs 
of what we think the world is trying to become.  It allows us to fit our 
observations against the image in our minds, but it has its 
limitations.  There is no perfect river.  Or snowstorm.  Or 
politician.  It's all in our minds. 





3D Time

2005-07-18 Thread rmiller

All,
You may find this interesting: http://72.14.207.104/search?q=cache:eVk8dYC9J44J:psroc.phys.ntu.edu.tw/cjp/v28/215.pdf+Lehto+physics+time&hl=en


Back in the early 1990s I corresponded with astronomer William Tifft at the 
U of Ariz. (Flagstaff).  Seems he had possibly found evidence of 
quantization of the red shift.  He put me in touch with physicist Ari Lehto 
who had proposed a theory of dimensional-binding which included the concept 
of 3D time.  I may even still have a copy of his paper.  At time time, he 
was with the Univ of Oulu in Finland, but later transferred to a state 
university somewhere.  Bill Tifft is now Emeritus Professor of Astronomy at 
the U of A. 
http://www.worldandi.com/specialreport/1997/March/Sa16142.htm.  If 
anyone is truly interested in experimental observations that suggest 3D 
time, Ari Lehto and Bill Tifft are the guys to go to.


R. Miller




Re: 3D Time

2005-07-18 Thread rmiller

At 10:04 PM 7/18/2005, rmiller wrote:

All,
You may find this interesting: http://72.14.207.104/search?q=cache:eVk8dYC9J44J:psroc.phys.ntu.edu.tw/cjp/v28/215.pdf+Lehto+physics+time&hl=en


Back in the early 1990s I corresponded with astronomer William Tifft at 
the U of Ariz. (Flagstaff).  Seems he had possibly found evidence of 
quantization of the red shift.  He put me in touch with physicist Ari 
Lehto who had proposed a theory of dimensional-binding which included the 
concept of 3D time.  I may even still have a copy of his paper.  At the 
time, he was with the Univ of Oulu in Finland, but later transferred to a 
state university somewhere.  Bill Tifft is now Emeritus Professor of 
Astronomy at the U of A. 
http://www.worldandi.com/specialreport/1997/March/Sa16142.htm.  If 
anyone is truly interested in experimental observations that suggest 3D 
time, Ari Lehto and Bill Tifft are the guys to go to.


R. Miller


little edit work there. . .it should have been at THE time not at time 
time.  Just washed my hands and I can't do a thing with 'em. . .



RM





Re: Witnesses, Observer Moments and Memories of a Past

2005-06-28 Thread rmiller

At 10:31 PM 6/28/2005, Stephen Paul King wrote:

Dear Lee,

   Are you familiar with any of the experiments that have been performed 
regarding quantum counterfactuals or null measurements? It turns out 
that the fact that some particular measurement *was not made* counts just as 
much, and thus affects the results, as an actual measurement that was made.
Thus information about any occurrence or non-occurrence of a measurement of
a QM system, coded in an OM, will make a difference that cannot be
hand-waved away.

This is why I am introducing the notion of a witness.

   Interleaving...
RM: I assume this is not associated with Feynman's all possible histories 
approach?






Re: Have all possible events occurred?

2005-06-26 Thread rmiller

At 10:22 AM 6/26/2005, Norman Samish wrote:


Stathis Papaioannou writes:  Of course you are right: there is no way to
distinguish the original from the copy, given that the copying process works
as intended. And if you believe that everything possible exists, then there
will always be at least one version of you who will definitely experience
whatever outcome you are leaving to chance.  Probability is just a first
person experience of a universe which is in fact completely deterministic,
because we cannot access the parallel worlds where our copies live, and
because even if we could, we can only experience being one person at a time.
RM Comments: (1) I'll have to disagree with Stathis' (apparent) statement 
that probability is just a first person experience of a universe.
No proper foundation.  (2) Additionally, Stathis assumes that we cannot 
access the parallel worlds where our copies live.  Since no one
can even define consciousness, or isolate precisely where memory is located 
(or even what it is), there is no way we can preclude simultaneous
experience.  The best we can say is, we simply don't know.  And, (3), for 
the same reasons, we cannot say that we experience being one person
at a time.  There are numerous psychological models---neodissociationism 
being just one---that posit a personality made up of multiple modules, all 
interacting (somewhat) under the guidance of an executive, Hilgard's 
hidden observer.  Unless and until we fully understand how consciousness 
is linked to personality, we probably shouldn't preclude multiple or 
simultaneous experience. 





Re: Hilgard's hidden observer

2005-06-26 Thread rmiller

At 03:44 PM 6/26/2005, Stephen Paul King wrote:

Dear Richard,

   Let me follow up on your suggestion: Assuming a personality is made 
up of multiple modules, does it necessarily follow that a hidden 
observer exists as a separate entity, or could it be that the usual 
single personality results from an entrainment (the modules become like 
oscillators that couple to each other) over the many modules?


Hilgard asked the entity that question more than a few times.  The hidden 
observer came across as quite normal-sounding, reasonable and real.  A 
Finnish psychologist by the name of Reima Kampmann made an extensive study 
of the phenomenon, but unfortunately published little--and what he did 
publish was never translated into any language other than Finnish.  Bottom 
line: the hidden observer seems to be as real as such entities can be--or 
perhaps as real as some of the better business CEOs.  Certainly better than 
some of the former CEOs in the news lately.  Otherwise, it appears that the 
hidden observer phenomenon has not been studied in depth.  I haven't seen
much published research.


   This idea predicts that if this entrainment mode is unstable and other 
metastable entrainment modes are possible, then the personality that emerges 
is unstable; we get the symptoms of multiple-personality disorder, which 
makes personalities analogous to the metastable (phase-space) orbits of a 
chaotic system.
   If no stable or metastable entrainments between the multiple modules 
obtain, we have the symptoms of autism. No?


Autism supposedly has been associated with structural changes based upon CT 
scans.  Beyond that I don't know enough about autism to comment.  Ornstein 
suggests that multiple personalities are rather normal.  On the other hand, 
there are some great books out there about this complex and weird 
phenomenon.  For those who think the brain is just a complex radio set, 
multiple personality disorder can be thought of as merely having a crummy 
tuner (coil?) or a bad antenna.  Melvin Morse, a Seattle pediatrician, 
suggested that there is an antenna of a sort--and it's located in the right 
temporal sulcus.  According to his books, this area also serves as some 
sort of ejection seat for the soul.  I wrote a novel a few years ago that 
hypothesized a specific EEG signal emanating from this area (resolved using 
a standard Fast Fourier Transform circuit).  By monitoring the wavelet 
coming from this area, one could determine the time of exit for an OOBE.
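
A minimal numpy sketch of the kind of monitoring that last sentence imagines
(the sampling rate, the band, and the synthetic signal are all invented for
illustration):

    # Estimate power in one EEG band from a single channel via an FFT.
    import numpy as np

    fs = 256.0                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 4, 1 / fs)                  # four seconds of synthetic data
    signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

    spectrum = np.fft.rfft(signal * np.hanning(t.size))
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    band = (freqs >= 8) & (freqs <= 12)          # e.g. the alpha band
    band_power = np.sum(np.abs(spectrum[band]) ** 2)
    print("band power:", band_power)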


Rich M



Kindest regards,

Stephen

- Original Message - From: rmiller [EMAIL PROTECTED]
To: Norman Samish [EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Sunday, June 26, 2005 3:58 PM
Subject: Re: Have all possible events occurred?



At 10:22 AM 6/26/2005, Norman Samish wrote:


Stathis Papaioannou writes:  Of course you are right: there is no way to
distinguish the original from the copy, given that the copying process works
as intended. And if you believe that everything possible exists, then there
will always be at least one version of you who will definitely experience
whatever outcome you are leaving to chance.  Probability is just a first
person experience of a universe which is in fact completely deterministic,
because we cannot access the parallel worlds where our copies live, and
because even if we could, we can only experience being one person at a time.
RM Comments: (1) I'll have to disagree with Stathis' (apparent) statement 
that probability is just a first person experience of a universe.
No proper foundation.  (2) Additionally, Stathis assumes that we cannot 
access the parallel worlds where our copies live.  Since no one
can even define consciousness, or isolate precisely where memory is 
located (or even what it is), there is no way we can preclude simultaneous
experience.  The best we can say is, we simply don't know.  And, (3), 
for the same reasons, we cannot say that we experience being one person
at a time.  There are numerous psychological models---neodissociationism 
being just one---that posit a personality made up of multiple modules, 
all interacting (somewhat) under the guidance of an executive, Hilgard's 
hidden observer.  Unless and until we fully understand how 
consciousness is linked to personality, we probably shouldn't preclude 
multiple or simultaneous experience.







Re: Have all possible events occurred?

2005-06-26 Thread rmiller

At 11:07 PM 6/26/2005, Stathis Papaioannou wrote:

R. Miller writes:


Stathis Papaioannou writes:  Of course you are right: there is no way to
distinguish the original from the copy, given that the copying process works
as intended. And if you believe that everything possible exists, then there
will always be at least one version of you who will definitely experience
whatever outcome you are leaving to chance.  Probability is just a first
person experience of a universe which is in fact completely deterministic,
because we cannot access the parallel worlds where our copies live, and
because even if we could, we can only experience being one person at a time.
RM Comments: (1) I'll have to disagree with Stathis' (apparent) statement 
that probability is just a first person experience of a universe.
No proper foundation.  (2) Additionally, Stathis assumes that we cannot 
access the parallel worlds where our copies live.  Since no one
can even define consciousness, or isolate precisely where memory is 
located (or even what it is), there is no way we can preclude simultaneous
experience.  The best we can say is, we simply don't know.  And, (3), 
for the same reasons, we cannot say that we experience being one person
at a time.  There are numerous psychological models---neodissociationism 
being just one---that posit a personality made up of multiple modules, 
all interacting (somewhat) under the guidance of an executive, Hilgard's 
hidden observer.  Unless and until we fully understand how 
consciousness is linked to personality, we probably shouldn't preclude 
multiple or simultaneous experience.


1. I'm not saying that definitely there are all these other universes out 
there, but if there are, then like the copying experiments, it will seem 
probabilistic from a first person perspective because you don't know which 
copy you are going to be. It *does* look probabilistic, doesn't it? When 
you toss a coin, you only see one result. This could be explained equally 
well by saying there is only one universe, or multiple universes which do 
not interact at the level of people and coins.



RM: Okay. I see what you mean.  Thanks for the clarification.


2. & 3. I can only experience being one person at a time. At least, it 
seems that way: when I toss a coin, I have never observed both heads and 
tails simultaneously. This tells me there is only one of me, or if there 
are many versions of me, I can't experience what the other versions are 
experiencing. Maybe under very unusual circumstances someone can peer into 
one or more of the parallel universes, but it has never happened to me!


Only if you assume personality is defined (remains cohesive?) as a function 
of the input amplitude---which seems to be a limited definition that 
doesn't take such things as sensory deprivation (float tanks, ganzfeld 
stimulation, sleep) into account.  Shut down the outside stimulus and we 
dream, but the personality--or the group of modules that represent the 
personality cluster--seems to be the same throughout.  As for the coin 
flip---there's no reason to suggest that a single outcome has any impact on 
our sense of self--it may be that we react simply because a single 
outcome is considered normal and expected.  On a larger scale, we 
experience events that are often contradictory and we tend to accommodate 
as well as any video gamer might---with no loss of self.  At worse, it 
comes down to the old joke:

Q. Can you make up your mind?
A. Well, yes and no.

RM





New Scientist

2005-06-24 Thread rmiller

All,
New Scientist has a very interesting article this week about free will, 
reality and entanglement.  Worth a look.  Additionally, for the trivia fans 
among you, it seems one of the researchers quoted has clocked similarity 
effects associated with entanglement at something like (minimum) 10,000 x 
the speed of light.


R.Miller




Re: another puzzzle

2005-06-24 Thread rmiller



Jesse wrote


In reality the molecules in your brain are constantly being recycled--if 
you believe that the changes that make up memories happen at the synapses, 
the article at http://www.sci-con.org/articles/20040601.html suggests all 
the molecules at the synapses are replaced in only 24 hours or so, and 
also that the entire brain is probably replaced every other month or so. 
So do you think the Eric Cavalcanti of six months ago is dead, and that 
your memories of having been him are false?


Jesse



All,
Jesse, IMHO, has pointed out the elephant in the room.   Is Sheldrake right 
about morphic fields guiding our path through the world-line? Or is our 
concept of reality out of whack?  While I respect Sheldrake, for pointing 
out some obvious quirks in real world perceptions, I think the concept of 
morphic field is merely descriptive rather than explanatory.   But if 
he's right, is anyone willing to blurt out for the record that 
consciousness may have its own pilot wave?


R Miller







RE: singular versus plural

2005-06-24 Thread rmiller

At 06:44 AM 6/24/2005, Stathis Papaioannou wrote:

(snip)
 So although it's not impossible that minds can somehow act as a group, 
that is something in need of *real* experimental evidence. Stacking a 
controversial theory on a weird idea balancing on an impossible situation 
is asking for trouble!


--Stathis Papaioannou


Actually, the experiments of Schmidt et al., and the evidence cited in 
Wisdom of Crowds, suggest a QM model.  Of course, anyone can--and 
will!--deride an experiment as not being real---after all, that's how the 
science game is played ;-)--but prediction is one of the gold standards 
for an experimental model, and Schmidt's work, though not explanatory 
(neither is QM), *is* predictive.  And if there's a difference between 
singular experience and plural experiences, then maybe it would be worthwhile 
to apply it to the thought problems here.


RM










Re: death

2005-06-18 Thread rmiller

At 10:55 AM 6/18/2005, Stathis Papaioannou wrote:



(snip)
The above mechanism would still work even if, as in my thought experiment, 
there were 10^100 exact copies running in lockstep and all but one died. 
Each one of the 10^100-1 copies would experience continuity of 
consciousness through the remaining copy, so none would really die.


RM: None would really die only if the behavioral configurations were 
uniform and equal (thus equivalent) *and* only if their environment was in 
an equivalent state.  However, that is not the case here.  The environment 
and behavioral configurations of those who died are not commensurate with 
the one who lived. No equivalence means differing results---and differing 
paths.  Let's look at it this way: take two boxes, perfectly equivalent in 
every way, and place a similar marble inside each.  Assume that both 
systems are equivalent configurations and are, in effect, copies of one 
another.  When you remove one marble from its box, the other marble doesn't 
follow suit---it stays put.








copy method important?

2005-06-18 Thread rmiller

All,
Though we're not discussing entanglement per se, some of these examples 
surely meet the criteria.  So, my thought question for the day: is the 
method of copying important?
        Example #1: we start with a single marble, A.  Then we magically 
create a copy, marble B--perfectly like marble A in every way. . .that is, 
the atoms are configured similarly, the interaction environment is the 
same--and they are indistinguishable from one another.
Example #2: we start with a single marble A.  Then, instead of 
magically creating a copy, we search the universe, Tegmarkian-style, and 
locate a second marble, B that is perfectly equivalent to our original 
marble A.  All tests both magically avoid QM decoherence problems and show 
that our newfound marble is, in fact, indistinguishable in every way from 
our original.
Here's the question:  Are the properties of the *relationship* 
between Marbles A and B in Example #1 perfectly equivalent to those in 
Example #2?
        If the criterion involves simply an analysis of configurations at a 
precise point in time, it would seem the answer must be yes.  On the 
other hand, if the method by which the marbles were created is crucial to 
the present configuration, then the answer would be no.


R. Miller









Re: another puzzzle

2005-06-16 Thread rmiller

At 09:12 AM 6/16/2005, Stathis Papaioannou wrote:

You find yourself in a locked room with no windows, and no memory of how 
you got there. The room is sparsely furnished: a chair, a desk, pen and 
paper, and in one corner a light.


RM: You've just described me at work in my office.

The light is currently red, but in the time you have been in the room you 
have observed that it alternates between red and green every 10 minutes. 
Other than the coloured light, nothing in the room seems to change.


RM. . .at my annual New Years' party.

Opening one of the desk drawers, you find a piece of paper with incredibly 
neat handwriting. It turns out to be a letter from God, revealing that you 
have been placed in the room as part of a philosophical experiment. Every 
10 minutes, the system alternates between two states. One state consists 
of you alone in your room. The other state consists of 10^100 exact copies 
of you, their minds perfectly synchronised with your mind, each copy 
isolated from all the others in a room just like yours. Whenever the light 
changes colour, it means that God is either instantaneously creating 
(10^100 - 1) copies, or instantaneously destroying all but one randomly 
chosen copy.


Your task is to guess which colour of the light corresponds with which 
state and write it down. Then God will send you home.


Having absorbed this information, you reason as follows. Suppose that 
right now you are one of the copies sampled randomly from all the copies 
that you could possibly be. If you guess that you are one of the 10^100 
group, you will be right with probability (10^100)/(10^100+1) (which your 
calculator tells you equals one). If you guess that you are the sole copy, 
you will be right with probability 1/(10^100+1) (which your calculator 
tells you equals zero). Therefore, you would be foolish indeed if you 
don't guess that you in the 10^100 group. And since the light right now is 
red, red must correspond with the 10^100 copy state and green with the 
single copy state.
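
The letter's arithmetic, done exactly in Python, as a small illustration of
why the calculator rounds the way it does:

    from fractions import Fraction

    n = 10**100
    p_many = Fraction(n, n + 1)         # chance of being in the 10^100 group
    p_sole = Fraction(1, n + 1)         # chance of being the sole copy
    print(float(p_many), p_many == 1)   # 1.0 in floating point, yet not exactly 1
    print(float(p_sole))                # about 1e-100: tiny, but not zero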


But just as you are about to write down your conclusion, the light changes 
to green...


What's wrong with the reasoning here?


RM: Nothing wrong with the premise or the reasoning IMHO.  Happens to me 
every day---while sitting at a traffic light alone in my car(s) all 10^100 
of me come up with a great idea---I try to write it down and the light 
changes to green.






--Stathis Papaioannou







Re: another puzzzle

2005-06-16 Thread rmiller

At 09:12 AM 6/16/2005, Stathis Papaioannou wrote:

You find yourself in a locked room with no windows, and no memory of how 
you got there. \


(snip)

 The other state consists of 10^100 exact copies of you, their minds 
perfectly synchronised with your mind, each copy isolated from all the 
others in a room just like yours. Whenever the light changes colour, it 
means that God is either instantaneously creating (10^100 - 1) copies, or 
instantaneously destroying all but one randomly chosen copy.


RM's two cents worth: If all the 10^100 copies have exactly the same 
sensory input, exactly the same past, exactly the same environment and have 
exactly the same behavior systems, then there would be no overall increase 
in complexity (no additional links between nodes), but there would overall 
be a multiplication of intensity (10^100).  Would this result in a more 
clarified perception during the time period when one is represented 
(magnified?) by 10^100?  It's an open switch (i.e. who knows???)  However, 
the increase in intensity would *not* result in greater perception; that 
would involve linking additional nodes---i.e. getting more neurons or 
elements of the behavior system involved---and the number of links over the 
10^100 copies would remain static.


If Stathis includes the possibility of chaos in the system at the node 
level (corresponding to random fluctuations among interactions at the node 
level), then these differences among the 10^100 copies would amount to 
10^100 specific layers of the individual all linked by the equivalence of 
the similarly-configured behavior systems.  If one could see this from the 
perspective of (say) Hilbert space, it may look like a deck of perfectly 
similar individuals with minor variations or fuzziness.  These links as 
well as the fuzziness over many worlds may be what corresponds to 
consciousness.   





RE: more torture

2005-06-15 Thread rmiller

At 11:03 AM 6/15/2005, Jesse Mazer wrote:

I wrote:


No, I don't think they don't all have to have the same volume,


Whoops, weird double negative here...that should read I don't think they 
all have to have the same volume.


Jesse



must have
should have
are required to have



RM 





Re: more torture

2005-06-13 Thread rmiller

At 06:00 AM 6/13/2005, Stathis Papaioannou wrote:
I have been arguing in recent posts that the absolute measure of an 
observer moment (or observer, if you prefer) makes no possible difference 
at the first person level. A counterargument has been that, even if an 
observer cannot know how many instantiations of him are being run, it is 
still important in principle to take the absolute measure into account, 
for example when considering the total amount of suffering in the world. 
The following thought experiment shows how, counterintuitively, sticking 
to this principle may actually be doing the victims a disservice:


You are one of 10 copies who are being tortured. The copies are all being 
run in lockstep with each other, as would occur if 10 identical computers 
were running 10 identical sentient programs. Assume that the torture is so 
bad that death is preferable, and so bad that escaping it with your life 
is only marginally preferable to escaping it by dying (eg., given the 
option of a 50% chance of dying or a 49% chance of escaping the torture 
and living, you would take the 50%). The torture will continue for a year, 
but you are allowed one of 3 choices as to how things will proceed:


(a) 9 of the 10 copies will be chosen at random and painlessly killed, 
while the remaining copy will continue to be tortured.


(b) For one minute, the torture will cease and the number of copies will 
increase to 10^100. Once the minute is up, the number of copies will be 
reduced to 10 again and the torture will resume as before.


(c) the torture will be stopped for 8 randomly chosen copies, and continue 
for the other 2.


Which would you choose? To me, it seems clear that there is an 80% chance 
of escaping the torture if you pick (c), while with (a) it is certain that 
the torture will continue, and with (b) it is certain that the torture 
will continue with only one minute of respite.
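
A rough sketch, following the reasoning above, of the first-person chance of 
escaping under each option, treating each of the 10 lockstep copies as equally 
likely to turn out to be "you" (a toy simulation, nothing more):

    import random

    def first_person_escape(option, trials=100_000):
        """Pick which of the 10 lockstep copies 'you' turn out to be and
        check whether that copy escapes the year of torture."""
        escapes = 0
        for _ in range(trials):
            you = random.randrange(10)       # one of the 10 copies
            if option == "c" and you < 8:    # (c): torture stops for 8 of the 10
                escapes += 1
            # (a) and (b) leave every surviving copy tortured for the year
        return escapes / trials

    for opt in ("a", "b", "c"):
        print(opt, first_person_escape(opt))  # roughly 0.0, 0.0, 0.8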

RM writes. . .
Here are my criteria: there are those who suggest that there is only one 
electron in the universe, but that it travels forward and backward in time, 
thus making multiple copies of itself.  If the individual percipient must 
eventually experience the pain and suffering of all whom he had 
affected--or caused to experience pain and suffering--then the most 
selfish, altruistic *and* sensible choice would be (c).


Rich Miller




RE: Many Pasts? Not according to QM...

2005-06-11 Thread rmiller

At 12:43 PM 6/11/2005, Hal Finney wrote:

Here's a little tongue-in-cheek rant...
(snip)

Yet how many philosophers are willing to seriously consider abandoning
this arbitrary conditioning in deciding what is right and wrong?  How many
of us here are willing to take the logical path to its ultimate conclusion
when considering how observer-moments fit together?  It goes against the
deepest instincts which have been burned into us since the origin of life.

I would not be quick to disparage evolutionarily based reasoning.  We are
creatures of evolution, and it is almost impossible to escape the bounds
that it has put around our ways of thought.

Hal Finney


If you consider the matrix of all observer-moments (possibly under the eye 
of something like Hilgard's Hidden Observer), then it would be impossible 
to cut the links any more than trying to wear a shirt with all the threads 
cut.  The shirt would disintegrate off your back (like my Izods did 
recently.)  Or possibly, it could be like a Photoshop image with 100 
subtle layers---remove the layers and the entire image changes 
slightly---but after a while you're down to one monochromatic layer.


Rich Miller






Re: Another tedious hypothetical

2005-06-08 Thread rmiller

At 11:08 PM 6/8/2005, Jesse Mazer wrote:
(snip)

You should instead calculate the probability that a story would contain 
*any* combination of meaningful words associated with the Manhattan 
project. This is exactly analogous to the fact that in my example, you 
should have been calculating the probability that *any* combination of 
words from the list of 100 would appear in a book title, not the 
probability that the particular word combination "sun," "also," and 
"rises" would appear.


RM: Are you suggesting that a fair analysis would be to wait until Google 
Print has the requisite number of books available, download the text, then 
sic Mathematica onto them to look for word associations linked with a 
target?   What limits would you place on this (if any)?  Or would this be a 
useless (though certainly do-able) exercise?




(snip)





. . . Would it be fair to test for ESP. . .


We're not testing for ESP--only out-of-causal-order gestalts in popular 
literature that are associated with similar gestalts in literature (or 
national) events taking place at some future time.   There might be a 
fine--though humdrum and unpredictable---explanation for this sort of 
business.  Or it might be explained by some of the more offbeat analytical 
procedures---say, involving exponential or Poisson probabilities 
as  applied to delayed choice events.  Who knows?  While I wouldn't rule it 
out, I personally don't think the eventual answer--if there is one---will 
involve anything as humdrum as ESP.  And if this sort of thing is to be 
expected in the course of publishing events, then there should be a 
mathematical formula that can predict it, given the input variables (which 
is why I think exponential or Poisson might be involved.)



Again, my concern is that scientists are too willing to prejudge 
something before diving into it.


OK, but this is a tangent that has nothing to do with the issue I raised 
in my posts about the wrongness of selecting the target (whose probability 
of guessing you want to calculate) using hindsight knowledge of what was 
actually guessed.


As a former fed, I would wholeheartedly disagree.  There is a grand 
tradition of avoiding analysis by whatever means are available, including 
the claim that hindsight knowledge invalidates the correlation.  In other 
words, "you shouldn't ever mine for data."  Thankfully, that admonition is 
routinely ignored by many biostatisticians.


 If you don't want to discuss this specific issue then say so--I am not 
really interested in discussing the larger issue of what the correct 
way to calculate the probability of the Heinlein coincidences would be, I 
only wanted to talk about this specific way in which *your* method is 
obviously wrong.


Thank you. (Finally!!!)   Whew!   That sentence has validated the entire 
horrid exercise.  May I quote you???


Like I said before, any method that could be invented by someone who 
didn't know in advance about Heinlein's story would avoid this particular 
mistake. . .


. . .another money quote. . .


*although it might suffer from other flaws*.



This one too!!!

Regards and Thanks Again!

Rich M.





The tedious hypothesis and the reason for it. . .

2005-06-07 Thread rmiller

All,
My tedious complaint about scientists prejudging issues prior to analysis 
(the facts don't warrant. . .etc) extends beyond the superficially weird 
(Heinlein's story) to the comparatively normal.  While I'm not suggesting 
anyone who does this routinely is anything other than merely uninterested 
in the subject (a perfectly good reason to avoid time-consuming research), 
the inescapable fact is that this sort of technique has long been used as a 
means of avoiding good scientific work.


Example #1. Here is an excerpt from correspondence by Dr. Paul Thomkins, 
director of the FRC in his letter to the Atomic Energy Commission dated 
September 25, 1952: "The basic approach to the report would be to start 
with a simple, straightforward statement of conclusions.  We would then 
identify the major questions that could be expected to be asked in 
connection with these conclusions.  It would then be a straightforward 
matter to select the key scientific consultants whose opinions should be 
sought in order to substantiate the validity of those conclusions or 
recommended appropriate modifications."


Example #2:  Dr. Dade W. Moeller, in his 1971 speech as he accepted the 
presidency of the Health Physics Society admonished the members: "Let's all 
put our mouth where our money is."


Source:  Overhead projector slide by Dr. Karl Morgan, speaking at a 
conference on radiation at the University of Utah circa early 1980s. Title: 
"Fundamental Reasons Why Standards-Setting Bodies and Health Physics Do Not 
Serve Persons with Radiation Injury."


Prejudging difficult evidence is a grand tradition that is not without its 
occasional monetary perks. . .especially in governmental affairs.


RM




Re: Another tedious hypothetical

2005-06-07 Thread rmiller

At 02:45 PM 6/7/2005, Jesse Mazer wrote:
(snip)


Of course in this example Feynman did not anticipate in advance what 
licence plate he'd see, but the kind of hindsight bias you are engaging 
in can be shown with another example. Suppose you pick 100 random words 
out of a dictionary, and then notice that the list contains the words 
"sun," "also," and "rises"...as it so happens, that particular 3-word 
gestalt is also part of the title of a famous book, "the sun also rises" 
by Hemingway. Is this evidence that Hemingway was able to anticipate the 
results of your word-selection through ESP? Would it be fair to test for 
ESP by calculating the probability that someone would title a book with 
the exact 3-word gestalt "sun, also, rises"? No, because this would be 
tailoring the choice of gestalt to Hemingway's book in order to make it 
seem more unlikely, in fact there are 970,200 possible 3-word gestalts you 
could pick out of a list of 100 possible words, so the probability that a 
book published earlier would contain *any* of these gestalts is a lot 
higher than the probability it would contain the precise gestalt "sun, 
also, rises." Selecting a precise target gestalt on the basis of the fact 
that you already know there's a book/story containing that gestalt is an 
example of hindsight bias--in the Heinlein example, you wouldn't have 
chosen the precise gestalt of Szilard/lens/beryllium/uranium/bomb from a 
long list of words associated with the Manhattan Project if you didn't 
already know about Heinlein's story.
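
The 970,200 figure is just the number of ordered 3-word picks from 100 words. 
A quick sketch of the contrast being drawn, with a made-up per-gestalt 
probability q purely for illustration:

    from math import comb, perm

    words = 100
    print(perm(words, 3))   # 970200 ordered 3-word gestalts, as above
    print(comb(words, 3))   # 161700 if order is ignored

    # Toy contrast: if each particular gestalt had some tiny chance q of
    # appearing in an earlier title (q is hypothetical), the chance that
    # *any* of them appears is enormously larger than q itself.
    q = 1e-7
    print(q, 1 - (1 - q) ** perm(words, 3))   # 1e-07 versus roughly 0.09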


RM wrote:

In two words: Conclusions first.
Can you really offer no scientific procedure to evaluate Heinlein's 
story?  At the cookie jar level, can you at least grudgingly admit that the 
word "Szilard" sure looks like "Silard"?  Sounds like it too.  Or is that a 
coincidence as well?  What are the odds?  Should be calculable--how many 
stories written in 1939 include the names of Los Alamos scientists in 
conjunction with the words "bomb," "uranium". . .


You're shaking your head.  This, I assume is already a done deal, for you.

And that, in my view, is the heart of the problem.  Rather than swallow 
hard and look at this in a non-biased fashion, you seem to be glued to the 
proposition that (1) it's intractable or (2) it's not worth analyzing 
because the answer is obvious.


If your answer is (1), then fine.  Let others worry about it.  But if your 
answer is (2), then congratulations---you've likely committed a Type II 
error.   In all of your posts, you seem to present reasons why the Heinlein 
story should not be investigated because (I'm paraphrasing, of course) it's 
obviously not worthy of investigation.  You exclude ALL the 
evidence---even the Bonferroni doesn't do that.  Logically, if you exclude 
all the evidence, then the probability that you might miss something goes 
to. . . 1.   One hundred percent.


When one chooses to use, say Spearman Correlation Coefficients to evaluate 
multiple pairs, the usual protocol involves using the Bonferroni 
correction--in which the alpha (often at 0.05) is divided by some multiple 
of the number of pairs evaluated--usually simply the number of pairs.  A 
thousand pairs?  Then the alpha should be divided by a thousand and the 
resultant p value accepted as equivalent to a single p value of 0.05.  Problem 
is, this sort of trick will cost you statistical power.  You will be less 
likely to call something significant when it is not, but you may also throw 
out a value that truly is important.  As the Type I error risk goes down, the 
Type II error risk goes up (reducing alpha increases beta, the probability of 
making a Type II error).  There are reputable statisticians who suggest not 
radioisotopes in nuclear fallout---but I require a very high Z score for 
significance.
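
A bare-bones sketch of that trade-off (the numbers are only illustrative):

    alpha = 0.05
    n_pairs = 1000
    alpha_bonferroni = alpha / n_pairs        # Bonferroni-corrected per-test threshold
    print(alpha_bonferroni)                   # 5e-05

    # A p value of 0.0004 clears the usual 0.05 bar but is thrown out after
    # correction: the potentially real effect the paragraph above worries
    # about losing (lower alpha, higher beta).
    p_value = 0.0004
    print(p_value < alpha, p_value < alpha_bonferroni)   # True False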


I've yet to see a good protocol defined here to evaluate the Heinlein 
story; most prefer to fall back onto the soft couch of bias and 
prejudgment.  But in doing so, your beta goes through the roof--and you 
guarantee yourself that you'll never recognize *anything* as 
significant.  It would seem that it would be far easier and more 
scientifically sound to just admit that you are aware of no tools that can 
properly evaluate it.


PS:  Note I haven't mentioned anything about proof or causation---merely 
the ability to apply the scientific method--properly free of bias---to a 
set of circumstances.   So far (as with the Thomkins quote)--it looks like 
conclusions first, justification later.


Hope your drug company doesn't use the same protocol.  Because 
*that*  wouldn't be right, would it?;-)



RM








RE: Hypotheses

2005-06-06 Thread rmiller





At 12:50 AM 6/6/2005, you wrote:
A couple of hours ago, I was speaking to a young man who informed me that 
he can predict the future: he has visions or dreams, and they turn out to 
be true. I asked him for an example of this ability. He thought for a 
moment, explaining that there were really far too many examples to choose 
from, then settled on this one. During the recent war in Iraq, he had a 
dream about a buried train containing weapons. Two days later - you 
guessed it - he saw on the news that a buried train containing WMD's was 
discovered in Iraq! "And if that doesn't convince you that I'm psychic," 
my patient said (for that is what he was), "I don't know what will!"


My question to the list: should I have stopped this man's antipsychotic 
medication?


--Stathis Papaioannou

No.  Unless it was Disulfiram elixir. . .(sorry, couldn't resist.)

But were the antipsychotic meds *causing* the dreams, or was the dose 
simply too low?  In the early 1970s ketamine HCl was the 
anesthetic of choice on kids for minor surgical procedures---it was good 
for 25 minutes, it preserved the laryngeal reflex--and you could always 
tell when they were coming out---they would exhibit this gripping 
motion.  But in some cases it gave the kids OBEs.   Typical doc response: 
"Yipes!  Let's use something else!"
Now, they use ketamine ONLY on Rover and Fluffy.  Gives 'em big pupils for 
a couple of hours, and you don't really *care* what sensitive places they 
visited while they were under.


As for precognition. . while doing research for a book I authored in the 
mid-eighties, I first tracked nuclear clouds across the US--then went to 
the libraries in the paths of the debris clouds to see what was taking 
place as the radioactive material passed overhead.  There were some strange 
coincidences, but that's probably all they were.  However, there was one 
thing that impressed me---those in the creative professions occasionally 
conjure up artwork that, in retrospect, appears to be a precognitive 
shadow of an event taking place days or weeks later.  The day before the 
worlds' first nuclear test, the NY Times had a couple of sly articles in 
the editorial section that alluded to the nuke test.  One article, for 
example, was titled "A Gadget Long Needed."  There was a book review about 
three stories; two were titled "A Fiery Lake" and "Solano." Now, of 
course, the NYT also had a reporter present at Los Alamos, so they probably 
wanted to scoop everyone else.   Precognition score:  probably zero.   But 
then there was the weird little cartoon called Flyin' Jenny which was 
found in the secondary papers---in places like Mason City, Iowa and 
Houston, TX.  On July 15, 1945 the main character (Flyin' Jenny) picked up 
her microphone and said: "Is there fire at the end of that gadget?"   To 
me, that's pushing the coincidence envelope.



RM  





Re: Another tedious hypothetical

2005-06-06 Thread rmiller

At 03:01 PM 6/6/2005, Pete Carlton wrote:
(snip)

The point is, there are enough stories published in any year that it would 
be a trivial matter to find a few superficial resemblances between any 
event and a story that came before it.
my second comment. . .if it's such a trivial matter, then perhaps you can 
find and produce another publication that includes the gestalt found in 
Heinlein's story.   Anything before 1945, that is.   You may want to go to 
Google Print---that should be helpful.


RM





Re: Another tedious hypothetical

2005-06-06 Thread rmiller

At 03:58 PM 6/6/2005, you wrote:

rmiller wrote:


At 03:01 PM 6/6/2005, Pete Carlton wrote:


(snip)

The point is, there are enough stories published in any year that it 
would be a trivial matter to find a few superficial resemblances between 
any event and a story that came before it.


Let's look a little closer at the story in terms of gestalts.

On one side we have published author Robert Heinlein writing a story in 
1939 about a guy named Silard who works with a uranium bomb, a beryllium 
target and a fellow named lenz.  We'll leave Korzybski out of this one 
(I suspect Heinlein borrowed the name from A. Korzybski, a semanticist of 
some renown back in the 1930s.)  To me the interesting nodes involve the 
words "Silard," "lenz," "beryllium," "uranium" and "bomb."  So let's agree 
that here is a story that includes a gestalt of the words Silard, lenz, 
beryllium, uranium and bomb.


But you can't use that particular gestalt when talking about the 
probability that a coincidence like this would occur, because you never 
would have predicted that precise gestalt in advance even if you were 
specifically looking for stories that anticipated aspects of the Manhattan 
Project.


Where on earth did *that* gestalt rule come from??? ;-)


 It would make more sense to look at the probability of a story that 
includes *any* combination of words that somehow anticipate aspects of 
the Manhattan Project. Let's say there were about 10^10 possible such 
gestalts we could come up with, and if you scanned trillions of parallel 
universes you'd see the proportion of universes where a story echoed at 
least one such gestalt was fairly high--1 in 15, say.




This means that in 1 in 15 universes, there will be a person like you who 
notices this anticipation and, if he uses your method of only estimating 
the probability of that *particular* gestalt, will say "there's only a 1 
in 10^9 probability that something like this could have happened by 
chance!" Obviously something is wrong with any logic that leads you to see 
a 1 in 10^9 probability coincidence happening in 1 in 15 possible 
universes, and in this hypothetical example it's clear the problem is that 
these parallel coincidence-spotters are using too narrow a notion of 
"something like this," one which is too much biased by hindsight knowledge 
of what actually happened in their universe, rather than something they 
plausibly might have specifically thought to look for before they actually 
knew about the existence of such a story.
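
A toy version of that calculation (the counts and probabilities here are made 
up for illustration, and the independence assumption is a simplification real 
overlapping gestalts wouldn't satisfy):

    n_gestalts = 10**10    # hypothetical number of gestalts one would accept as a "hit"
    p_each = 1e-11         # hypothetical chance a given story matches a given gestalt
    p_some = 1 - (1 - p_each) ** n_gestalts
    print(p_each, p_some)  # each particular match looks like 1 in 10^11; *some* match: about 0.1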


Sounds like you're invoking rules of causation here--post hoc rather than 
ad hoc, hindsight bias, etc.  Certainly I am not suggesting Heinlein's 
story caused Szilard to be hired (interesting thought, though!)  And 
unless I want to invoke Cramer's transactional approach, I would not 
really want to think that the Manhattan Project caused Heinlein to write 
his story.  That would require reverse causation, and we know that doesn't 
happen.  This is very simple: we have instances in which Heinlein includes 
key words (definable as being essential to the story---without them, 
different story) that form a gestalt of. . .well, key words.  These words 
are equivalent to those describing the Manhattan Project and not many 
other things.  To show that there are not many other things these key word 
gestalts describe, one can wait a year and use Google Print to call up all 
the books and stories associated with these key words.  Then we will have 
a probability to work with.  Since the gestalts are separated by four 
years (or thereabouts) then we shouldn't have to invoke causation.



How is this potentially valuable?  Suppose we use Google Print again and 
find all the instances of key word gestalts in sci fi matching key word 
gestalts in scientific non-fiction---at a later date.  What if we found 
that there seems to be a four-year gap between the two--no more, no 
less.   That piece of information may be valuable later on down the road in 
trying to piece the puzzle together.


But just to say that we shouldn't investigate it because it's all a 
coincidence, or that the hypothesis was improperly framed, or that it 
violates some of Hill's Rules of Causation--- is just reinforcing the 
notion that math and logic are not up to the task of investigating some 
things in the real world.


RM









Re: Another tedious hypothetical

2005-06-06 Thread rmiller


At 06:56 PM 6/6/2005, you wrote:
Jesse has it right on here, and
one can go even further in this vein. You are impressed by the
relationship between one particular story and one particular event - but
you hand-picked both the story and the event for discussion here
because of their superficial similarities. You challenged me to
find another example of a story with the same resemblances that
the Heinlein story has to the atomic bomb project. But resemblances
between any written story and any similar event that
happens after the story's publication would be in the same
class.
I'm not saying that Heinlein was plugged into anything particular.
As a sociologist, my interest is the inability of some branches of
science to address many common-sense events. Any scientist worth
his degree can conjure up logic in order to drop a complicated issue and
move on to something else: improperly framed question, no prior
data, no model, post hoc cherry-picking, etc and etc. I once had a
phone chat with Ray Hyman about this---his response was
telling---basically skeptics don't investigate---they debunk. That
isn't the scientific method; that's a belief system. That,
and economic considerations, of course, is why it took 10 years before
medicine figured out the importance of Helicobacter pylori.
My own working definition of a science skeptic is the last guy on the
cul-de-sac who hasn't been told (by everyone else) how to find his water
lines using two clotheshangers. The reason of course, is that
everyone knows it wouldn't work for him anyway. ;-)

I'm not saying that the
resemblances between the story and the bomb are trivial - they do make an
impression. It also makes an impression when someone dreams of a
relative dying and the next day they receive news that that relative did
in fact die that night; or when you're in a foreign city and you look up
the number of the taxi company and it turns out to be your home phone
number, or when exactly 100 years separate (1) the election to Congress
(2) the election to the presidency (3) the birth of the assassins of and
(4) the birth of the successors of John F. Kennedy and Abraham
Lincoln.
If the Heinlein story failed to impress, then may I ask what went missing
in it that--had it been there--would have suggested further study?
Twenty key words and phrases including "Oppenheimer,"
"Trinity," "plutonium," "Neddermeyer,"
"mushroom cloud," "Teller," "Light," and
"shake"? Or would that again be just classified as
a rather unusual coincidence? I hear a lot of qualifiers
(such as the one below) but nothing substantial regarding your
criteria. It seems all very vague--except of course, for the
conclusion. If you have a criterion or model for evaluating
some of these events (such as Heinlein's example) I'd like to hear
it. Then, as good scientists, we can begin to evaluate how
appropriate it may be for the examination of these unusual events.
Until we have that protocol defined, I'm sorry, you're just expressing a
belief (that nothing that can't be explained by a model is exceptional or
even should be evaluated.)
These coincidences all make an
impression on one. But nothing special needs to be invoked to
explain the occurrence of these events -- what needs to be
explained is the facet of human psychology that makes people think
something strange is going on when in fact nothing is.
Yet, without knowing the facts, you immediately assume that "in fact
nothing is" going on. It's a common position
taken by the lazy scientist---and it doesn't have to do with strange
things, either. It's why the EPA never bothered to determine the
density of the WTC surge cloud. Nothing to worry about, because,
well, *in fact* there is nothing to worry about. The citizens of
New York *do* appreciate that position. (hey, Pete, you're a
fed---why haven't they come up with the density?)
 Many people have taken
stabs at it, and evolutionary explanations seem to work well --
seriously, you should get the Dawkins book and read the chapter to see
where we're coming from; Carl Sagan also addressed this issue very
well.
And of course, I encourage you to consider coming up with an appropriate
protocol that doesn't include prejudging the data, or assuming facts not
in evidence---and tell us what the density of that surge cloud was in
milligrams per cubic meter. Is that in a book somewhere also?
;-)

--Also, you still have not
explained how you get 1 in 10^9.
I used it as an example of a p value that is dreadfully easy to obtain
when applying standard probabilities to any of these events. My
concern is that for many scientists, 1x10^-9, though ridiculously
small---is, for some things, still not small enough. Which is why
scientists have willfully ceded important areas of research to the likes
of the Midnight Examiner, the Star, The Washington Times and Fox
News.
Cheers,
RM
Pete, if you need some numbers to call at the EPA's RTP facility, I'll be
glad to give em to you.



Another Tedious Hypothetical

2005-06-05 Thread rmiller

All,
Another hypothetical.  In 1939, let's say, a writer comes up with a sci-fi 
story, which is published the next year.  It involves (let's say) a uranium 
bomb and a beryllium target in the Arizona desert that might blow up and 
cause problems for everyone.  His main character is a fellow he decides to 
name Silard.  Two other characters he names Korzybski and Lenz.  Two 
cities are named in the story: Manhattan and Chicago.   Along about the 
same time, in 1939 an out-of-work scientist named Leo Szilard is crossing a 
street in London (no, he doesn't know the sci fi writer.)  Four years later 
Leo Szilard will be working with a guy named George Kistiakowski---whose 
job it is to fashion a lens configuration for the explosives surrounding a 
nuclear core for the first atomic bomb---code named, the Manhattan 
Project.  Some of the other scientists, Enrico Fermi, for example, are from 
Chicago (where the first man-made nuclear pile was constructed---under the 
amphitheater.)


Now, pick one:
1. All a Big Coincidence Proving Nothing (ABCPN)
2. The writer obviously was privy to state secrets and should have been 
arrested.

3. Suggests precognition of a very strange and weird sort.
4. Might fit a QM many worlds model and should be investigated further.
5. I have no clue how to even address something like this.

Any takers?

RM




Re: Another Tedious Hypothetical

2005-06-05 Thread rmiller

At 12:31 PM 6/5/2005, rmiller wrote:
A correction---the first nuclear test was named, of course, "Trinity," not 
"The Manhattan Project."  And the core of the device, which Oppenheimer 
called "the gadget," was about the size of a grapefruit.


RM




RE: Another Tedious Hypothetical

2005-06-05 Thread rmiller

At 09:01 PM 6/5/2005, Stathis Papaioannou wrote:

In order: 2,1,5,3,4.

--Stathis Papaioannou

Thanks to Lee and Stathis--
Anyone else?

R. 





Hypotheses

2005-06-05 Thread rmiller
Re the hypotheses---Social scientists, astronomers and CSI agents are the 
only ones I'm aware of who routinely evaluate events after the fact.  The 
best, IMHO, such as the historian Toynbee, fit facts to a model. At its 
worst, the model becomes the event and before long we're deep in 
reification (the Achilles heel of Structural Functionalism) or that 
favorite of lazy reporters, *abduction* (this is our favorite explanation, 
so that must be what happened.)  Mathematicians, philosophers and those 
with a good math and logic background prefer their battles timeless and 
relatively absent of worldly references.  Great theater, but as Scott 
Berkun noted in his excellent 
article (http://www.scottberkun.com/essays/essay40.htm), just because the 
logic holds together doesn't mean it's true.  Or correct.  Or 
anything--other than consistent.


But logic is an inestimable tool if used to evaluate models such as those 
proposed, developed and ridden into the dirt by many prominent social 
scientists.  It is always refreshing to see a lumbering behemoth like 
structural functionalism (a sociological model) dismantled by a skilled 
logician who knows reification when he sees it (saw a little of that with 
Lee Corbin's excellent rant.)  But it would be even better to see these 
tools applied to truly strange events that take place in the real 
world---things that Sheldrake writes about, for example.   Things that 
*happen* to us all.


Unfortunately, that's not likely to happen.  It's the knee-jerk reaction of 
most mathematicians and logicians to deride real world events as 
coincidence, when in fact, they are comparing the event to mathematical 
certainty and logical clarity.  They might say, "Why evaluate Sheldrake's 
precognitive dogs in terms of a physics model, because Sheldrake's dogs 
are not really precognitive."  That protocol (if you can call it that) 
doesn't even rise to the level of *bad* abduction.   It's a protocol that 
closes doors rather than opens them, is not designed to divine new 
information, and is neither analytic *nor* synthetic.  Worst of all, it 
claims to be science when in fact it is preordained belief.  In other 
words, it's okay to bend the rules and prejudge a variable as long as you 
first call it rubbish.


Slip-ups aside,  I would like to see a rigorous application of the powerful 
tools of philosophy, logic and mathematics applied to the study areas of 
social science, i.e. the real world.  Physicists are great at telling us 
why the rings of Saturn have braids, but terrible (or worse than that, 
dismissive) of events that occur involving consciousness. (Social 
scientists are no better---they fall back on things like structural 
functionalism).  I suggest it's time for the social scientists to let the 
logicians and mathematicians have a look at the data, and it's time for the 
logicians and mathematicians to enter the real world and make an honest 
attempt at trying to explain some strange phenomena.


That asking too much?

RM




RE: Down with Scientism

2005-06-05 Thread rmiller

At 12:16 AM 6/6/2005, you wrote:
I sometimes get into arguments with anti-science associates, who are into 
wholism, mysticism, spiritualism and so forth. They think that scientists 
are an elite with their own brand of 'ism (scientism, perhaps), which is 
no more valid than these other 'isms. I point out to these people that if 
they have figured out they have to open their mouth in order to put food 
in it, turn a handle to open a door, vibrate their vocal cords to make a 
sound, then they have performed a scientific experiment and abstracted a 
theory from it. If science is an 'ism, it's the most basic one in the world.


Rant follows from RM:

I agree.  But even the best scientists won't take a look at the data 
unless it's properly ordered (an Excel or Statistica spreadsheet would be 
nice.)   AND there has to be a chunk of *serious* money 
attached.  Personally, I'd like to see some of the bright scientific 
lights (such as found in this group, IMHO) tackle the basic problems the 
professionals can't seem to find the time to address.  Did you know for 
example, that Homeland Security spent untold millions of dollars and two 
years trying to detect Marburg (and other) virus particles (0.9 µm 
diameter) using only the great tools of C and S band radar (5 and 10 cm 
wavelengths, respectively)?  Without promising any money, can anyone here 
see a very basic flaw in that design?
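
For scale, a back-of-the-envelope check (my own numbers, not anything from the 
program itself): Rayleigh scattering from a particle far smaller than the 
wavelength falls off roughly as the fourth power of the size-to-wavelength 
ratio, so a sub-micron particle returns essentially nothing at centimetre 
wavelengths.

    from math import pi

    d = 0.9e-6                       # particle diameter in metres
    for wavelength in (0.05, 0.10):  # C band ~5 cm, S band ~10 cm
        x = pi * d / wavelength      # size parameter
        print(wavelength, x, x**4)   # x ~ 1e-5; scattering efficiency ~ x**4 ~ 1e-17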


As Stevie in Malcolm in the Middle might say. . .
Two. . .years?

RM

  





RE: Another tedious hypothetical

2005-06-05 Thread rmiller

At 03:40 PM 6/5/2005, you wrote:

RM writes

(snip)

 Now, pick one:
 1. All a Big Coincidence Proving Nothing (ABCPN)
 2. The writer obviously was privy to state secrets
 and should have been arrested.
 3. Suggests precognition of a very strange and weird sort.
 4. Might fit a QM many worlds model and should be investigated further.
 5. I have no clue how to even address something like this.

 Any takers?
LC:
I'll go for 1, all a big coincidence. Firstly, it should be taken
as the default hypothesis. Second, in my opinion no reliable evidence
has ever surfaced that points to precognition, or points to a science
theory that is an elaboration of QM/GR. In fact, numerous claims of
something new are regularly debunked by skeptics, and have picked up
the name (rightly, in my opinion) of pseudo-science.


RM:

Given a set of events that are impossible to reproduce (how can the writer 
re-create the basis for his story a second time?) we can only examine them 
after the fact in terms of probabilities.  Even if we didn't go to a 
phonebook and look up the relative number of Silards or Lenzes vs the 
more common names, it's fairly obvious that the probabilities of this being 
a chance occurrence are on the order of one in tens of millions.  Yet we 
write this kind of thing off as coincidence.  The example I gave, (of 
course) is a real story titled Blowups Happen written by a real sci fi 
author--Robert Heinlein.   Heinlein was asked about the coincidence, and he 
said he had no idea where he got the names or the idea.   The story itself 
*was* written in 1939---many years before the Manhattan District 
Project was even considered by anyone--and before Szilard began work on 
nukes and before Kistiakowski began work on his lenses.


Most who have written about this focus on the fact that the story is about 
a uranium bomb at a site in the Arizona desert.  But when one gets into 
the minutiae is where it gets truly weird.  Neither Heinlein in 1939-- nor 
most journalists who wrote about the coincidences since then--- were aware 
of the explosive lens issue, nor were they aware that most fission nukes 
have beryllium neutron reflectors.  I suspect Heinlein chose the name 
Korzybski from a semi-famous semanticist from the 1920s and 30s named 
Alfred Korzybski.  But to me, the other coincidences are just too weird to 
ignore.




LC writes:
In world war II, the FBI did question one man who published a story
involving atomic theory or atomic bombs that had some eerie similarities
to what was top secret. But they determined that it was just coincidence.
I'd be lying if I claimed to be unaffected by that report.


RM replies:
That would be the Cleve Cartmill story "Deadline," which appeared in a 1944 
issue of Astounding magazine.   Actually, atomic bombs had been accepted as a 
possibility since H.G. Wells' 1914 story "The World Set Free."  IMHO, the 
Cartmill story *is* coincidence.  The Heinlein story is *truly* weird.


RM





RE: Functionalism and People as Programs

2005-06-04 Thread rmiller

At 12:36 PM 6/4/2005, Lee Corbin wrote:

R. Miller writes

 Lee Corbin wrote:


 Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a
million cells.  When that happens, you die.   I would suggest that is a 
bad metaphor.

Well, my numbers, above, are *entirely* different from yours. One in a million
cells is a *terrible* loss. But one atom?  There are 10^14 atoms per cell.
(And 10^14 cells in a typical human.)  I would stick with my numbers.
But in case you are somehow right, and that each cell would be wrecked
by the loss of a single atom, my point can be made by relaxing the
numbers:  replace what I've written by "I'll be happy to teleport even 
if 100 trillion atoms are destroyed: a whole cell, gone."


Lee,
As I indicated earlier, I was out to lunch on that one-in-a-million 
cells/atoms deal.  As I understand it, one cell killed out of a million is 
lethal, however.



R.





Hypothetical shaman's dilemma

2005-06-04 Thread rmiller
Here's a hypothetical situation.  Your plane goes down in the wilds and 
you're rescued by a tribe indigenous to the area.  You're wearing the 
latest clothes from the GAP, so the tribe elders decide you're a candidate 
for shaman apprentice--a position that comes with nice lodging and pays 
well indeed.  The chief shaman likes you and decides to let you in on a 
secret: shamans exploit a brand of multiverse QM theory in that they do 
their magic by scanning various future branches of the tribe's world line 
in order to predict what will take place (rain, good weather, winning 
tickets at the lottery, etc.)  Getting the branch right is a bit difficult, 
but with practice one can get within a few worlds of the path on the world 
line the tribe eventually takes.  You discover that each branch is very 
deterministic and causal-based---and once on a path, one thing reliably 
leads to another.  You discover that your job as shaman is to keep 
everyone's attention, and once you've done that, to direct them down a 
reliable path.  You decide it's not too different from the corporate world, 
so you're eager to have a go at it.


You learn quickly and after a month or so, you can generally intuit (no pun 
intended) what the possibilities (paths) are for the tribe, and you're able 
to steer them as a unit down that path.  With that, the shaman retires and 
you take his place.


Then, the chief takes ill.  You saw it coming but you thought it would be 
on another path---but you were wrong.  Now you look ahead and all the paths 
forward are deterministic and end with the death of the chief.   But, the 
tribe is relying on you to make the chief well.   So you go to the retired 
shaman and ask him what to do.  He replies that you *can't* make the chief 
well---especially if all the paths forward are deterministic and all end in 
death for the chief.  But as a shaman there are things you can do that will 
shake things up, placing the entire tribe---you 
included---on a completely new track that might save the chief's 
life.  The downside: both you and the tribe will be on an entirely different 
path--determined by completely unknown histories.   You might save the 
chief's life, but on that new path, an errant virus (from a different 
history) could hit the tribe, wiping it out.  Worse, to save the chief's 
life, you as a shaman would (for a few weeks or months) have no access to 
the future.  By performing a miracle you'd be placing the entire 
tribe--as well as yourself--onto a completely different path, with 
different histories and thus different rules (though the history would 
*seem* the same).


Would you take the chance and shake things up?  Or would you keep the tribe 
on the familiar world line and end up losing the chief (and everyone's 
confidence)?


RM




Re: Equivalence

2005-06-03 Thread rmiller

At 10:23 AM 6/3/2005, Stephen Paul King wrote:

Dear R.,

   You make a very good point, one that I was hoping to communicate but 
failed. The notion of making copies is only coherent if and when we can 
compare the copied products to each other. Failing to be able to do this, 
what remains? Your suggestion seems to imply that precognition, 
coincidence and synchronicity are some form of resonance between 
decohered QM systems. Could it be that decoherence is not an all or 
nothing process; could it be that some 'parts' of a QM system decohere 
with respect to each other while others do not and/or that decoherence 
might occur at differing rates within a QM system?


Stephen


Yes, that's what I am suggesting.  The rates may remain constant---i.e. 
less than a few milliseconds (as Patrick L. earlier noted) however, I 
suspect there is a topology where regions of decoherence coexist and border 
regions of coherence.  An optics experiment might be able to test this (if 
it hasn't been done already), and it might be experimentally testable as a 
psychology experiment.


RM






- Original Message - From: rmiller [EMAIL PROTECTED]
To: Stathis Papaioannou [EMAIL PROTECTED]; 
[EMAIL PROTECTED]; everything-list@eskimo.com

Sent: Friday, June 03, 2005 1:07 AM
Subject: Equivalence



Equivalence
If the individual exists simultaneously across a many-world manifold, 
then how can one even define a copy?  If the worlds match at some points 
and differ at others, then the personality would, at a maximum, do 
likewise---though this is not necessary---or, for some perhaps, not even 
likely.  It's been long established that the inner world we navigate is 
an abstraction of the real thing---even if the real world only consists 
of one version.  If it consists of several versions, blended into one 
another, then how can we  differentiate between them?  From a 
mathematical POV, 200 worlds that are absolute copies of themselves, are 
equivalent to one world. If these worlds differ minutely in areas *not 
encountered or interacted with by the percipient (individual), then again 
we have one percipient, one world-equivalent.   I suspect it's not as 
though we're all run through a Xerox and distributed to countless 
(infinite!) places that differ broadly from one another.  I rather think 
the various worlds we inhabit are equivalent--and those that differ from 
one another do by small--though perceptible---degrees.  Some parts of the 
many-world spectrum are likely equivalent and others are not.  In 
essence, there are probably zones of equivalence (your room where there 
are no outside interferences) and zones of difference.  Even if we did 
manage to make the copies, then there would still be areas on the various 
prints that would be equivalent, i.e. the same.   Those that are 
different, we would notice and possibly tag these differences with a 
term: decoherence.  Perhaps that is all there is to it.   If this is the 
case, it would certainly explain a few things: i.e. precognition, 
coincidence and synchronicity.


R. Miller







Re: Equivalence

2005-06-03 Thread rmiller

At 11:27 AM 6/3/2005, rmiller wrote:

At 10:23 AM 6/3/2005, Stephen Paul King wrote:

Dear R.,

   You make a very good point, one that I was hoping to communicate but 
failed. The notion of making copies is only coherent if and when we can 
compare the copied products to each other. Failing to be able to do this, 
what remains? Your suggestion seems to imply that precognition, 
coincidence and synchronicity are some form of resonance between 
decohered QM systems. Could it be that decoherence is not an all or 
nothing process; could it be that some 'parts' of a QM system decohere 
with respect to each other while others do not and/or that decoherence 
might occur at differing rates within a QM system?


Stephen


Yes, that's what I am suggesting.  The rates may remain constant---i.e. 
less than a few milliseconds (as Patrick L. earlier noted) however, I 
suspect there is a topology where regions of decoherence coexist and 
border regions of coherence.  An optics experiment might be able to test 
this (if it hasn't been done already), and it might be experimentally 
testable as a psychology experiment.


More to the point---Optical experiments in QM often return counterintuitive 
results, but they support the QM math (of course).  No one has 
satisfactorily resolved the issue of measurement to everyone's liking, but 
most would agree that in some brands of QM consciousness plays a role.  On 
one side we have Fred Alan Wolf and Sarfatti who seem to take the qualia 
approach, while on the other side we have those like Roger Penrose who (I 
think) take a mechanical view (microtubules in the brain harbor 
Bose-Einstein condensates.)   All this model-building (and discussion) is 
fine, of course, but there are a number of psychological experiments out 
there that consistently return counterintuitive and heretofore 
unexplainable results.  Among them is Helmut Schmidt's retro pk 
experiment which consistently returns odd results.  The PEAR lab at 
Princeton has some startling remote viewing results, and of course, 
there's Rupert Sheldrake's work.   As far as I know, Sheldrake is the only 
one who has tried to create a model (morphic resonance), and most QM 
folks typically avoid discussing the experiments--except to deride them as 
nonscientific.  I think it may be time to revisit some of these ESP 
experiments to see if the results are telling us something in terms of QM, 
i.e. decoherence.   Changing our assumptions about decoherence, then 
applying the model to those strange experiments may clarify things.


RM



RM






- Original Message - From: rmiller [EMAIL PROTECTED]
To: Stathis Papaioannou [EMAIL PROTECTED]; 
[EMAIL PROTECTED]; everything-list@eskimo.com

Sent: Friday, June 03, 2005 1:07 AM
Subject: Equivalence



Equivalence
If the individual exists simultaneously across a many-world manifold, 
then how can one even define a copy?  If the worlds match at some 
points and differ at others, then the personality would, at a maximum, do 
likewise---though this is not necessary---or, for some perhaps, not even 
likely.  It's been long established that the inner world we navigate is 
an abstraction of the real thing---even if the real world only 
consists of one version.  If it consists of several versions, blended 
into one another, then how can we  differentiate between them?  From a 
mathematical POV, 200 worlds that are absolute copies of themselves, are 
equivalent to one world. If these worlds differ minutely in areas *not 
encountered or interacted with by the percipient (individual), then 
again we have one percipient, one world-equivalent.   I suspect it's not 
as though we're all run through a Xerox and distributed to countless 
(infinite!) places that differ broadly from one another.  I rather think 
the various worlds we inhabit are equivalent--and those that differ from 
one another do by small--though perceptible---degrees.  Some parts of 
the many-world spectrum are likely equivalent and others are not.  In 
essence, there are probably zones of equivalence (your room where there 
are no outside interferences) and zones of difference.  Even if we did 
manage to make the copies, then there would still be areas on the 
various prints that would be equivalent, i.e. the same.   Those that are 
different, we would notice and possibly tag these differences with a 
term: decoherence.  Perhaps that is all there is to it.   If this is the 
case, it would certainly explain a few things: i.e. precognition, 
coincidence and synchronicity.


R. Miller








Re: Equivalence

2005-06-03 Thread rmiller


At 01:46 PM 6/3/2005, rmiller wrote:

(snip)


What do you mean by the qualia approach? Do you mean a sort of 
dualistic view of the relationship between mind and matter? From the 
discussion at http://www.fourmilab.ch/rpkp/rhett.html it seems that 
Sarfatti suggests some combination of Bohm's interpretation of QM (where 
particles are guided by a 'pilot wave') with the idea of adding a 
nonlinear term to the Schrodinger equation (contradicting the existing 
'QM math', which is entirely linear), and he identifies the pilot wave 
with the mind and has some hand-wavey notion that life involves some 
kind of self-organizing feedback loop between the pilot wave and the 
configuration of particles (normally Bohm's interpretation says the 
configuration of particles has no effect on the pilot wave, but that's 
where the nonlinear term comes in I guess). Since Bohm's interpretation 
is wholly deterministic, I'd think Sarfatti's altered version would be 
too, the nonlinear term shouldn't change this.



Seems to me you've described the qualia approach pretty well.




while on the other
side we have those like Roger Penrose who (I think) take a mechanical 
view (microtubules in the brain harbor Bose-Einstein condensates.)


Penrose's proposal has nothing to do with consciousness collapsing the 
wavefunction, he just proposes that when a system in superposition 
crosses a certain threshold of *mass* (probably the Planck mass), then it 
collapses automatically. The microtubule idea is more speculative, but 
he's just suggesting that the brain somehow takes advantage of 
not-yet-understood quantum gravity effects to go beyond what computers 
can do, but the collapse of superposed states in the brain would still be 
gravitationally-induced.


Penrose has a *lot* of things to say about QM---and his new book has the 
best description of fibre bundles I've seen in quite a while---but no, I 
didn't mean to suggest his entire argument was based on BECs in the 
microtubules.  I suggested Penrose because his approach seems diametrically 
opposed to the qualia guys.





  All this model-building (and discussion) is fine, of
course, but there are a number of psychological experiments out there 
that consistently return counterintuitive and heretofore unexplainable 
results.
Among them, is Helmut Schmidt's retro pk experiment which consistently 
returns odd results.  The PEAR lab at Princeton has some startling 
remote viewing results, and of course, there's Rupert Sheldrake's 
work.   As far as I know, Sheldrake is the only one who has tried to 
create a model (morphic resonance), and most QM folks typically avoid 
discussing the experiments--except to deride them as nonscientific.  I 
think it may be time to revisit some of these ESP experiments to see 
if the results are telling us something in terms of QM, i.e. 
decoherence.   Changing our assumptions about decoherence, then applying 
the model to those strange experiments may clarify things.


RM


Here's a skeptical evaluation of some of the ESP experiments you mention:

http://web.archive.org/web/20040603153145/www.btinternet.com/~neuronaut/webtwo_features_psi_two.htm

Anyway, if it were possible for the mind to induce even a slight 
statistical bias in the probability of a bit flipping 1 or 0, then simply 
by picking a large enough number of trials it would be possible to very 
reliably ensure that the majority would be the number the person was 
focusing on. So by doing multiple sets with some sufficiently large 
number N of trials in each set, it would be possible to actually send 
something like a 10-digit bit string (for example, if the majority of 
digits in the first N trials came up 1, you'd have the first digit of 
your 10-digit string be a 1), something which would not require a lot of 
tricky statistical analysis to see was very unlikely to occur by chance. 
If the retro-PK effect you mentioned was real, this could even be used 
to reliably send information into the past!
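
A sketch of the test Jesse is proposing, with a hypothetical 51 percent bias 
standing in for the claimed effect:

    import random

    def flip(intended, bias=0.51):
        """One 'influenced' bit: comes up as the intended bit with
        probability bias (a purely hypothetical PK effect)."""
        return intended if random.random() < bias else 1 - intended

    def receive_bit(intended, trials=200_000):
        ones = sum(flip(intended) for _ in range(trials))
        return 1 if ones > trials / 2 else 0   # majority vote

    message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    received = [receive_bit(b) for b in message]
    print(received == message)   # True essentially every time the bias is real

With a bias that small, 200,000 trials put the majority about nine standard 
deviations away from the 50/50 line, which is the kind of transparent 
demonstration being asked for.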


I spoke with Schmidt in '96.  He told me that it is very unlikely that 
causation can be reversed, but rather that the retropk results suggest many 
worlds.


When these ESP researchers are able to do a straightforward demonstration 
like this, that's when I'll start taking these claims seriously, until 
then extraordinary claims require extraordinary evidence.


The extraordinary claims---evidence rule is good practical guidance, but 
it's crummy science.  Why should new results require an astronomical Z 
score, when proven results need only a Z of 1.96?  Think about the poor 
fellow who discovered that ulcers were caused by Helicobacter 
pylori---it took ten years for science to take him seriously, and then 
only after he drank a vial of H. pylori broth himself.   Then there's the 
fellow at U of I (Ames) who believed that Earth is being pummeled by 
snowballs--as big as houses--from space.  He was thoroughly derided (some 
demanded he be fired) for ten years or so---until a UV

Re: Do things constantly get bigger?

2005-06-03 Thread rmiller

At 01:28 PM 6/3/2005, Norman Samish wrote:

Hal,
Your phrase . . . constantly get bigger reminds me of Mark
McCutcheon's The Final Theory where he revives a notion that gravity is
caused by the expansion of atoms.
Norman


That's the excuse I use.
RM




- Original Message -
From: Hal Finney [EMAIL PROTECTED]
To: everything-list@eskimo.com
Sent: Friday, June 03, 2005 8:59 AM
Subject: Re: Many Pasts? Not according to QM...


Saibal Mitra writes:
This is actually another argument against QTI. There are only a finite
 number
 of different versions of observers. Suppose a 'subjective' time evolution
 on
 the set of all possible observers exists that is always well defined.
 Suppose we start with observer O1, and under time evolution it evolves to
 O2, which then evolves to O3 etc. Eventually an On will be mapped back to
 O1
 (if this never happened that would contradict the fact that there are only
 a
 finite number of O's). But mapping back to the initial state doesn't
 conserve memory. You can thus only subjectively experience yourself
 evolving
 for a finite amount of time.

Unless... you constantly get bigger!  Then you could escape the
limitations of the Bekenstein bound.

Hal Finney
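
The pigeonhole step in Saibal's argument can be illustrated with a toy map on 
a finite state space: iterate any deterministic rule and some state must 
eventually repeat, after which everything cycles (a sketch, not a model of 
observers):

    import random

    def find_cycle(step, start):
        """Iterate a deterministic map on a finite state space until a
        state repeats; return (cycle length, steps until the repeat)."""
        seen = {}
        state, t = start, 0
        while state not in seen:
            seen[state] = t
            state = step(state)
            t += 1
        return t - seen[state], t

    n_states = 1000                     # toy stand-in for "finitely many observer states"
    rng = random.Random(0)
    table = [rng.randrange(n_states) for _ in range(n_states)]
    print(find_cycle(lambda s: table[s], start=0))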





Re: Equivalence

2005-06-03 Thread rmiller

At 04:40 PM 6/3/2005, rmiller wrote:

At 03:25 PM 6/3/2005, you wrote:



(snip)
I spoke with Schmidt in '96.  He told me that it is very unlikely that 
causation can be reversed, but rather that the retropk results suggest 
many worlds.


But that is presumably just his personal intuition, not something that's 
based on any experimental data (like getting a message from a possible 
future or alternate world, for example).


Actually, he couldn't say why the result came out the way it did.  His 
primary detractor back then was Henry Stapp---whom Schmidt invited to take 
part in the experiment, after which Stapp modified his views somewhat.





When these ESP researchers are able to do a straightforward 
demonstration like this, that's when I'll start taking these claims 
seriously, until then extraordinary claims require extraordinary evidence.

(snip)


The issue is not the Z score in isolation, it's 1) whether we trust that 
the correct statistical analysis has been done on the data to obtain that 
Z score (whether reporting bias has been eliminated, for example)--that's 
why I suggested the test of trying to transmit a 10-digit number using 
ESP, which would be a lot more transparent--and 2) whether we trust that 
the possibility of cheating has been kept small enough, which as the 
article I linked to suggested, may not have been met in the PEAR results:



Suspicions have hardened as sceptics have looked more closely at the 
fine detail of Jahn's results. Attention has focused on the fact that one 
of the experimental subjects - believed actually to be a member of the 
PEAR lab staff - is almost single-handedly responsible for the 
significant results of the studies. It was noted as long ago as 1985, in 
a report to the US Army by a fellow parapsychologist, John Palmer of 
Durham University, North Carolina, that one subject - known as operator 
10 - was by far the best performer. This trend has continued. On the most 
recently available figures, operator 10 has been involved in 15 percent 
of the 14 million trials yet contributed a full half of the total excess 
hits. If this person's figures are taken out of the data pool, scoring in 
the "low intention" condition falls to chance, while "high intention" 
scoring drops close to the .05 boundary considered weakly significant in 
scientific results.


First, you're right about that set of the PEAR results, but operator 10 was 
involved in the original anomalies experiments---she was not involved in 
the remote viewing (as I understand).  But p < 0.05 is "weakly 
significant"?  Hm. It was good enough for Fisher . . . it's good enough for 
the courts (Daubert).
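
For concreteness, here is a rough sketch (the trial counts below are hypothetical, not the actual PEAR numbers) of how a z score for excess hits relates to that p < .05 boundary under a simple binomial chance model:

from math import sqrt, erfc

def z_and_p(hits, n, p0=0.5):
    """Normal-approximation z for `hits` successes in `n` trials with
    chance rate p0, plus the corresponding two-tailed p-value."""
    z = (hits - n * p0) / sqrt(n * p0 * (1 - p0))
    p = erfc(abs(z) / sqrt(2))   # two-tailed tail area under the normal curve
    return z, p

print(z_and_p(500_980, 1_000_000))   # z ~ 1.96, p ~ 0.05: the boundary discussed above
print(z_and_p(505_000, 1_000_000))   # z = 10.0, p vanishingly small

The point is only that z = 1.96 and the two-tailed .05 boundary are the same statement in different units; whether .05 is an adequate criterion here is the question being argued above.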



Sceptics like James Alcock and Ray Hyman say naturally it is a serious 
concern that PEAR lab staff have been acting as guinea pigs in their own 
experiments. But it becomes positively alarming if one of the staff - 
with intimate knowledge of the data recording and processing procedures - 
is getting most of the hits.


I agree, but again, I don't think Operator 10 was involved in all the 
experiments. Have any of these skeptics tried to replicate?  I believe Ray 
Hyman is an Oregon State English Prof, so he probably couldn't replicate 
some of the PEAR lab work, but surely there are others who could.



Alcock says (snip) . . . distort Jahn's results. 


If Hyman and Alcock believe Jahn et al were cheating, then they shouldn't 
mince words; instead, they should file a complaint with Princeton.




Of course, both these concerns would be present in any statistical test, 
even one involving something like the causes of ulcers, as in the quote 
you posted above, but here I would use a Bayesian approach and say that 
we should start out with some set of prior probabilities, then update 
them based on the data. Let's say that in both the tests for ulcer causes 
and the tests for ESP our estimate of the prior probability for either 
flawed statistical analysis or cheating on the part of the experimenters 
is about the same. But based on what we currently know about the way the 
world works, I'd say the prior probability of ESP existing should be far, 
far lower than the prior probability that ulcers are caused by bacteria. 
It would be extremely difficult to integrate ESP into what we currently 
know about the laws of physics and neurobiology. If someone can propose a 
reasonable theory of how it could work without throwing everything else 
we know out the window, then that could cause us to revise these priors 
and see ESP as less of an extraordinary claim, but I don't know of any 
good proposals (Sarfatti's seems totally vague on the precise nature of 
the feedback loop between the pilot wave and particles, for example, and 
on how this would relate to ESP phenomena...if he could provide a 
mathematical model or simulation showing how a simple brain-like system 
could influence the outcome of random quantum events in the context of 
his theory, then it'd be a different story).
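
A minimal sketch of that Bayesian bookkeeping (every number below is invented purely for illustration): the same likelihood ratio that would make a modestly probable hypothesis quite likely barely moves one that starts with astronomically low prior odds.

def posterior_prob(prior_prob, bayes_factor):
    """Update a prior probability by a likelihood ratio (Bayes factor)."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

bayes_factor = 100.0          # hypothetical strength of the experimental evidence
print(posterior_prob(0.10, bayes_factor))   # ulcers/bacteria: 0.10 -> roughly 0.92
print(posterior_prob(1e-8, bayes_factor))   # ESP: 1e-8 -> roughly 1e-6, still negligible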


A couple

Re: Functionalism and People as Programs

2005-06-03 Thread rmiller


At 10:58 PM 6/3/2005, you wrote:

R. Miller writes (quoting Lee Corbin):


If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a 
bad metaphor.


Losing one atom in every thousand cells is not the same as losing the cell 
itself. Cells are a constant work in progress. Bits fall off, 
transcription errors occur in the process of making proteins, radiation or 
noxious chemicals damage subcellular components, and so on. The machinery 
of the cell is constantly at work repairing all this damage. It is like a 
building project where the builders only just manage to keep up with the 
wreckers. Eventually, errors accumulate or the blueprints are corrupted 
and the cell dies. Taking the organism as a whole, the effect of all this 
activity is like the ship of Theseus: over time, even though it looks like 
the same organism, almost all the matter in it has been replaced.


That's correct, of course.  I'm finishing up a book on nuclear fallout, 
and most of my selves were obviously immersed in radiation issues rather 
than simple mathematics.  Sorry.



RM





experience = sum over histories?

2005-06-02 Thread rmiller

At 11:20 AM 6/2/2005, Hal Finney wrote:

(snip)

All these examples are meant to show that we act as though we care about
giving good experiences even though we know they will be forgotten and
not have lasting impact.  If we extend that principle more generally,
I think it follows that we should try to have good experiences on days
when we have high measure.

Hal Finney


I've always thought that QM offered great tools for social scientists, and 
here's another example.   Is it worthwhile to consider a life as the sum of 
experiences along a given track of the world line, or can we borrow from 
Feynman and view life as a sum over histories?  If so, it might explain 
false memories, love at first sight and coincidence.


Richard Miller





Equivalence

2005-06-02 Thread rmiller

Equivalence
If the individual exists simultaneously across a many-world manifold, then 
how can one even define a copy?  If the worlds match at some points and 
differ at others, then the personality would, at a maximum, do 
likewise---though this is not necessary---or, for some perhaps, not even 
likely.  It has long been established that the inner world we navigate is an 
abstraction of the real thing---even if the real world consists of only 
one version.  If it consists of several versions, blended into one another, 
then how can we differentiate between them?  From a mathematical POV, 200 
worlds that are exact copies of one another are equivalent to one world. 
If these worlds differ minutely in areas *not encountered or interacted 
with* by the percipient (individual), then again we have one percipient, one 
world-equivalent.   I suspect it's not as though we're all run through a 
Xerox and distributed to countless (infinite!) places that differ broadly 
from one another.  I rather think the various worlds we inhabit are 
equivalent--and those that differ from one another do so by small---though 
perceptible---degrees.  Some parts of the many-world spectrum are likely 
equivalent and others are not.  In essence, there are probably zones of 
equivalence (your room, where there are no outside interferences) and zones 
of difference.  Even if we did manage to make the copies, there would 
still be areas on the various prints that would be equivalent, i.e. the 
same.   Those that are different we would notice, and possibly tag these 
differences with a term: decoherence.  Perhaps that is all there is to 
it.   If this is the case, it would certainly explain a few things: e.g. 
precognition, coincidence and synchronicity.


R. Miller




Re: Functionalism and People as Programs

2005-06-02 Thread rmiller

At 11:20 PM 6/2/2005, Lee Corbin wrote:

Stephen writes

 I really do not want to be a stick-in-the-mud here, but what do we base 
 the idea that copies could exist upon?

It is a conjecture called functionalism (or one of its close variants).


Functionalism, at least in the social sciences, refers to the proposition 
that everything exists because it has a function (use).  When that notion 
came under attack in the 1960s, structural functionalists responded that 
some things have latent functions--uses that we have yet to 
divine.  Functionalism follows Scholasticism, which follows teleology.  Not 
particularly good science---or at least, not *modern* science.




 What if I, or anyone else's 1st person aspect, cannot be copied?
 If the operation of copying is impossible, what is the status of all
 of these thought experiments?


Still pretty robust.  If you accept that a chronon has a dimension equal to 
about 10^-43 seconds, then you'd have to concede that we exist as a deck 
of copies through time. No big deal, but we ARE copies of the individual we 
were 1 x 10^-43 seconds ago.  If not, where's the glue?




I notice that many people seek refuge in the no-copying theorem of
QM. Well, for them, I note that automobile travel also precludes
survival.  I can prove that to enter an automobile, drive it somewhere,
and then exit the automobile invariably changes the quantum state of
the person so reckless as to do it.

If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a bad 
metaphor.



Moreover---I am not really picky about the exact
bound state of each atom, just so long as it is able to perform the
role approximately expected of it.


Structural functionalism.  When physicists converse at a bar, they talk the 
language of sociology.




(That is, go ahead and remove any
carbon atom you like, and replace it by another carbon atom in a
different state.)

 If, and this is a HUGE if, there is something irreducibly quantum 
 mechanical to this 1st person aspect, then it follows from QM that copying 
 is not allowed. Neither a quantum state nor a qubit can be copied without 
 destroying the original.


What if there is *no* original copy?  Those who are familiar with 
Photoshop would probably argue that each layer created is still an integral 
part of the image.  If you accept Cramer's transactional model, then what 
*will* take place in the future affects the state of the past.   You 
don't suppose Julian Barbour is on to something?


R. Miller





Re: Plaga

2005-05-26 Thread rmiller

At 06:58 PM 5/24/2005, rmiller wrote:

In a recent post (5/24) I wrote. . .

I would suggest, re Plaga or anyone else discussed here, that it's not the time 
spent in a particular academic trench that makes the idea great; it's the 
quality of the insight.
As luck, coincidence or a wide specious present would have it, we have this 
story in Wired re Peter Lynds: 
http://www.wired.com/wired/archive/13.06/physics.html



R.Miller


*(and Elvis Costello was a computer programmer---the list goes on.)







RE: Sociological approach, luck, and the WTC surge cloud

2005-05-25 Thread rmiller
 Faculty Lounge over the old tritium 
storage pit behind the maintenance shed.


RMiller 





RE: Sociological approach

2005-05-24 Thread rmiller


At 07:15 AM 5/24/2005, you wrote:

Richard M writes

 I remember Plaga's original post on the Los Alamos
 archives way back when the server there was a 386.
 Most of the methods I've seen--Plaga's, Fred Alan
 Wolf's, and others--involve tweaking the mortar, so
 to speak---prying apart the wallboard to obtain
 evidence of the next room over.

 Since all I'm interested in is whether behavior systems
 incorporate knowledge of clearly defined probabilities
 that may exist in the next lane over (so to speak)--I
 would like to make a modest proposal---

 Assemble a hundred college students...in a double-blind
 experiment to determine their awareness of occult but
 clearly defined probabilities.

 Here's how: set up a random number generator that will
 return a value on a screen--say 1 through 50 (or whatever
 object set you'd like).  Tell the students it's a random
 number generator that will return a perfectly random
 result, and you'd like to see how good they are at
 guessing a value just before it appears.  Pay the
 student a nominal sum each time she gets the value
 correct.  Debit the student a small amount each time
 she gets it incorrect--so they'll have something
 invested in the outcome.

How, essentially, does this differ from the casino game of
roulette?


Because in roulette we don't have a finite set of probabilities to compare the 
responses against.  Hypothetical: we watch a roulette player at Monte 
Carlo.  Then we reach down into our case, bring out our QM probability 
viewer and switch it on.  Now, in addition to the central scene, we see ten 
versions of the same player (and roulette wheel), each differing only in 
probability from the original.  As a result, each scene shows a different 
number winning.  Luckily, we have the newest-model QM viewer, so with each 
version a number flashes on the screen that shows the probability of that 
win being the one we saw originally.  Of the ten, some would likely have a 
lower probability of occurring and some would have a higher probability.


Since we have no QM viewer, we have to stack the deck (so to speak) and 
limit the number of probabilities per run to a set quantity.  Of course, it 
could be fairly argued that MW is far more resilient and pervasive and that 
some version of us (or the machine) would choose different values and 
sets--thus muddling the results.  But on the off-chance that MW is somewhat 
more stable, I think we may see subjects who can accurately assess hidden 
probabilities.   As before, if it is found that we routinely sample 
probability space, this might involve brain processes that developed 
through evolution---but it would also suggest that consciousness exists as an 
object in probability space.  Hilgard's experiments can be interpreted to 
suggest that.  His book, incidentally, is Divided Consciousness, published by 
Wiley Interscience; reprinted in '88, I believe.




As for the latter, roulette has been played so very much
that by now there would have been almost enough time to
evolve people who were good at it.


And there are people who are good at it.  Everyone calls them "lucky," which 
really doesn't explain much.  Some of us routinely choose the wrong queue, 
others get the correct one.  (Queuing theory and probability offer good 
explanations for this sort of thing, but another factor may simply be 
an ability to sample alternate worlds.)
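
On the queuing-theory-and-probability side, a quick toy simulation (my own construction, not anything from the thread) shows what chance alone predicts: with k statistically identical queues, an uninformed chooser picks the fastest line only about 1/k of the time.

import random

def wrong_queue_rate(k=4, trials=100_000, seed=1):
    """Fraction of the time an uninformed shopper fails to pick the
    fastest of k statistically identical queues."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        waits = [rng.expovariate(1.0) for _ in range(k)]  # i.i.d. waiting times
        choice = rng.randrange(k)                         # an uninformed pick
        if waits[choice] > min(waits):                    # some other line was faster
            wrong += 1
    return wrong / trials

print(wrong_queue_rate())   # about 0.75 for k = 4, i.e. (k - 1)/k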


Richard





Re: Plaga

2005-05-24 Thread rmiller

All,
In my recent post I noted that Plaga's article has been on the xxx site 
since their server was a 386.  I want to be clear that my comment was not 
meant as a dig at Plaga, nor his paper--just that it has been around since 
'95 and I can't recall anyone commenting (constructively) on it.  As for 
astute knowledge in the QM Codex being a requirement, I seem to recall 
that, before Ed Whitten took an interest in physics, his undergrad degree 
was in History.  Einstein was a---well, we all know what Einstein was 
during his miracle year.*


I would suggest, re Plaga or anyone else discussed here, that it's not the time 
spent in a particular academic trench that makes the idea great; it's the 
quality of the insight.


R.Miller


*(and Elvis Costello was a computer programmer---the list goes on.)




Re: Plaga

2005-05-24 Thread rmiller

At 07:51 PM 5/24/2005, Hal Finney wrote:

We discussed Plaga's paper back in June, 2002.  I reported some skeptical
analysis of the paper by John Baez of sci.physics fame, at
http://www.escribe.com/science/theory/m3686.html .  I also gave some
reasons of my own why arbitrary inter-universe quantum communication
should be impossible.

Hal Finney


I don't recall that discussion; may not have been a list subscriber at that 
time.  At any rate, thanks for the info.


RMiller 





Re: Sociological approach

2005-05-23 Thread rmiller

Patrick--

At 05:04 AM 5/23/2005, you wrote:


On Sun, 22 May 2005, rmiller wrote:




I'm approaching this as a sociologist with some physics background so I'm 
focusing on what the behavior system perceives (measures). If all 
possible worlds exist in a superpositional state, then the behavior 
system should likewise exist in a superpositional state.


First, it looks like you are confusing the multiverse of QM with the 
plenitude of all theories or all UTM programs (Level 3 with Level 4 
multiverse in Tegmark's terminology). Different level 4 worlds do not 
superpose, they don't relate to each other in any way, by definition.



(snip)
 Behaviour systems are complicated enough that it is a mathematical 
certainty that they fall in the second class.

That depends on how one characterizes them.  I'm describing a behavior 
system defined as a snapshot of interactions between 
elements.  It's an abstraction, of course, but not all that far removed 
from, say, a snapshot of a neural net.



 In which case there is no way to detect that the superposition is 
happening; for all practical purposes each world goes its own sweet way.


No.  Probabilities differ by a small degree across z space, but there are 
not necessarily discrete differences.  It would be infinite in the sense 
that a continuum is infinite, or that a line contains an infinite number of 
infinitesimals.



If there are say, 10 possible worlds available to the behavioral state 
(percipient) but each world differs from the other by elements that are 
not observed by the percipient, then the behavior system is under the 
assumption that interaction is taking place with a single, unified environment.


Recalling the Copenhagen interpretation: does Chicago exist if you happen 
to be by yourself in a hotel room in Des Plaines, IL?  The answer is 
irrelevant until the behavior system begins to experience some aspect of 
Chicago.


The superposition properties depend on the information available in the 
whole system (e.g. your hotel room), not just the mind of the observer.

That's a very basic assumption, of course---one that cannot be proven 
without measurement.  Obviously the source material (whatever that is) is 
available for the behavior system to define as discrete bits of 
information, but the hard fact is, we're assuming we know the mathematical 
characteristics of this source material when we really don't.



The world is constantly in close touch with itself.


Yes it is.  But we have characterized this matrix of information based 
upon interesting experiments that study the mortar between the bricks (as 
it were).  Inferring much more gets us into great discussions of whether 
the universe is really a big computer and leads to films like, well, The 
Matrix.  As Abraham Kaplan (1964) said, when we don't know something, we 
don't know it.  And we really don't know much about the character of the 
information that constitutes the world.  Let's take a look at the 
assumptions about Chicago, for example:


For instance, if Chicago vanished in a large quantum fluctuation, photons 
which would otherwise have been reflected from its streets to the clouds 
would be different.


We're assuming that photons (rather than probabilities) exist independently 
of our observations and measurements of them.  While obviously something is 
out there that, when measured, will fit the profile of a photon, it's a 
stretch to suggest that it can exist *as we know it* independently of our 
observation.  We don't know the properties of "out there" very well, so 
perhaps we shouldn't assume that reflection and even distance are 
relevant.  Our observations that lead to the concept of entanglement also 
lead us to assume the entangled objects are separated by distance, when 
distance is, let's face it, an abstraction.  (There was only one article 
that ever called distance into question, and it appeared in Omni 
magazine a few months before its demise.  I'll say it before you: maybe 
that was the reason it finally failed---it was heading in the direction of 
Hume with no Descartes to rescue it.)


 Hence photons leaving (assumption: separation) the clouds that land 
(assumption: separation) in fields 40 miles away (assumption: distance) 
would be different, and so on. Very soon (within microseconds) the photons 
coming through your hotel window are affected, and you become 100% 
correlated with the state of Chicago (assumption: we know the phase state 
of Chicago---that it is commensurate with collapsed probabilities 
associated with a quantum fluctuation resulting in photons becoming 
separated from an object and impinging on another object, etc.).  Lots of 
collapsed probabilities here with no measurement in sight--and no proof 
that Chicago exists independent of individual measurement.  It's not just 
a limitation, it's an assumption--and maybe an improper one.  Broadly 
(I'm not talking about Copenhagen here) we generally assume that because 
the object has

Re: Sociological approach

2005-05-23 Thread rmiller



At 07:29 PM 5/23/2005, you wrote:
I think I can answer the
whole message by saying "no way" isn't always the
way. The EPR paradox was supposed to prove quantum theory was wrong
because it supposedly violated relativity. Alain Aspect proved that EPR
actually worked as advertised; however, it does so without violating
relativity. Likewise I think there are ways that information, and perhaps
other things, may be able to tunnel between worlds, despite the
decoherence problem, of which I am well aware. Besides, Plaga has an
experiment that is waiting to be tried that would prove other universes -

http://arxiv.org/abs/quant-ph/9510007 . Time will tell, but I think
history is on my side.
I remember Plaga's original post on the Los Alamos archives way back when
the server there was a 386. Most of the methods I've
seen--Plaga's, Fred Alan Wolf's, and others--involve tweaking the mortar,
so to speak---prying apart the wallboard to obtain evidence of the next
room over. 

Since all I'm interested in is whether behavior systems incorporate
knowledge of clearly defined probabilities that may exist in the next
lane over (so to speak)--I would like to make a modest
proposal---
Assemble a hundred college students (a hundred will return a respectable
Z score) in a double-blind experiment to determine their awareness of
occult but clearly defined probabilities. 
Here's how: set up a random number generator that will return a value on
a screen--say 1 through 50 (or whatever object set you'd like).
Tell the students it's a random number generator that will return a
perfectly random result, and you'd like to see how good they are at
guessing a value just before it appears. Pay the student a nominal
sum each time she gets the value correct. Debit the student a small
amount each time she gets it incorrect--so they'll have something
invested in the outcome. 

There's always a catch, and here's this one: the values aren't
really random, but are chosen (double-blind) to result in TWO
randomly-chosen sets. These sets are transferred to a disc and
placed in the RNG, which then randomly picks which set to
show--and which to keep in a state of unrealized probability. Of
course, the researcher won't know either--until after the fact.
The experiment begins. One set of values gets shown to the student
(immediately after they guess at the value). The other
set remains as an unrealized probability. 

If the students do not probe probability space, then the number
of guesses matching the unrealized set should not be
significant. On the other hand, if the students guess by actually
probing nearby probabilities (i.e. the next lane over), then the number
of guesses matching the unrealized set should be
significant. Given the nature of this experiment, I'd
support a minimum z of 1.96 as the criterion---p < .05. And no
meta-analysis allowed. 

It seems to be a relatively easy experiment to try--RNG software is
available (though some algorithms, I hear, are not as random as they
should be). 
Comments welcome---
R Miller
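
A minimal simulation of the protocol above under the null hypothesis (students as pure guessers) shows how the z criterion would be applied to matches with the unrealized set. All parameter values and names below are illustrative assumptions, except the 1-through-50 value range and the z of 1.96 cutoff, which follow the proposal.

import random
from math import sqrt

def run_session(n_trials=1000, n_values=50, seed=None):
    """One simulated session of the two-set protocol, with students
    modeled as pure guessers (the null hypothesis).  The trial count is
    an assumption; the 1-50 range follows the proposal."""
    rng = random.Random(seed)
    shown      = [rng.randint(1, n_values) for _ in range(n_trials)]  # set displayed after each guess
    unrealized = [rng.randint(1, n_values) for _ in range(n_trials)]  # set never displayed
    guesses    = [rng.randint(1, n_values) for _ in range(n_trials)]  # uninformed guessing
    hits_shown      = sum(g == s for g, s in zip(guesses, shown))       # the ostensible task
    hits_unrealized = sum(g == u for g, u in zip(guesses, unrealized))  # the quantity of interest
    p0 = 1.0 / n_values
    z = (hits_unrealized - n_trials * p0) / sqrt(n_trials * p0 * (1 - p0))
    return hits_shown, hits_unrealized, z

hits_shown, hits_unrealized, z = run_session(seed=42)
print(hits_shown, hits_unrealized, round(z, 2),
      "significant" if z >= 1.96 else "not significant")

Under the null, a z of 1.96 or more on the unrealized set should turn up in only about 2.5 percent of sessions (one-tailed), which is the sense in which a positive result would count as significant.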



Sociological approach

2005-05-22 Thread rmiller



I'm approaching this as a sociologist with some physics background so I'm 
focusing on what the behavior system perceives (measures). If all 
possible worlds exist in a superpositional state, then the behavior system 
should likewise exist in a superpositional state.  If there are say, 10 
possible worlds available to the behavioral state (percipient) but each 
world differs from the other by elements that are not observed by the 
percipient, then the behavior system is under the assumption that 
interaction is taking place with a single, unified environment.


Recalling the Copenhagen interpretation: does Chicago exist if you happen 
to be by yourself in a hotel room in Des Plaines, IL?  The answer is 
irrelevant until the behavior system begins to experience some aspect of 
Chicago.


What if Deutsch is incorrect about contact between the various 
worlds?  Suppose the behavior system normally exists across a manifold of 
closely-linked probabilities, with the similarities forming a central 
tendency and the differences existing at each edge of the distribution? If 
the behavior system can perceive only a small chunk of information at a 
time, then it may be possible that each percipient really does live in his 
or her own little world---a small island of similar probabilities 
made real from the larger cloud of probabilities.


If we quantify a behavior system in terms of elements and interactions 
between elements, we arrive at a complex but definable state.  If that 
behavior system exists across multiple worlds that differ in minute details 
(e.g. an unobserved kitchen saucer moved an inch to the side), then the 
behavior systems would exist as identical entities (or, as my friend Giu P. 
would say, *shadows*) across the similar sections.   Employing a little 
math, the behavior system could exist as an object in Z space--not too 
different from a fibre bundle in topology.  Differences among the 
realized probabilities across these shadow worlds might show up at each 
end of the normal distribution, but may still be perceived by the 
behavior system as guesses or hunches, depending upon where the primary 
centre of the behavioral bundle is at the time.  Psychology experiments in 
the 1980s suggest (to me anyway) that a psychological mechanism has evolved 
that helps the behavioral system negotiate this territory.


Bottom line, it may be useful to take a step back and challenge some of our 
primary assumptions---namely, that we exist in a discrete world in the 
multiverse and that we can never step into the one next door.   That is, 
we may be wondering why we can't visit the next room, when in fact, we 
inhabit the entire neighborhood.


RMiller







Re: An All/Nothing multiverse model

2004-11-16 Thread rmiller
This is starting to sound like a discussion Hume must have had with himself.
RM



Re: Frank Flynn

2003-11-02 Thread rmiller
It's a chatterbot. Considering the poor syntax and misspelled words, it was 
probably designed by a Russian teen.

RMiller 




Quantum accident survivor

2003-10-30 Thread rmiller
It would seem that there are a finite number of ways to survive (or die in) 
any given car accident.  If that's the case, the number of many-worlds 
branches would be limited by this value. Taken longitudinally, it would 
seem that the architecture of the world lines of these and similar events 
would limit the number of worlds associated with the individual.  That is, 
after such a life-threatening event, the number of multiple copies of the 
individual becomes limited.

R. Miller