What are monads? Can they help us to understand the dark energy universe?

2013-08-16 Thread Roger Clough
What are monads? Can they help us to understand the dark energy universe?

I don't know about you, but I have trouble seeing how it all works
from particle physics alone. Monads, or substances, or concepts are
a possible starting point for viewing the universe in the large rather
than as elementary particles. And the beautiful thing about them is
that Leibniz refused to base his philosophy on atoms, so it applies
to energy fields such as dark energy as well.

To do so we must enter the world of metaphysics, in order to understand
things more complex, or larger, than single events.

http://en.wikipedia.org/wiki/Universals#Problem_of_universals 

The noun universal contrasts with individual, while the adjective universal
contrasts with particular. Paradigmatically, universals are abstract (e.g.
humanity), whereas particulars are concrete (e.g. the person of Socrates).

Perhaps the chief obstacle to understanding Leibniz is his invention and use of
the monad, as defined in his Monadology. The reason is that Leibniz wanted to
base his metaphysics on what could be well defined as individual or unitary
concepts, atoms of thought. This implies a whole without parts, both physically
and conceptually. That means that the monads are all alive. Since they are
alive, they are constantly changing.

Atoms are excluded, one reason (my reason) being that, according to Heisenberg,
the definition of a particle must be uncertain.

Monads are these unitary or partless bodies or concepts, which cannot be
subdivided into parts without destroying their identity or meaning, thus
killing them. The mind of an individual is a monad, the brain is a monad, and
the body is a monad: self within mind, within brain, within body.

A monad can thus contain myriad monads, both by being divisible at a given
level and by containing monads at lower levels.

It may be that we divide a monad such as humanity into parts which are people 
monads. 

It may be that the Self contains a conscious monad and an unconscious one. 

There are three classes of monad: first, those called 'spirits' (which we
would call souls), which are monads with intellect; secondly, Souls, namely
animals and plants that are sensitive to the environment and can feel pain;
and lowest of all, bare, naked monads, such as rocks, which are sleepy, dull,
and unfeeling.

We cannot identify things in this way if instead we use atomic theory or
materialism, which, as we shall see, gives Leibniz's metaphysics enormous power
beyond that of materialism. Bertrand Russell, and perhaps many of you, will
find monads to be, to use Russell's words, a fairy tale; but as long as you are
careful and follow logic rather than folk ideas, you will gain enormous power
to understand the world.

Leibniz, besides providing us with monads, also essentially gives us a kit
of metaphysical tools (rather than encyclopedic treatises) that we ourselves
can use to explore the world.



Dr. Roger B Clough NIST (ret.) [1/1/2000] 
See my Leibniz site at 
http://independent.academia.edu/RogerClough 






Re: When will a computer pass the Turing Test?

2013-08-16 Thread John Clark
On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella cdemorse...@yahoo.com wrote:

 When will a computer pass the Turing Test? Are we getting close? Here is
 what the CEO of Google says: “Many people in AI believe that we’re close to
 [a computer passing the Turing Test] within the next five years,” said Eric
 Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on
 July 16, 2013.

 It could be. Five years ago I would have said we were a very long way from
any computer passing the Turing Test, but then I saw Watson and its
incredible performance on Jeopardy.  And once a true AI comes into
existence it will turn ALL scholarly predictions about what the future will
be like into pure nonsense, except for the prediction that we can't make
predictions that are worth a damn after that point.

  John K Clark



Re: When will a computer pass the Turing Test?

2013-08-16 Thread Telmo Menezes
On Fri, Aug 16, 2013 at 3:42 PM, John Clark johnkcl...@gmail.com wrote:
 On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella cdemorse...@yahoo.com
 wrote:

  When will a computer pass the Turing Test? Are we getting close? Here is
  what the CEO of Google says: “Many people in AI believe that we’re close to
  [a computer passing the Turing Test] within the next five years,” said Eric
  Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on 
  July
  16, 2013.

 It could be. Five years ago I would have said we were a very long way from
 any computer passing the Turing Test, but then I saw Watson and its
 incredible performance on Jeopardy.  And once a true AI comes into existence
 it will turn ALL scholarly predictions about what the future will be like
 into pure nonsense, except for the prediction that we can't make predictions
 that are worth a damn after that point.

I don't really find the Turing Test that meaningful, to be honest. My
main problem with it is that it is a test of our ability to build a
machine that deceives humans into believing it is another human. This
will always be a digital Frankenstein, because it will not be the
outcome of the same evolutionary context that we are. So it will have
to pretend to care about things that it is not reasonable for it to
care about.

I find it a much more worthwhile endeavour to create a machine that
can understand what we mean like a human does, without the need to
convince us that it has human emotions and so on. This machine would
actually be _more_ useful and _more_ interesting by virtue of not
passing the Turing test.

Telmo.

   John K Clark




Re: Freebits

2013-08-16 Thread Craig Weinberg
What new perspectives would you say are revealed in the paper? Can you sum 
them up?

Craig

On Friday, August 16, 2013 1:50:04 AM UTC-4, Brent wrote:

  Here's a fascinating essay by Scott Aaronson that is a really scientific,
 operational exposition on the question of 'free will'; one which takes my
 idea that if you solve the engineering problem you may solve the
 philosophical problem along the way, and does much more with it than I could.

 He also discusses how the entanglement of the brain with the environment 
 affects personal identity as in Bruno Marchal's duplication thought 
 experiments.

 He also discusses Stenger's idea of the source of the arrow of time 
 (section 5.4) and Boltzmann brains.

 Brent

 The Ghost in the Quantum Turing Machine
 Scott Aaronson
 (Submitted on 2 Jun 2013 (v1), last revised 7 Jun 2013 (this version, v2))

 In honor of Alan Turing's hundredth birthday, I unwisely set out some 
 thoughts about one of Turing's obsessions throughout his life, the question 
 of physics and free will. I focus relatively narrowly on a notion that I 
 call Knightian freedom: a certain kind of in-principle physical 
 unpredictability that goes beyond probabilistic unpredictability. Other, 
 more metaphysical aspects of free will I regard as possibly outside the 
 scope of science. I examine a viewpoint, suggested independently by Carl 
 Hoefer, Cristi Stoica, and even Turing himself, that tries to find scope 
 for freedom in the universe's boundary conditions rather than in the 
 dynamical laws. Taking this viewpoint seriously leads to many interesting 
 conceptual problems. I investigate how far one can go toward solving those 
 problems, and along the way, encounter (among other things) the No-Cloning 
 Theorem, the measurement problem, decoherence, chaos, the arrow of time, 
 the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic 
 information theory, and the Common Prior Assumption. I also compare the 
 viewpoint explored here to the more radical speculations of Roger Penrose. 
 The result of all this is an unusual perspective on time, quantum 
 mechanics, and causation, of which I myself remain skeptical, but which has 
 several appealing features. Among other things, it suggests interesting 
 empirical questions in neuroscience, physics, and cosmology; and takes a 
 millennia-old philosophical debate into some underexplored territory. 

 Comments: 85 pages (more a short book than a long essay!), 2 figures. 
 To appear in The Once and Future Turing: Computing the World, a 
 collection edited by S. Barry Cooper and Andrew Hodges. And yes, I know 
 Turing is 101 by now. v2: Corrected typos
 Subjects: Quantum Physics (quant-ph); General Literature (cs.GL); 
 History and Philosophy of Physics (physics.hist-ph)
 Cite as: arXiv:1306.0159 [quant-ph]
   (or arXiv:1306.0159v2 [quant-ph] for this version)
  



Re: When will a computer pass the Turing Test?

2013-08-16 Thread John Clark
On Fri, Aug 16, 2013 at 11:04 AM, Telmo Menezes te...@telmomenezes.com wrote:

 I don't really find the Turing Test that meaningful, to be honest.


I am certain that in your life you have met some people that you consider
brilliant and some that are as dumb as a sack full of doorknobs; if it's
not the Turing test, how did you differentiate the geniuses from the
imbeciles?

 I find it a much more worthwhile endeavour to create a machine that can
 understand what we mean


And the only way you can tell if a machine (or another human being)
understands what you mean or not is by observing the subsequent behavior.

 like a human does, without the need to convince us that it has human
 emotions


Some humans are VERY good at convincing other humans that they have certain
emotions when they really don't, like actors or con-men; evolution has
determined that skillful lying can be useful.

John K Clark



Re: When will a computer pass the Turing Test?

2013-08-16 Thread Telmo Menezes
On Fri, Aug 16, 2013 at 5:25 PM, John Clark johnkcl...@gmail.com wrote:
 On Fri, Aug 16, 2013 at 11:04 AM, Telmo Menezes te...@telmomenezes.com
 wrote:

  I don't really find the Turing Test that meaningful, to be honest.


  I am certain that in your life you have met some people that you consider
  brilliant and some that are as dumb as a sack full of doorknobs; if it's not
  the Turing test, how did you differentiate the geniuses from the imbeciles?

  I find it a much more worthwhile endeavour to create a machine that can
  understand what we mean


 And the only way you can tell if a machine (or another human being)
 understands what you mean or not is by observing the subsequent behavior.

I completely agree.

However, the Turing test is a very specific instance of a subsequent
behavior test. It's one where a machine is asked to be
indistinguishable from a human being when communicating through a text
terminal. This will entail a lot of lying (e.g. What do you look
like?). It's a hard goal, and it will surely help AI progress, but
it's not, in my opinion, an ideal goal.

  like a human does, without the need to convince us that it has human
  emotions


 Some humans are VERY good at convincing other humans that they have certain
 emotions when they really don't, like actors or con-men; evolution has
 determined that skillful lying can be useful.

Sure, it's useful. I'm actually of the opinion that hypocrisy is our
most important intellectual skill. The ability to advertise certain
norms and then not follow them helped build civilization.

But a subtle problem with the Turing test is that it hides one of the
hurdles (in my opinion, the most significant hurdle) in the progress of
AI: defining precisely what the problem is. The Turing test is a toy
test.

Cheers
Telmo.


 John K Clark




Determinism - Tricks of the Trade

2013-08-16 Thread Craig Weinberg


The objection that the terms ‘consciousness’ or ‘free will’ are used in too 
many different ways to be understandable is one of the most common 
arguments that I run into. I agree that it is a superficially valid 
objection, but on deeper consideration, it should be clear that it is a 
specious and ideologically driven detour.

The term *free will* is not as precise as a more scientific term might be 
(I tend to use *motive*, *efferent participation*, or *private intention*), 
but it isn’t nearly the problem that it is made to be in a debate. Any 
eight year old knows well enough what free will refers to. Nobody on Earth 
can fail to understand the difference between doing something by accident 
and intentionally, or between enslavement and freedom. The claim that these 
concepts are somehow esoteric doesn’t wash, unless you already have an 
expectation of a kind of verbal-logical supremacy in which nothing is 
allowed to exist until we can agree on a precise set of terms which give it 
existence. I think that this expectation is not a neutral or innocuous 
position, but actually contaminates the debate over free will, stacking the 
deck unintentionally in favor of determinism.

It’s subtle, but ontologically, it is a bit like letting a burglar talk you 
into opening up the door to the house for them since breaking a window 
would only make a mess for you to clean up. Because the argument for hard 
determinism begins with an assumption that impartiality and objectivity are 
inherently desirable in all things, it asks that you put your king in check 
from the start. The argument doubles down on this leverage with the 
implication that subjective intuition is notoriously naive and flawed, so 
that not putting your king in check from the start is framed as a weak 
position. This is the James Randi kind of double-bind. If you don’t submit 
to his rules, then you are already guilty of fraud, and part of his rules 
are that you have no say in what his rules will be.

This is the sleight of hand which is also used by Daniel Dennett.
What poses as a fair consideration of hard determinism is actually a 
stealth maneuver to create determinism – to demand that the subject submit 
to the forced disbelief system and become complicit in undermining their 
own authority. The irony is that it is only through a personal/social, 
political attack on subjectivity that the false perspective of objectivity 
can be introduced. It is accepted only by the presentation of an argument of
personal insignificance so that the subject is shamed and bullied into 
imagining itself an object. Without knowing it, one person’s will has been 
voluntarily overpowered and confounded by another person’s free will into 
accepting that this state of affairs is not really happening. In presenting 
free will and consciousness as a kind of stage magic, the materialist 
magician performs a meta-magic trick on the audience.

Some questions for determinist thinkers:

   - Can we effectively doubt that we have free will?
   Or is the doubt a mental abstraction which denies the very capacity for 
   intentional reasoning upon which the doubt itself is based?
   - How would an illusion of doubt be justified, either randomly or 
   deterministically? What function would an illusion of doubt serve, even in 
   the most blue-sky hypothetical way?
   - Why wouldn’t determinism itself be just as much of an illusion as free 
   will or doubt under determinism?

Another common derailment is to conflate the position of recognizing the 
phenomenon of subjectivity as authentic with religious faith, naive 
realism, or soft-headed sentimentality. This also is ironic, as it is an 
attack on the ego of the subject, not on the legitimacy of the issue. There 
is no reason to presume any theistic belief is implied just because 
determinism can be challenged at its root rather than on technicalities. To 
challenge determinism at its root requires (appropriately) the freedom to 
question the applicability of reductive reasoning to reason itself. The 
whole question of free will is to what extent it is an irreducible 
phenomenon which arises at the level of the individual. This question is 
already rendered unspeakable as soon as the free will advocate agrees to 
the framing of the debate in terms which require that they play the role of 
cross-examined witness to the prosecutor of determinism.

As soon as the subject is misdirected to focus their attention on the 
processes of the sub-personal level, a level where the individual by 
definition does not exist, the debate is no longer about the experience of 
volition and intention, but about physiology. The 'witness' is then invited to
give a false confession, making the same mistake that the prosecutor makes 
in calling the outcome of the debate before it even begins. The foregone 
conclusion that physiological processes define psychological experiences 
entirely is used to justify itself, and the deterministic 

Re: When will a computer pass the Turing Test?

2013-08-16 Thread meekerdb

On 8/16/2013 8:04 AM, Telmo Menezes wrote:

On Fri, Aug 16, 2013 at 3:42 PM, John Clark johnkcl...@gmail.com wrote:

On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella cdemorse...@yahoo.com
wrote:


When will a computer pass the Turing Test? Are we getting close? Here is
what the CEO of Google says: “Many people in AI believe that we’re close to
[a computer passing the Turing Test] within the next five years,” said Eric
Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on July
16, 2013.

It could be. Five years ago I would have said we were a very long way from
any computer passing the Turing Test, but then I saw Watson and its
incredible performance on Jeopardy.  And once a true AI comes into existence
it will turn ALL scholarly predictions about what the future will be like
into pure nonsense, except for the prediction that we can't make predictions
that are worth a damn after that point.

I don't really find the Turing Test that meaningful, to be honest. My
main problem with it is that it is a test on our ability to build a
machine that deceives humans into believing it is another human. This
will always be a digital Frankenstein because it will not be the
outcome of the same evolutionary context that we are. So it will have
to pretend to care about things that it is not reasonable for it to
care.


I agree, and so did Turing.  He proposed the test just as a way to make a small testable
step toward intelligence - he didn't consider it at all definitive.  Interestingly, the
test he actually proposed was to have a man and a computer each pretend to be a woman, and
success would be for the computer to succeed in fooling the tester as often as the man does.


Brent




I find it a much more worthwhile endeavour to create a machine that
can understand what we mean like a human does, without the need to
convince us that it has human emotions and so on. This machine would
actually be _more_ useful and _more_ interesting by virtue of not
passing the Turing test.

Telmo.


   John K Clark





Re: Question for the QM experts here: quantum uncertainty of the past

2013-08-16 Thread meekerdb

On 8/15/2013 6:18 AM, smi...@zonnet.nl wrote:

Citeren meekerdb meeke...@verizon.net:


On 8/14/2013 6:41 PM, smi...@zonnet.nl wrote:
I guess I don't understand that.   You seem to be considering a simple case of 
amnesia - all purely classical - so I don't see how MWI enters at all.  The 
probabilities are just ignorance uncertainty.  You're still in the same branch of the 
MWI, you just don't remember why your memory was erased (although you may read about 
it in your diary).


No, you can't say that you are in the same branch. Just because you are in the
classical regime doesn't mean that the MWI is irrelevant and we can just pretend that
the world is described by classical physics. It is only that classical physics will
give the same answer as QM when computing probabilities.


Including the probability that I'm in the same world as before?

With classical I mean a single-world theory where you just compute the probabilities
based on ignorance. This yields the same answer as assuming the MWI and then computing
the probabilities of the various outcomes.




If what you are aware of is only described by your memory state which can be encoded 
by a finite number of bits, then after a memory resetting, the state of your memory 
and the environment (which contains also the rest of your brain and body), is of the 
form:


The rest of my brain??  Why do you suppose that some part of my brain is involved in 
my memories and not other parts?  What about a scar or a tattoo.  I don't see that 
memory is separable from the environment.  In fact isn't that exactly what makes 
memory classical and makes the superposition you write below impossible to achieve? 
Your brain is a classical computer because it's not isolated from the environment.


What matters is that the state is of the form:

|memory_1>|environment_1> + |memory_2>|environment_2> + ...

with the |memory_j> orthonormal and the |environment_j> orthogonal. Such a completely
correlated state will arise due to decoherence, and the squared norms of the
|environment_j>'s are the probabilities. They behave in a purely classical
way due to this decomposition.


The brain is never isolated from the environment; if you project onto an |environment_j>
you always get a definite classical memory state, never a superposition of different
bitstrings. But it's not the case that projecting onto a definite memory state will
always yield a definite classical environment state (this is at the heart of the
Wigner's friend thought experiment).


I think Wigner's friend has been overtaken by decoherence.  While I agree with what you
say above, I disagree that the |environment_i> are macroscopically different.  I think you
are making inconsistent assumptions: that memory is something that can be reset
without resetting its physical environment, while still holding that memory is classical.




So, I am assuming that the brain is 100% classical (decoherence has run its complete
course), so that whatever the memory state of the brain is can also be found in the environment.


Then the assumption that I'm making is that whenever there is information in the 
environment that the observer is not aware of, 


What does aware of mean?...physically encoded somewhere?  present in consciousness as a 
thought?...a sentence?  You seem to be implicitly invoking a dualism whereby awareness and 
memory can be changed in ways physical things can't.


Brent

the observer will be identical as far as the description of the observer in terms of its 
memory state is concerned across the branches where that information is different. So,
if the initial state is:


|memory>|environment>

and in the environment something happens which has two possible outcomes, and you have 
yet to learn about that, then the state will evolve to a state of the form:


|memory>(|environment_1> + |environment_2>)

and not:

|memory_1>|environment_1> + |memory_2>|environment_2>

because the latter would imply that you could (in principle) tell what happened without
performing a measurement, and I don't believe in psychic phenomena.


So, the no-psychic phenomena postulate would compel you to assume that:

|memory>(|environment_1> + |environment_2>)


is the correct description of the state, and that only after you learn about the fact do
you become localized in either branch. This, applied to the memory resetting, implies what
I was arguing for.


Saibal






Re: Freebits

2013-08-16 Thread meekerdb
That the world is unpredictable because the initial conditions are unknown, and this is
different from probabilistic unpredictability, aka randomness, because you don't even
know a probability distribution.  He speculates that chaotic amplification might allow
this to account for what he calls Knightian freedom (after Frank Knight) as a component
of what is usually called free will, and which Aaronson says can mean no more than
unpredictable in principle.


Brent

On 8/16/2013 8:53 AM, Craig Weinberg wrote:

What new perspectives would you say are revealed in the paper? Can you sum them 
up?

Craig

On Friday, August 16, 2013 1:50:04 AM UTC-4, Brent wrote:

Here's a fascinating essay by Scott Aronson that is a really scientific, 
operational
exposition on the question of 'free will'; one which takes my idea that if 
you solve
the engineering problem you may solve the philosophical problem along the 
way and
does much more with it than I could.

He also discusses how the entanglement of the brain with the environment 
affects
personal identity as in Bruno Marchal's duplication thought experiments.

He also discusses Stenger's idea of the source of the arrow of time (secton 
5.4) and
Boltzmann brains.

Brent

The Ghost in the Quantum Turing Machine
Scott Aaronson
(Submitted on 2 Jun 2013 (v1), last revised 7 Jun 2013 (this version, v2))

In honor of Alan Turing's hundredth birthday, I unwisely set out some 
thoughts
about one of Turing's obsessions throughout his life, the question of 
physics and
free will. I focus relatively narrowly on a notion that I call Knightian 
freedom:
a certain kind of in-principle physical unpredictability that goes beyond
probabilistic unpredictability. Other, more metaphysical aspects of free 
will I
regard as possibly outside the scope of science. I examine a viewpoint, 
suggested
independently by Carl Hoefer, Cristi Stoica, and even Turing himself, that 
tries to
find scope for freedom in the universe's boundary conditions rather than 
in the
dynamical laws. Taking this viewpoint seriously leads to many interesting 
conceptual
problems. I investigate how far one can go toward solving those problems, 
and along
the way, encounter (among other things) the No-Cloning Theorem, the 
measurement
problem, decoherence, chaos, the arrow of time, the holographic principle, 
Newcomb's
paradox, Boltzmann brains, algorithmic information theory, and the Common 
Prior
Assumption. I also compare the viewpoint explored here to the more radical
speculations of Roger Penrose. The result of all this is an unusual 
perspective on
time, quantum mechanics, and causation, of which I myself remain skeptical, 
but
which has several appealing features. Among other things, it suggests 
interesting
empirical questions in neuroscience, physics, and cosmology; and takes a
millennia-old philosophical debate into some underexplored territory.

Comments: 85 pages (more a short book than a long essay!), 2 figures. 
To appear
in The Once and Future Turing: Computing the World, a collection edited 
by S.
Barry Cooper and Andrew Hodges. And yes, I know Turing is 101 by now. v2: 
Corrected
typos
Subjects: Quantum Physics (quant-ph); General Literature (cs.GL); 
History and
Philosophy of Physics (physics.hist-ph)
Cite as: arXiv:1306.0159 [quant-ph]
  (or arXiv:1306.0159v2 [quant-ph] for this version)







Re: Determinism - Tricks of the Trade

2013-08-16 Thread meekerdb

On 8/16/2013 11:01 AM, Craig Weinberg wrote:
Nobody on Earth can fail to understand the difference between doing something by 
accident and intentionally,


Really?  Intentionally usually means with conscious forethought.  But the Grey Walter and 
Libet experiments make it doubtful that consciousness of intention precedes the decision.


Remember when nobody on Earth could doubt that the Sun traveled across the dome of the sky 
and the Earth was flat.


Brent



Re: Determinism - Tricks of the Trade

2013-08-16 Thread Craig Weinberg


On Friday, August 16, 2013 2:45:56 PM UTC-4, Brent wrote:

  On 8/16/2013 11:01 AM, Craig Weinberg wrote:
  
 Nobody on Earth can fail to understand the difference between doing 
 something by accident and intentionally,


  Really? Intentionally usually means with conscious forethought. But
 the Grey Walter and Libet experiments make it doubtful that consciousness
 of intention precedes the decision.


Cognition is not necessary to discern the intentional from the unintentional.
A salmon that swims upstream does so with more intent than a dead salmon that
floats downstream. Intention is more primitive than thought, as thought itself
is driven by the intention to influence your environment.

The experiments that you mention do not make intention doubtful at all;
they only suggest that intention exists at the sub-personal level as well.
Breaking down events on the scale of an individual person into micro-events
at which no individual exists is the first mistake. Because intention has
everything to do with time and causality, we cannot assume that our naive
experience of time holds true outside of our own perceptual frame.

The presumption that intention is a complex computational sequence building 
up to a personal feeling of taking action voluntarily unnecessarily biases 
the bottom-up view. I think that what is actually going on is that time 
itself is a relativistic measure which extends from more fundamental 
sensory qualities of significance, rhythm, and memory. This means that 
personal time happens on a personally scaled inertial frame - just as c is 
a velocity which is infinite within any given inertial frame, our 
experience of exercising our will is roughly instantaneous. The exercise of 
will relates to our context, so seeking faster, sub-personal inertial 
frames for insight is like trying to measure the plot of a movie by 
analyzing the patterns of pixels on the screen. It does not illuminate the 
physics of will, it obscures it.



 Remember when nobody on Earth could doubt that the Sun traveled across the 
 dome of the sky and the Earth was flat.


The perception that the Earth is flat is more important than the knowledge
that the Earth is round. The sophisticated view is useful for some
purposes, but the native view is indispensable. With free will it is not
enough to know that the world is round; we must know why it seems flat, and
why the flat seeming and round seeming are both true in their own context.

Thanks,
Craig


 Brent
  



Re: When will a computer pass the Turing Test?

2013-08-16 Thread Telmo Menezes
On Fri, Aug 16, 2013 at 7:10 PM, meekerdb meeke...@verizon.net wrote:
 On 8/16/2013 8:04 AM, Telmo Menezes wrote:

 On Fri, Aug 16, 2013 at 3:42 PM, John Clark johnkcl...@gmail.com wrote:

 On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella
 cdemorse...@yahoo.com
 wrote:

 When will a computer pass the Turing Test? Are we getting close? Here
 is
 what the CEO of Google says: “Many people in AI believe that we’re
 close to
 [a computer passing the Turing Test] within the next five years,” said
 Eric
 Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on
 July
 16, 2013.

 It could be. Five years ago I would have said we were a very long way
 from
 any computer passing the Turing Test, but then I saw Watson and its
 incredible performance on Jeopardy.  And once a true AI comes into
 existence
 it will turn ALL scholarly predictions about what the future will be like
 into pure nonsense, except for the prediction that we can't make
 predictions
 that are worth a damn after that point.

 I don't really find the Turing Test that meaningful, to be honest. My
 main problem with it is that it is a test on our ability to build a
 machine that deceives humans into believing it is another human. This
 will always be a digital Frankenstein because it will not be the
 outcome of the same evolutionary context that we are. So it will have
 to pretend to care about things that it is not reasonable for it to
 care.


 I agree, and so did Turing.  He proposed the test just as a was to make a
 small testable step toward intelligence - he didn't consider it at all
 definitive.  Interestingly the test he actually proposed was to have a man
 and a computer each pretend to be a woman, and success would be for the
 computer to succeed in fooling the tester as often as the man.

Two deep mysteries in a single test!

Telmo.

 Brent




 I find it a much more worthwhile endeavour to create a machine that
 can understand what we mean like a human does, without the need to
 convince us that it has human emotions and so on. This machine would
 actually be _more_ useful and _more_ interesting by virtue of not
 passing the Turing test.

 Telmo.

John K Clark






Re: When will a computer pass the Turing Test?

2013-08-16 Thread Chris de Morsella
Telmo ~ I agree, all the Turing test does is indicate that a computer,
operating independently -- that is, without a human operator supplying any
answers during the course of the test -- can fool a human (on average) into
believing that they are dialoguing with another person and not with a computer.
While this is an important milestone in AI research, it is just a stand-in for
any actual real intelligence or awareness.
 
Increasingly, computers are not programmed in the sense of being provided with a
deterministic instruction set - no matter how complex and deep. Increasingly,
computer code is being put through its own Darwinian process using techniques
such as genetic algorithms, automata, etc. Computers are in the process of being
turned into self-learning code-generation engines that increasingly are able to
write their own operational code.
 
An AI entity would probably be able to pass the Turing test easily - not that
hard a challenge, after all, for an entity with almost immediate access to the
huge cultural memory it can contain. However, it may not care enough to try.
 
Another study -- I think by Stanford researchers, but I don't have the link
handy -- has found that the world's top supercomputers (several of which they
were able to test) are currently scoring around the same as an average human
four-year-old. The scores were very uneven across the various areas of
intelligence that standardized IQ tests for four-year-olds try to measure, as
would be expected (after all, a supercomputer is not a four-year-old person).
 
Personally I think that AI will let us know when it has arisen, by whatever
means it chooses. It will know for itself what it wants to do, and this knowing
for itself and acting for itself will be the hallmark event signalling that AI
has arrived on the scene.
 
Cheers,
-Chris D
.
 


 From: Telmo Menezes te...@telmomenezes.com
To: everything-list@googlegroups.com 
Sent: Friday, August 16, 2013 8:04 AM
Subject: Re: When will a computer pass the Turing Test?
  

On Fri, Aug 16, 2013 at 3:42 PM, John Clark johnkcl...@gmail.com wrote:
 On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella cdemorse...@yahoo.com
 wrote:

  When will a computer pass the Turing Test? Are we getting close? Here is
  what the CEO of Google says: “Many people in AI believe that we’re close to
  [a computer passing the Turing Test] within the next five years,” said Eric
  Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on 
  July
  16, 2013.

 It could be. Five years ago I would have said we were a very long way from
 any computer passing the Turing Test, but then I saw Watson and its
 incredible performance on Jeopardy.  And once a true AI comes into existence
 it will turn ALL scholarly predictions about what the future will be like
 into pure nonsense, except for the prediction that we can't make predictions
 that are worth a damn after that point.

I don't really find the Turing Test that meaningful, to be honest. My
main problem with it is that it is a test on our ability to build a
machine that deceives humans into believing it is another human. This
will always be a digital Frankenstein because it will not be the
outcome of the same evolutionary context that we are. So it will have
to pretend to care about things that it is not reasonable for it to
care.

I find it a much more worthwhile endeavour to create a machine that
can understand what we mean like a human does, without the need to
convince us that it has human emotions and so on. This machine would
actually be _more_ useful and _more_ interesting by virtue of not
passing the Turing test.

Telmo.

   John K Clark



Re: When will a computer pass the Turing Test?

2013-08-16 Thread John Clark
On Fri, Aug 16, 2013  Telmo Menezes te...@telmomenezes.com wrote:

 the Turing test is a very specific instance of a subsequent behavior
 test.


Yes, it's specific: to pass the Turing Test the machine must be
indistinguishable from a very specific type of human being, an INTELLIGENT
one; no computer can quite do that yet, although for a long time they've
been able to be indistinguishable from a comatose human being.


  It's a hard goal, and it will surely help AI progress, but it's not, in
 my opinion, an ideal goal.


If the goal of Artificial Intelligence is not a machine that behaves like an
intelligent human being, then what the hell is the goal?


  But a subtle problem with the Turing test is that it hides one of the
 hurdles (in my opinion, the most significant hurdle) with the progress in
 AI: defining precisely what the problem is.


The central problem and goal of AI couldn't be more clear: figuring out how
to make something that's smart, that is to say, behaves intelligently.

And you've told me that you don't use behavior to determine which of your
acquaintances are geniuses and which are imbeciles, but you still haven't
told me what method you do use.

   John K Clark





Re: When will a computer pass the Turing Test?

2013-08-16 Thread meekerdb

On 8/16/2013 1:25 PM, John Clark wrote:
On Fri, Aug 16, 2013  Telmo Menezes te...@telmomenezes.com wrote:


 the Turing test is a very specific instance of a subsequent behavior 
test.


Yes it's specific, to pass the Turing Test the machine must be indistinguishable from a 
very specific type of human being, an INTELLIGENT one; no computer can quite do that yet 
although for a long time they've been able  to be  indistinguishable from a comatose 
human being.


 It's a hard goal, and it will surely help AI progress, but it's not, in my
opinion, an ideal goal.


If the goal of Artificial Intelligence is not a machine that behaves like a Intelligent 
human being then what the hell is the goal?


Make a machine that is more intelligent than humans.

Brent



Re: When will a computer pass the Turing Test?

2013-08-16 Thread Telmo Menezes
On Fri, Aug 16, 2013 at 10:38 PM, meekerdb meeke...@verizon.net wrote:
 On 8/16/2013 1:25 PM, John Clark wrote:

 On Fri, Aug 16, 2013  Telmo Menezes te...@telmomenezes.com wrote:

  the Turing test is a very specific instance of a subsequent behavior
  test.


 Yes it's specific, to pass the Turing Test the machine must be
 indistinguishable from a very specific type of human being, an INTELLIGENT
 one; no computer can quite do that yet although for a long time they've been
 able  to be  indistinguishable from a comatose human being.


  It's a hard goal, and it will surely help AI progress, but it's not, in
  my opinion, an ideal goal.


 If the goal of Artificial Intelligence is not a machine that behaves like a
 Intelligent human being then what the hell is the goal?

A machine that behaves like an intelligent human will be subject to
emotions like boredom, jealousy, pride and so on. This might be fine
for a companion machine, but I also dream of machines that can deliver
us from the drudgery of survival. These machines will probably display
a more alien form of intelligence.


 Make a machine that is more intelligent than humans.

That's when things get really weird.

Telmo.

 Brent




Re: Question for the QM experts here: quantum uncertainty of the past

2013-08-16 Thread smitra

Citeren meekerdb meeke...@verizon.net:


On 8/15/2013 6:18 AM, smi...@zonnet.nl wrote:

Citeren meekerdb meeke...@verizon.net:


On 8/14/2013 6:41 PM, smi...@zonnet.nl wrote:
I guess I don't understand that.   You seem to be considering a 
simple case of amnesia - all purely classical - so I don't see 
how MWI enters at all.  The probabilities are just ignorance 
uncertainty.  You're still in the same branch of the MWI, you 
just don't remember why your memory was erased (although you may 
read about it in your diary).


No, you can't say that you are in the same branch. Just because 
you are in the clasical regime doesn't mean that the MWI is 
irrelevant and we can just pretend that the world is described by 
classical physics. It is only that classical physics will give the 
same answer as QM when computing probabilities.


Including the probability that I'm in the same world as before?

With classical I mean a single world theory where you just compute 
the probabilities based ignorance. This yields the same answer as 
assuming the MWI and then comouting the probabilities of the various 
outcomes.




If what you are aware of is only described by your memory state 
which can be encoded by a finite number of bits, then after a 
memory resetting, the state of your memory and the environment 
(which contains also the rest of your brain and body), is of the 
form:


The rest of my brain??  Why do you suppose that some part of my 
brain is involved in my memories and not other parts?  What about a 
scar or a tattoo.  I don't see that memory is separable from the 
environment.  In fact isn't that exactly what makes memory 
classical and makes the superposition you write below impossible to 
achieve? Your brain is a classical computer because it's not 
isolated from the environment.


What matter is that the state is of the form:

|memory_1>|environment_1> + |memory_2>|environment_2> + ...

with the |memory_j orthonormal and the |environment_j orthogonal. 
Such a completely correlated state will arise due to decoherence, 
the probabilities which are the squared norms of the 
|environment_j's are the probabilities. They behave in a purely 
classical way due this decomposition.


The brain is never isolated from the environment; if project onto an 
|environment_j you always get a definite classical memory state, 
never a supperposition of different bitstrings. But it's not the 
case that projecting onto a ddefinite memory state will always yield 
a definite classical environment state (this is at the heart of the  
Wigner's friend thought experiment).


I think Wigner's friend has been overtaken by decoherence.  While I 
agree with what you say above, I disagree that the |environment_i 
are macroscopically different.  I think you are making inconsistent 
assumptions: that memory is something that can be reset without 
resetting its physical environment and yet still holding that 
memory is classical.




The |environment_i> have to be different as they are entangled with 
different memory states, precisely due to rapid decoherence. The 
environment always knows exactly what happened. So, the assumption is 
not that the environment doesn't know what has been done (decoherence 
implies that the environment does know), rather that the person 
whose memory is reset doesn't know why the memory was reset.


So, if you have made a copy of the memory, the system files, etc., there 
is no problem rebooting the system later from these copies. Suppose 
that the computer is running an artificially intelligent system in a 
virtual environment, but such that this virtual environment is modeled 
on real-world data. This is actually quite similar to how the 
brain works: what you experience is a virtual world that the brain 
creates; input from your senses is used to update this model, but in 
the end it's the model of reality that you experience (which leaves 
quite a lot of room for magicians to fool you).


Then immediately after rebooting, you won't yet have any information 
that is in the environment about why you decided to reboot. You then 
have macroscopically different environments where the reason for 
rebooting is different but where you are identical. If not, and you 
assume that in each environment your mental state is different, then 
that contradicts the assumption about the ability to reboot based on the 
old system files.


So, you need to learn from the environment what happened before this 
information can affect you. This does not mean that the memory is not 
classical, rather that it's immune to noise from the environment, which 
allows it to perform reliable computations. So, while the environment, 
of course, does affect the physical state of the computer, the 
computational states of the computer are represented by macroscopic 
bits which can be kept isolated.


Saibal



Re: Question for the QM experts here: quantum uncertainty of the past

2013-08-16 Thread meekerdb

On 8/16/2013 4:57 PM, smi...@zonnet.nl wrote:

Citeren meekerdb meeke...@verizon.net:


On 8/15/2013 6:18 AM, smi...@zonnet.nl wrote:

Citeren meekerdb meeke...@verizon.net:


On 8/14/2013 6:41 PM, smi...@zonnet.nl wrote:
I guess I don't understand that.   You seem to be considering a simple case of 
amnesia - all purely classical - so I don't see how MWI enters at all.  The 
probabilities are just ignorance uncertainty.  You're still in the same branch of 
the MWI, you just don't remember why your memory was erased (although you may read 
about it in your diary).


No, you can't say that you are in the same branch. Just because you are in the 
clasical regime doesn't mean that the MWI is irrelevant and we can just pretend that 
the world is described by classical physics. It is only that classical physics will 
give the same answer as QM when computing probabilities.


Including the probability that I'm in the same world as before?

With classical I mean a single world theory where you just compute the probabilities 
based ignorance. This yields the same answer as assuming the MWI and then comouting 
the probabilities of the various outcomes.




If what you are aware of is only described by your memory state which can be encoded 
by a finite number of bits, then after a memory resetting, the state of your memory 
and the environment (which contains also the rest of your brain and body), is of the 
form:


The rest of my brain??  Why do you suppose that some part of my brain is involved 
in my memories and not other parts? What about a scar or a tattoo.  I don't see that 
memory is separable from the environment.  In fact isn't that exactly what makes 
memory classical and makes the superposition you write below impossible to achieve? 
Your brain is a classical computer because it's not isolated from the environment.


What matter is that the state is of the form:

|memory_1>|environment_1> + |memory_2>|environment_2> + ...

with the |memory_j orthonormal and the |environment_j orthogonal. Such a completely 
correlated state will arise due to decoherence, the probabilities which are the 
squared norms of the |environment_j's are the probabilities. They behave in a purely 
classical way due this decomposition.


The brain is never isolated from the environment; if project onto an |environment_j 
you always get a definite classical memory state, never a supperposition of different 
bitstrings. But it's not the case that projecting onto a ddefinite memory state will 
always yield a definite classical environment state (this is at the heart of the  
Wigner's friend thought experiment).


I think Wigner's friend has been overtaken by decoherence. While I agree with what you 
say above, I disagree that the |environment_i are macroscopically different.  I think 
you are making inconsistent assumptions: that memory is something that can be reset 
without resetting its physical environment and yet still holding that memory is 
classical.




The |environment_i have to be different as they are entangled with different memory 
states, precisely due to rapid decoherence. The environment always knows exactly what 
happened. So, the assumption is not that the environment doesn't know what has been 
done (decoherence implies that the environment does know), rather that the the person 
whose memory is reset doesn't know why the memory was reset.


So, if you have made a copy of the memory, the system files etc., there is no problem to 
reboot the system later based on these copies. Suppose that the computer is running an 
artificially intelligent system in a virtual environment, but such that this virtual 
environment is modeled based on real world data. This is actually quite similar to how 
the brain works, what you experience is a virtual world that the brain creates, input 
from your senses is used to update this model, but in the end it's the model of reality 
that you experience (which leaves quite a lot of room for magicians to fool you).


Then immediately after rebooting, you won't yet have any information that is in the 
environment about why you decided to reboot. You then have macroscopically different 
environments where the reason for rebooting is different but where you are identical. 


But that's where I disagree - not about the conclusion, but about the possibility of the 
premise.  I don't think it's possible to erase, in the quantum sense, just your memory.  
Of course you can be given a drug that erases short-term memory, and so it may be possible 
to create a drug that erases long-term memory too, i.e. induces amnesia.  But what you 
require is to erase long-term memory in a quantum sense, so that all the informational 
entanglements with the environment are erased too.  So I don't think you can get to the 
erased memory state you need.


Brent
