On 10-08-2019 10:20, Bruce Kellett wrote:
On Sat, Aug 10, 2019 at 6:16 PM smitra <[email protected]> wrote:

On 10-08-2019 09:49, Bruce Kellett wrote:

But when you cannot reach, or ignore, some of this larger number of
degrees of freedom, you end up with a mixed state. That is how
decoherence reduces the pure state to a mixture on measurement --
there are always degrees of freedom that are not recoverable --
those infamous IR photons, for example. The brain does not take all
this entanglement with the environment into account, so it is a
classical object.


And that step of tracing out the environmental degrees of freedom is
where we make a mathematical approximation in order to be able to do
practical calculations. But as you have said in this thread, the
mathematics we use to describe a system is not necessarily a good
physical representation of the system. It's not up to the brain to
decide not to take entanglement into account.

No, the brain has no choice. It simply cannot take these environmental
dof into account. So on its own reckoning, it is a classical object.

It cannot be "classical" in the conventional sense, because a truly classical object cannot exist in the known universe. What we want is to extract an object from an entangled state in a physically correct way, not in a way that merely yields negligible errors when computing expectation values of generic macroscopic observables while being physically incorrect.



We may describe the brain
as a classical object, but that doesn't make it so.

Tell me one practical way in which this makes a difference.

There is no practical difference between a collapse interpretation and the MWI, just as there is no practical difference between the standard astrophysical models and a theory that says that all planets beyond the cosmological horizon are made of green cheese.

I do think that by sticking to the relevant physics one can learn a great deal more than by invoking irrelevant models. E.g., in thermodynamics we ignore the correlations between the molecules that make the physical state a specially prepared state w.r.t. inverse time evolution. Ignoring the correlations and pretending that everything is random is good enough if we only want to predict measurement outcomes. But the fact that the state isn't just any random state follows from time reversal: entropy would go down under the reversed evolution of the actual state, whereas it would increase if the state were truly random.
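The point about specially prepared states can be made concrete with a toy reversible dynamics. This is only a sketch of my own (the Arnold cat map and all numbers here are illustrative choices, not from the discussion): forward evolution makes an ordered state look random, yet the exact inverse dynamics "un-mixes" it, while a genuinely random state of the same coarse statistics stays random under the same inverse evolution.

```python
import numpy as np

n = 64  # grid size; all choices here are illustrative

def cat(p):       # Arnold cat map: a reversible, mixing "microdynamics"
    x, y = p
    return np.stack(((2 * x + y) % n, (x + y) % n))

def cat_inv(p):   # exact inverse: the dynamics never destroys information
    x, y = p
    return np.stack(((x - y) % n, (2 * y - x) % n))

# Ordered ("low entropy") initial condition: an 8x8 block of points.
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
state0 = np.stack((xs.ravel(), ys.ravel()))

state = state0
for _ in range(10):          # forward evolution scrambles the block...
    state = cat(state)

back = state
for _ in range(10):          # ...but reversing recovers it exactly,
    back = cat_inv(back)     # because the scrambled state is correlated

def in_block(s):             # fraction of points inside the 8x8 block
    return np.mean((s[0] < 8) & (s[1] < 8))

rnd = np.random.default_rng(0).integers(0, n, size=state0.shape)
for _ in range(10):          # a truly random state has no such
    rnd = cat_inv(rnd)       # correlations: it does not "un-mix"

print(in_block(back))        # 1.0: the special state returns to order
print(in_block(rnd))         # stays small: random stays random
```

The scrambled state and a random state are indistinguishable to coarse observables, yet only the former evolves back to the ordered block — exactly the sense in which the physical state is "specially prepared" w.r.t. inverse time evolution.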

So, the well-known paradoxes in statistical physics go away when we take into account the way we've oversimplified the physics. In QM exactly the same thing happens when we use the density matrix formalism and trace out the environment: we lose the information needed to describe the inverse time evolution correctly. But unlike in statistical physics, that's not the topic under discussion here. The ignored correlations between the degrees of freedom in the brain and the environment do, however, resolve a lot of other paradoxes raised by people who argue that AI can never generate consciousness.
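The information loss from tracing out the environment can be shown in a minimal sketch (my own toy model, not from the discussion): one "brain" qubit entangled with one "environment" qubit, where the partial trace turns the pure entangled state into a proper mixture.

```python
import numpy as np

# Toy model: |psi> = (|0>|e0> + |1>|e1>)/sqrt(2) with <e0|e1> = 0.
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) +
       np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

rho = np.outer(psi, psi)             # pure-state density matrix (rank 1)

# Trace out the environment: rho_sys[i, j] = sum_k rho[(i,k), (j,k)].
rho_sys = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(rho_sys)                       # I/2: a proper mixture
print(np.trace(rho_sys @ rho_sys))   # purity 0.5 < 1: the phase
                                     # information needed for inverse
                                     # time evolution is gone
```

The reduced state I/2 is the same for infinitely many different global states, which is precisely why the traced-out description cannot be run backwards.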


Let's consider a robot with an electronic brain that runs a well-defined algorithm. Then there exists a definite fact about which algorithm the brain is running, and we may call this a classical description of the electronic brain. We include in this description the exact computational state. The exact description of the physical state involves all the entanglements of all the atoms in the electronic brain with all the other local degrees of freedom in the environment. If we then factor the computational state, represented as a bitstring, out of this state, the exact physical state can be written as:

|psi> = |b1>|e1> + |b2>|e2> + |b3>|e3> + ...

where the |bj> are normalized computational states and the |ej> are the unnormalized "environmental" states that include everything except the computational state. Then <ej|ej> is the probability for the system to be in the state |bj>|ej>. So the |ej> also include the state of the atoms in the brain, given whatever computational state the brain is in. Now suppose that the robot is conscious. Then what it knows/feels about itself and its local environment will be contained in the bitstring describing its computational state, but the mapping from computational states to awareness cannot be one-to-one. Whatever we are aware of won't precisely specify the exact computational state defined by what all the neurons are doing at a given time. This means that there exists a large number of different |bj>'s that generate exactly the same awareness for the robot.
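A small numerical sketch of this structure (all the branch vectors below are hypothetical numbers of my own, chosen so the weights sum to 1): the branch weights are the <ej|ej>, and a many-to-one awareness map groups several |bj>'s into one experience.

```python
import numpy as np

# Four hypothetical branches |bj>|ej>; only the environment vectors
# matter for the weights, so the |bj> factors are left implicit.
e = [np.sqrt(0.4) * np.array([1.0, 0.0]),
     np.sqrt(0.1) * np.array([0.0, 1.0]),
     np.sqrt(0.3) * np.array([1.0, 1.0]) / np.sqrt(2),
     np.sqrt(0.2) * np.array([1.0, -1.0]) / np.sqrt(2)]

p = np.array([v @ v for v in e])   # <ej|ej>: probability of branch j
print(p, p.sum())                  # branch weights, summing to 1

# Awareness as a many-to-one map: b1 and b2 yield experience A,
# b3 and b4 experience B. The probability of an experience is the
# sum of the weights of all computational states that generate it.
awareness = ['A', 'A', 'B', 'B']
p_A = sum(pj for pj, a in zip(p, awareness) if a == 'A')
p_B = sum(pj for pj, a in zip(p, awareness) if a == 'B')
print(p_A, p_B)
```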

Suppose that the robot is subjectively aware that it prepared the spin of an electron to be polarized in the positive x-direction, and that it knows that I measured the spin. Then, before I let the robot know the result of the measurement, the robot will find itself in the state:

|psi> = |up> + |down>

where

|up> = the sum of |bj>|ej> over all branches where |ej> contains Saibal finding spin up,

|down> = the sum of |bj>|ej> over all branches where |ej> contains Saibal finding spin down.

The robot will thus be in a superposition of two classes of worlds where the result of the spin measurement is different.
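This grouping of branches into two classes of worlds can also be sketched numerically (again with hypothetical numbers of my own; the |bj> factors are left implicit, and "Saibal finds up/down" is modeled as two orthogonal subspaces of a toy environment space):

```python
import numpy as np

# 4-dimensional toy environment space: the first two basis directions
# carry the record "Saibal finds up", the last two "Saibal finds down".
branches_up   = [np.array([0.5, 0.0, 0.0, 0.0]),
                 np.array([0.0, 0.5, 0.0, 0.0])]
branches_down = [np.array([0.0, 0.0, 0.6, 0.0]),
                 np.array([0.0, 0.0, 0.0, np.sqrt(0.14)])]

up = sum(branches_up)      # |up>: all branches containing "spin up"
down = sum(branches_down)  # |down>: all branches containing "spin down"

# Different macroscopic records live in orthogonal subspaces, so the
# two classes of worlds are orthogonal and their weights add up to 1.
print(up @ down)           # 0.0
print(up @ up, down @ down)
```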

Saibal

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/62c6efaa431c945db9f0e9158f0b0de5%40zonnet.nl.
