Re: The free will function

2012-02-10 Thread Craig Weinberg
On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/9 Craig Weinberg whatsons...@gmail.com

  On Feb 9, 9:49 am, Quentin Anciaux allco...@gmail.com wrote:
   2012/2/9 Craig Weinberg whatsons...@gmail.com

  How does a gear or lever have an opinion?

 The problems with gears and levers is dumbness.

Does putting a billion gears and levers together in an arrangement
make them less dumb? Does it start having opinions at some point?

   Does putting a billions neurons together in an arrangement make them less
   dumb ? Does it start having opinions at some point ?

  No, because neurons are living organisms in the first place, not
  gears.

 At which point does it start having an opinions ?

At every point when it is alive. We may not call them opinions because
we use that word to refer to an entire human being's experience, but
the point is that being a living cell gives it different capacities
than it has as a dead cell. When it is dead, there is no biological
sense going on, only chemical detection-reaction, which is time
reversible. Biological sense isn't time reversible.

 Why simulated neurons
 couldn't have opinions at that same point ? Vitalism ?

No, because there is no such thing as absolute simulation; there is
only imitation. Simulation is an imitation designed to invite us to
mistake it for the genuine thing - which is adequate for things we
don't care much about, but awareness cannot be a mistake. It is the
absolute primary orientation, so it can never be substituted. If you
make synthetic neurons which are very close to natural neurons on
every level, then you have a better chance of coming close enough that
the resulting organism is very similar to the original. A simulation
which is not made of something that forms a cell by itself (an actual
cell, not a virtual sculpture of a cell) probably has no possibility
of graduating from time-reversible detection-reaction to other
categories of sense, feeling, awareness, perception, and
consciousness, just as a CGI picture of a neuron has no chance of
producing milliliters of actual serotonin, acetylcholine, glutamate, etc.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: The free will function

2012-02-10 Thread Stephen P. King

On 2/10/2012 7:25 AM, Quentin Anciaux wrote:



2012/2/10 Craig Weinberg whatsons...@gmail.com


On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/9 Craig Weinberg whatsons...@gmail.com

  On Feb 9, 9:49 am, Quentin Anciaux allco...@gmail.com wrote:
   2012/2/9 Craig Weinberg whatsons...@gmail.com

  How does a gear or lever have an opinion?

 The problems with gears and levers is dumbness.

Does putting a billion gears and levers together in an
arrangement
make them less dumb? Does it start having opinions at some
point?

   Does putting a billions neurons together in an arrangement
make them less
   dumb ? Does it start having opinions at some point ?

  No, because neurons are living organisms in the first place, not
  gears.

 At which point does it start having an opinions ?

At every point when it is alive. 



That's not true. Does a single neuron have an opinion? Two? A thousand?

We may not call them opinions 



Don't switch subject.

because
we use that word to refer to an entire human being's experience, but
the point is that being a living cell makes it capable of having
different capacities than it does as a dead cell. 



Yes, and so what? A dead cell *does not* behave like a living cell; 
that's enough.


When it is dead,
there is no biological sense going on, only chemical detection-
reaction, which is time reversible. Biological sense isn't time
reversible.

 Why simulated neurons
 couldn't have opinions at that same point ? Vitalism ?

No, because there is no such thing as absolute simulation, 



There is no need for an absolute simulation... what do you mean by 
absolute ?


there is
only imitation. Simulation is an imitation


no, simulation is not imitation.

designed to invite us to
mistake it for genuine - which is adequate for things we don't care
about much, but awareness cannot be a mistake. It is the absolute
primary orientation, so it cannot ever be substituted. If you make
synthetic neurons which are very close to natural neurons on every
level, then you have a better chance of coming close enough that the
resulting organism is very similar to the original. A simulation which
is not made of something that forms a cell by itself (an actual cell,
not a virtual sculpture of a cell) probably has no possibility of
graduating from time reversible detection-reaction to other categories
of sense, feeling, awareness, perception, and consciousness, just as a
CGI picture


A CGI picture *is a picture* not a simulation.

of a neuron has no chance of producing milliliters of
actual serotonin, acetylcholine, glutamate,etc.


Is it needed for consciousness ? why ?


Craig


Hi,

How would your reasoning work for a virus? Is it alive? I think 
that the notion of being alive is not a property of the parts but of 
the whole.


Onward!

Stephen




Re: The free will function

2012-02-10 Thread Stephen P. King

On 2/10/2012 7:49 AM, Quentin Anciaux wrote:



2012/2/10 Stephen P. King stephe...@charter.net


On 2/10/2012 7:25 AM, Quentin Anciaux wrote:



2012/2/10 Craig Weinberg whatsons...@gmail.com

On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/9 Craig Weinberg whatsons...@gmail.com

  On Feb 9, 9:49 am, Quentin Anciaux allco...@gmail.com wrote:
   2012/2/9 Craig Weinberg whatsons...@gmail.com

  How does a gear or lever have an opinion?

 The problems with gears and levers is dumbness.

Does putting a billion gears and levers together in
an arrangement
make them less dumb? Does it start having opinions at
some point?

   Does putting a billions neurons together in an
arrangement make them less
   dumb ? Does it start having opinions at some point ?

  No, because neurons are living organisms in the first
place, not
  gears.

 At which point does it start having an opinions ?

At every point when it is alive. 



That's not true, does a single neuron has an opinion ? two ? a
thousand ?

We may not call them opinions 



Don't switch subject.

because
we use that word to refer to an entire human being's
experience, but
the point is that being a living cell makes it capable of having
different capacities than it does as a dead cell. 



Yes and so what ? a dead cell *does not* behave like a living
cell, that's enough.

When it is dead,
there is no biological sense going on, only chemical detection-
reaction, which is time reversible. Biological sense isn't time
reversible.

 Why simulated neurons
 couldn't have opinions at that same point ? Vitalism ?

No, because there is no such thing as absolute simulation, 



There is no need for an absolute simulation... what do you mean
by absolute ?

there is
only imitation. Simulation is an imitation


no, simulation is not imitation.

designed to invite us to
mistake it for genuine - which is adequate for things we
don't care
about much, but awareness cannot be a mistake. It is the absolute
primary orientation, so it cannot ever be substituted. If you
make
synthetic neurons which are very close to natural neurons on
every
level, then you have a better chance of coming close enough
that the
resulting organism is very similar to the original. A
simulation which
is not made of something that forms a cell by itself (an
actual cell,
not a virtual sculpture of a cell) probably has no possibility of
graduating from time reversible detection-reaction to other
categories
of sense, feeling, awareness, perception, and consciousness,
just as a
CGI picture


A CGI picture *is a picture* not a simulation.

of a neuron has no chance of producing milliliters of
actual serotonin, acetylcholine, glutamate,etc.


Is it needed for consciousness ? why ?


Craig


Hi,

How would your reasoning work for a virus? Is it alive? I
think that the notion of being alive is not a property of the
parts but of the whole.


Is it a question directed to craig or to me ?

Hi,

It is directed at both of you. :-)

Onward!

Stephen




Re: Boolean Algebra Conjecture (was: Ontological problems of COMP)

2012-02-10 Thread Stephen P. King

On 2/9/2012 3:40 PM, acw wrote:

[SPK]
We must consider the entire range of possible observers and
technological abilities. We cannot limit ourselves to humans with their
current technological abilities. Therefore we cannot put a pre-set limit
on the upper bound. I agree that the machine must be finite, but my
reasoning follows from mathematical considerations. My conjecture is
that the content of experience - the sequence of OMs - of a generic
observer is constrained to be representable by a sequence of Boolean
Algebras of propositions or Free Boolean Algebras
http://en.wikipedia.org/wiki/Free_Boolean_algebra. This restriction
ties the constraints that exist on Boolean Algebras - countability,
and the compactness of the topological spaces that are their duals -
to the finiteness of what can be observed by an observer. So we do not
have to postulate finiteness separately iff we take the Stone duality,
as it has finiteness built in.
To explain this reasoning further, I would like to point out that for a
large number of entities to be able to communicate with each other, it
is necessary that whatever the means of communication might be, it must
be such that what is true for one will be true for all; otherwise we get
a situation where "The tree is tall" is true for some observers pointing
at a giant redwood while it is false for other observers pointing
at the same giant redwood. Communication requires mutual consistency of
propositions, and this can only happen if the logic of their means of
communication is bivalent with respect to truth values. Now we can
quibble about this and discuss how in Special Relativistic situations we
can indeed have situations where "X caused Y" is true for some frames of
reference and "Y caused X" for some other frame of reference, but this
dilemma can be resolved by considering the effect of a finite speed of
light that is invariant for all observers, e.g. general
covariance.


Mostly agreed, although my category theory knowledge is limited, so I 
don't know what intuitions led you to that particular Boolean Algebra 
conjecture about the OMs. One thing that might be worth considering is 
the machine which keeps expanding: consider an AI running on an actual 
Turing Machine (unbounded memory); the actual implementation shouldn't 
matter (be it running directly in some UD or actually living in a 
physical universe where it constantly harvests resources to increase 
its memory). How does your FBA conjecture deal with such 
self-modifying, self-improving, self-extending observers? (Humans are 
not yet there; obviously we're very good at working with limited 
resources and finite bounded memory, at the cost of forgetting.) 


Hi ACW,

I have to break the Ontological Problems of COMP up into pieces 
to respond to your important questions. Please remember that this is 
just an embryo of a theory. It has not yet made it to the half-baked 
stage. ;-)


My thought is that the FBAs are not restricted in the number of 
propositions that they include and thus can grow to include new data. It 
is the means by which they are modified that goes to the answer of your 
question. This is addressed by the process of residuation explained in 
http://boole.stanford.edu/pub/ratmech.pdf. It is important to note the 
way that dynamics are treated by Pratt.
    What I am trying to do is to explicitly deal with the problem of 
time within the conjecture. I will try to explain more of this in 
subsequent mails.
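The finiteness that Stone duality builds in can be made concrete: a free Boolean algebra on a finite set of n generators has exactly 2**(2**n) elements, one for each n-ary truth function, and with countably many generators the algebra remains countable. A minimal sketch (the function name is mine, for illustration only):

```python
from itertools import product

def free_boolean_algebra(n):
    """Enumerate the free Boolean algebra on n generators.

    Each element corresponds to a distinct truth function of n
    variables, represented as a tuple of its values over all 2**n
    truth assignments, so the algebra has exactly 2**(2**n) elements.
    """
    rows = 2 ** n                              # number of truth assignments
    return list(product([0, 1], repeat=rows))  # one value per assignment

# FBA on 2 generators: 2**(2**2) = 16 elements.
print(len(free_boolean_algebra(2)))
```

So a new proposition (generator) squares the number of distinct truth functions the algebra can distinguish, while the whole structure stays finite, in line with the conjecture's finiteness requirement.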


Onward!

Stephen




Re: The free will function

2012-02-10 Thread Bruno Marchal


On 10 Feb 2012, at 13:47, Stephen P. King wrote:


On 2/10/2012 7:25 AM, Quentin Anciaux wrote:




2012/2/10 Craig Weinberg whatsons...@gmail.com
On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/9 Craig Weinberg whatsons...@gmail.com

  On Feb 9, 9:49 am, Quentin Anciaux allco...@gmail.com wrote:
   2012/2/9 Craig Weinberg whatsons...@gmail.com

  How does a gear or lever have an opinion?

 The problems with gears and levers is dumbness.

Does putting a billion gears and levers together in an  
arrangement
make them less dumb? Does it start having opinions at some  
point?


   Does putting a billions neurons together in an arrangement  
make them less

   dumb ? Does it start having opinions at some point ?

  No, because neurons are living organisms in the first place, not
  gears.

 At which point does it start having an opinions ?

At every point when it is alive.

That's not true, does a single neuron has an opinion ? two ? a  
thousand ?


We may not call them opinions

Don't switch subject.

because
we use that word to refer to an entire human being's experience, but
the point is that being a living cell makes it capable of having
different capacities than it does as a dead cell.

Yes and so what ? a dead cell *does not* behave like a living cell,  
that's enough.


When it is dead,
there is no biological sense going on, only chemical detection-
reaction, which is time reversible. Biological sense isn't time
reversible.

 Why simulated neurons
 couldn't have opinions at that same point ? Vitalism ?

No, because there is no such thing as absolute simulation,

There is no need for an absolute simulation... what do you mean  
by absolute ?



there is
only imitation. Simulation is an imitation

no, simulation is not imitation.

designed to invite us to
mistake it for genuine - which is adequate for things we don't care
about much, but awareness cannot be a mistake. It is the absolute
primary orientation, so it cannot ever be substituted. If you make
synthetic neurons which are very close to natural neurons on every
level, then you have a better chance of coming close enough that the
resulting organism is very similar to the original. A simulation  
which

is not made of something that forms a cell by itself (an actual cell,
not a virtual sculpture of a cell) probably has no possibility of
graduating from time reversible detection-reaction to other  
categories
of sense, feeling, awareness, perception, and consciousness, just  
as a

CGI picture

A CGI picture *is a picture* not a simulation.

of a neuron has no chance of producing milliliters of
actual serotonin, acetylcholine, glutamate,etc.

Is it needed for consciousness ? why ?


Craig

Hi,

How would your reasoning work for a virus? Is it alive? I  
think that the notion of being alive is not a property of the  
parts but of the whole.


Which is the very basic idea sustaining comp. But Craig seems to  
defend the opposite idea. He believes that life, sense, and  
consciousness must be present in the part to sum up in the whole. A  
mechanist will insist that it is the property of the whole which is  
responsible for the higher order aptitude, like being able to play  
chess, or having a private experience.


Yet, the cases of living and being conscious are not entirely equivalent,  
and should be treated differently. The definition of life seems to me  
conventional, but being conscious is anything but conventional.


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: Time and Concurrency Platonia? (was: Ontological Problems of COMP)

2012-02-10 Thread Stephen P. King

On 2/9/2012 3:40 PM, acw wrote:

[SPK]
I do not see how this deals effectively with the concurrency problem!
:-( Using the Platonia idea is a cheat as it is explicitly unphysical.
But physics by itself does not explain consciousness either (as shown 
by MGA). Maybe I just don't see what the concurrency problem is.

It has no constraints of thermodynamics, no limits on speeds of signals,
no explanation as to how an Ideal Form is defined, e.g. what is the
standard of its perfection, etc. It is no different from the Realm of
God in religious mythos, so what is it doing here in our rational
considerations? Forgive me, but I was raised by parents who were
Fundamentalist Believers, so please understand that I have an allergy
to ideas that remind me of the mental prison that I had to work so hard
to escape.
I'm not asking you to share all of Plato's beliefs here. It's merely a 
minimal amount of magic, not unlike the magic you have to accept 
by positing a 3p world. The amount is basically this: arithmetical (or 
computational) sentences have truth values independent of anything 
physical and consciousness/qualia may be how some such arithmetical 
truth feels from the inside. Without at least some axioms, one cannot 
get anywhere; you can't reduce arithmetic to only logic, and so on. Why 
would Platonia have to have the same constraints as our physical 
realm? It need only obey the constraints of logic and math, which 
usually means stuff that is contained within the Church Turing Thesis 
and its implications. Speed of signals? If some theory is 
inconsistent, it's only there as part of the reasoning of some other 
machine. Ideal Form? How do you define an integer, or the axioms that 
talk about arithmetic?
Popular religious mythos tend to be troublesome because they involve 
*logically impossible* properties being attributed to Gods and other 
beings - things which are inconsistent. It's not like one doesn't 
assume some axioms in any theory - they are there in almost any 
scientific theory. Yet, unlike popular religions, you're free to 
evaluate your hypotheses and use evidence and meta-reasoning to decide 
which one is more likely to be true and then try to use the results of 
such theories to predict how stuff will behave or bet on various things.
Of course, it's not hard to get trapped in a bad epistemology, and I 
can see why you'd be extra skeptical of bad theories, however nobody 
is telling you to believe a theory is true or false, instead it asks 
you to work out the consequences of each theory's axioms (as well as 
using meta-reasoning skills to weed down overly complex theories, if 
you prefer using Occam's) and then either choose to use or not use 
that particular theory depending if the results match your 
observations/expectations/standards/... (if expectations are broken, 
one would either have to update beliefs or theories or both). 

Hi ACW,

Whatever the global structure that we use to relate our ideas and 
provide explanations, it makes sense that we do not ignore problems that 
are inconvenient. A big problem that I have with Platonia is that it 
does not address the appearance of change that we finite semi-autonomous 
beings observe. The problem of time is just a corollary to this. I would 
prefer to toss out any postulates that require *any* magic. Magic is 
like arsenic: every little bit doubles the harmful effects. Why 
do we even need a notion of 3p except as a pedagogical tool? What we 
need, at least, is a stratification scheme that allows us to represent 
these differences, but we need to understand that in doing this we are 
sneaking in the notion of a 3p that is equivalent to some kind of agent 
whose only mission is to observe differences, and that is a fallacy since 
we are trying to explain observers in the first place.


Unless we have some way to handle a fundamental notion of change, 
there is no way to deal with questions of change and time. Please notice 
in how many instances we are using verbs in our considerations of COMP 
ideas. Where and how does the change implicit in the verb, as in 
"running" the UD, obtain? We cannot ignore this. I am highlighting the 
concurrency problem because it shows how this problem cannot be ignored. The 
Platonic Realm, especially the Arithmetic Realist one, is by definition 
fixed and static; nothing changes in it at all! How do we get the 
appearance of time from it? It may be possible to show how, but the 
proponents of COMP need to explain this, IMHO. It is incoherent at best 
to make statements like "the UD is running on the walls of Platonia." 
How is that even a meaningful claim?
    Another problem is the problem of space, as we see in the way that 
1p indeterminacy is defined in UDA. We read of a notion of cutting and 
pasting. Cut from where, and pasted to where? How does the difference 
in position of, say, Washington and Moscow obtain in a Realm that has 
nothing like space? Unless we have a substrate of some kind that 

Re: Free Floating entities (was: Ontological Problems of COMP)

2012-02-10 Thread Stephen P. King

On 2/9/2012 3:40 PM, acw wrote:



Another way to think of it would be in the terms of the Church Turing
Thesis, where you expect that a computation (in the Turing sense) to
have result and that result is independent of all your
implementations, such a result not being changeable in any way or by
anything - that's usually what I imagine by Platonia. It is a bit
mystical, but I find it less mystical than requiring a magical
physical substrate (even more after MGA) - to me the platonic
implementation seems to be the simplest possible explanation. If you
think it's a bad explanation that introduces some magic, I'll respond
that the primitively physical version introduces even more magic.
Making truth changeable or temporal seems to me to be a much stronger,
much more magical than what I'm considering: that arithmetical
sentences do have a truth value, regardless if we know it or not.

[SPK]
I am only asking that we put the abstract world of mathematics on an
even footing with the physical world; I am _not_ asking for a
primitive physical world. I will say again: just because a computation
is independent of any particular implementation that I, you or anyone
else is capable of creating does not eliminate the necessity that
somehow it must be implemented physically. Universality of computation
is NOT the severing of computation from its physical implementability.
This is not the same kind of claim as we see from the ultrafinitist and/or
constructivist; it is just a realistic demand that ideas cannot be free
floating entities. We cannot believe in free floating numbers any more
than we can believe in disembodied spirits and ghosts.

What is a non-primitive physical world, what is it based on? 
'Existence'? What is that, sounds primitive to me. If we accept 
'existence' as primitive, how does math and physical arise out of it? 
It seems so general to me that I can't imagine anything at all about 
it, to the point of being a God-like non-theory (although I can 
sympathize with it, just that it cannot be used as a theory because 
it's too general. We'll probably have to settle with something which 
we can discuss, such as a part of math.)
Why is 'physical' implementation so important? Those free floating 
numbers could very well represent the structures that we and our 
universe happen to be and their truths may very well sometimes be this 
thing we call 'consciousness'. As for 'spirits' - how does this 
'consciousness' thing know which body to follow and observe? How does 
it correlate that it must correlate to the physical states present in 
the brain? How does it know to appear in a robotic body or VR 
environment if someone decides to upload their mind (sometime in the 
far future)? What's this continuity of consciousness thing?
Granted that some particular mathematical structure could represent 
the physical, I'm not sure it makes sense to grant the physical any more 
meaning than that which we (our bodies) observe being part of. 


Hi ACW,

A non-primitive world would be a world that is defined by a set 
of communications between observers, however the observers are defined. 
The notion of a cyclical gossiping as used in graph theory gives a 
nice model of how this would work and it even shows a nice toy model of 
thermodynamic entropy. See #58 here 
http://books.google.com/books?id=SbZKSZ-1qrwCpg=PA32lpg=PA32dq=cyclical+gossiping+graph+theorysource=blots=NAvDjdj7u-sig=kk03XrGRBzdVWI09bh_-yrACM64hl=ensa=Xei=jCI1T8TpM4O4tweVgMG_Agsqi=2ved=0CC8Q6AEwAg#v=onepageqf=false 
for a statement of this idea. Also see 
http://mathworld.wolfram.com/Gossiping.html
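The gossiping model can be made concrete: the classic result is that 2n - 4 calls suffice (and are necessary) for n >= 4 gossipers to all learn every secret. A minimal sketch of the standard four-hub scheme plus a checker; the function names are my own, for illustration:

```python
def gossip_calls(n):
    """Build a call sequence after which all n people (n >= 4) know
    every secret, achieving the known optimum of 2n - 4 calls: four
    'hub' people pool everything among themselves, while the other
    n - 4 people phone in beforehand and are called back afterward."""
    others = list(range(4, n))
    calls = [(p, 0) for p in others]           # others report to hub 0
    calls += [(0, 1), (2, 3), (0, 2), (1, 3)]  # hubs exchange all secrets
    calls += [(1, p) for p in others]          # a hub calls the others back
    return calls

def knows_all(n, calls):
    """Simulate the calls; each call merges what both parties know."""
    know = [{i} for i in range(n)]             # person i starts with secret i
    for a, b in calls:
        know[a] |= know[b]                     # after a call, both parties
        know[b] |= know[a]                     # share everything they know
    return all(len(k) == n for k in know)
```

For n = 6 this produces 8 calls (2*6 - 4), after which knows_all reports that every person holds all six secrets; the quadratic naive scheme (everyone calls everyone) is far from this optimum.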


Onward!

Stephen




Re: The free will function

2012-02-10 Thread Stephen P. King

On 2/10/2012 8:36 AM, Bruno Marchal wrote:


On 10 Feb 2012, at 13:47, Stephen P. King wrote:


On 2/10/2012 7:25 AM, Quentin Anciaux wrote:



2012/2/10 Craig Weinberg whatsons...@gmail.com


On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/9 Craig Weinberg whatsons...@gmail.com

  On Feb 9, 9:49 am, Quentin Anciaux allco...@gmail.com wrote:
   2012/2/9 Craig Weinberg whatsons...@gmail.com

  How does a gear or lever have an opinion?

 The problems with gears and levers is dumbness.

Does putting a billion gears and levers together in an
arrangement
make them less dumb? Does it start having opinions at
some point?

   Does putting a billions neurons together in an arrangement
make them less
   dumb ? Does it start having opinions at some point ?

  No, because neurons are living organisms in the first place, not
  gears.

 At which point does it start having an opinions ?

At every point when it is alive. 



That's not true, does a single neuron has an opinion ? two ? a 
thousand ?


We may not call them opinions 



Don't switch subject.

because
we use that word to refer to an entire human being's experience, but
the point is that being a living cell makes it capable of having
different capacities than it does as a dead cell. 



Yes and so what ? a dead cell *does not* behave like a living cell, 
that's enough.


When it is dead,
there is no biological sense going on, only chemical detection-
reaction, which is time reversible. Biological sense isn't time
reversible.

 Why simulated neurons
 couldn't have opinions at that same point ? Vitalism ?

No, because there is no such thing as absolute simulation, 



There is no need for an absolute simulation... what do you mean by 
absolute ?


there is
only imitation. Simulation is an imitation


no, simulation is not imitation.

designed to invite us to
mistake it for genuine - which is adequate for things we don't care
about much, but awareness cannot be a mistake. It is the absolute
primary orientation, so it cannot ever be substituted. If you make
synthetic neurons which are very close to natural neurons on every
level, then you have a better chance of coming close enough that the
resulting organism is very similar to the original. A simulation
which
is not made of something that forms a cell by itself (an actual
cell,
not a virtual sculpture of a cell) probably has no possibility of
graduating from time reversible detection-reaction to other
categories
of sense, feeling, awareness, perception, and consciousness,
just as a
CGI picture


A CGI picture *is a picture* not a simulation.

of a neuron has no chance of producing milliliters of
actual serotonin, acetylcholine, glutamate,etc.


Is it needed for consciousness ? why ?


Craig


Hi,

How would your reasoning work for a virus? Is it alive? I think 
that the notion of being alive is not a property of the parts but 
of the whole.


Which is the very basic idea sustaining comp. But Craig seems to 
defend the opposite idea. He believes that life, sense, and 
consciousness must be present in the part to sum up in the whole. A 
mechanist will insist that it is the property of the whole which is 
responsible for the higher order aptitude, like being able to play 
chess, or having a private experience.


Hi Bruno,

    No. Craig can be considered to be exploring the implications of 
Chalmers's claim that consciousness is a fundamental property of the 
physical, like mass, spin and charge, i.e. it is not emergent from 
matter. His concept of sense is not much different from your 1p or the 
content of a simulation.




Yet, the cases of living and being conscious are not entirely equivalent, 
and should be treated differently. The definition of life seems to me 
conventional, but being conscious is anything but conventional.


    We agree on that! Living does seem to be 3p definable, while being 
conscious is only 1p definable.


Onward!

Stephen




Re: The free will function

2012-02-10 Thread David Nyman
On 10 February 2012 14:08, Stephen P. King stephe...@charter.net wrote:

 No. Craig can be considered to be exploring the implications of
 Chalmer's claim that consciousness is a fundamental property of the
 physical, like mass, spin and charge, i.e. it is not emergent from matter.
 His concept of sense is not much different from your 1p or the content of
 a simulation.

I disagree with this assessment, I think.  ISTM that equating
consciousness with other physical properties inevitably puts one in
the position of having to build up composite entities from the
properties of their components - hence the notorious grain and
binding problems.  The theology of comp, on the other hand, seems
to imply that at some ultimate level consciousness is a symmetric
unity, but that this symmetry is broken, by the internal logic of
comp, into an infinity of views.  Of course, this latter idea can only
make sense in terms of 1p; from the 3p perspective, all that exists is
computation.

David

 On 2/10/2012 8:36 AM, Bruno Marchal wrote:


 On 10 Feb 2012, at 13:47, Stephen P. King wrote:

 On 2/10/2012 7:25 AM, Quentin Anciaux wrote:



 2012/2/10 Craig Weinberg whatsons...@gmail.com

 On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:
  2012/2/9 Craig Weinberg whatsons...@gmail.com
 
   On Feb 9, 9:49 am, Quentin Anciaux allco...@gmail.com wrote:
2012/2/9 Craig Weinberg whatsons...@gmail.com
 
   How does a gear or lever have an opinion?
 
  The problems with gears and levers is dumbness.
 
 Does putting a billion gears and levers together in an arrangement
 make them less dumb? Does it start having opinions at some point?
 
Does putting a billions neurons together in an arrangement make them
less
dumb ? Does it start having opinions at some point ?
 
   No, because neurons are living organisms in the first place, not
   gears.
 
  At which point does it start having an opinions ?

 At every point when it is alive.


 That's not true, does a single neuron has an opinion ? two ? a thousand ?


 We may not call them opinions


 Don't switch subject.


 because
 we use that word to refer to an entire human being's experience, but
 the point is that being a living cell makes it capable of having
 different capacities than it does as a dead cell.


 Yes and so what ? a dead cell *does not* behave like a living cell, that's
 enough.


 When it is dead,
 there is no biological sense going on, only chemical detection-
 reaction, which is time reversible. Biological sense isn't time
 reversible.

  Why simulated neurons
  couldn't have opinions at that same point ? Vitalism ?

 No, because there is no such thing as absolute simulation,


 There is no need for an absolute simulation... what do you mean by
 absolute ?



 there is
 only imitation. Simulation is an imitation


 no, simulation is not imitation.


 designed to invite us to
 mistake it for genuine - which is adequate for things we don't care
 about much, but awareness cannot be a mistake. It is the absolute
 primary orientation, so it cannot ever be substituted. If you make
 synthetic neurons which are very close to natural neurons on every
 level, then you have a better chance of coming close enough that the
 resulting organism is very similar to the original. A simulation which
 is not made of something that forms a cell by itself (an actual cell,
 not a virtual sculpture of a cell) probably has no possibility of
 graduating from time reversible detection-reaction to other categories
 of sense, feeling, awareness, perception, and consciousness, just as a
 CGI picture


 A CGI picture *is a picture* not a simulation.


 of a neuron has no chance of producing milliliters of
 actual serotonin, acetylcholine, glutamate,etc.


 Is it needed for consciousness ? why ?



 Craig

 Hi,

     How would your reasoning work for a virus? Is it alive? I think that
 the notion of being alive is not a property of the parts but of the whole.


 Which is the very basic idea sustaining comp. But Craig seems to defend the
 opposite idea. He believes that life, sense, and consciousness must be
 present in the part to sum up in the whole. A mechanist will insist that it
 is the property of the whole which is responsible for the higher order
 aptitude, like being able to play chess, or having a private experience.


 Hi Bruno,

     No. Craig can be considered to be exploring the implications of
 Chalmers's claim that consciousness is a fundamental property of the
 physical, like mass, spin and charge, i.e. it is not emergent from matter.
 His concept of sense is not much different from your 1p or the content of
 a simulation.



 Yet, the case of living and conscious are not entirely equivalent, and
 should be treated differently. The definition of life seems to me
 conventional, but being conscious is everything but conventional.


     We agree on that! Living does seem to be 3p definable while
 conscious is only 1p definable.

 Onward!

 Stephen

Re: The free will function

2012-02-10 Thread Stephen P. King

On 2/10/2012 9:24 AM, David Nyman wrote:

On 10 February 2012 14:08, Stephen P. King stephe...@charter.net wrote:


No. Craig can be considered to be exploring the implications of
Chalmers's claim that consciousness is a fundamental property of the
physical, like mass, spin and charge, i.e. it is not emergent from matter.
His concept of sense is not much different from your 1p or the content of
a simulation.

I disagree with this assessment, I think.  ISTM that equating
consciousness with other physical properties inevitably puts one in
the position of having to build up composite entities from the
properties of their components - hence the notorious grain and
binding problems.  The theology of comp, on the other hand, seems
to imply that at some ultimate level consciousness is a symmetric
unity, but that this symmetry is broken, by the internal logic of
comp, into an infinity of views.  Of course, this latter idea can only
make sense in terms of 1p; from the 3p perspective, all that exists is
computation.

David


Hi David,

I don't disagree with your remark, but you are addressing a 
different though related issue from Craig's. The idea of Chalmers's claim 
is that consciousness is not an emergent property, like temperature for 
example. But this is not in principle incompatible with the idea that 
at some ultimate level consciousness is a symmetric unity, but that 
this symmetry is broken, by the internal logic of comp, into an infinity 
of views, except that at the level of symmetric unity consciousness 
per se vanishes as the distinctions of and between the infinity of 
views (those are the 1p!) disappear. This is the idea of neutrality 
that I have been discussing, as in neutral monism. The idea of vacuum 
gauge symmetry as it is used in physics is analogous. There was a fellow 
who published a paper with a similar idea and chatted with us for a 
bit early last year, if I recall correctly. Russell Standish had some 
interesting comments on this.
    My difficulty is that at the level of the unbroken symmetry we have 
to be careful that we do not consider implications that are only 
meaningful in the broken or fragmented perspective.


Onward!

Stephen




Re: The free will function

2012-02-10 Thread Craig Weinberg
On Feb 10, 7:25 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/10 Craig Weinberg whatsons...@gmail.com


How does a gear or lever have an opinion?

   The problems with gears and levers is dumbness.

  Does putting a billion gears and levers together in an arrangement
  make them less dumb? Does it start having opinions at some point?

 Does putting a billions neurons together in an arrangement make them
  less
 dumb ? Does it start having opinions at some point ?

No, because neurons are living organisms in the first place, not
gears.

   At which point does it start having an opinions ?

  At every point when it is alive.

 That's not true, does a single neuron has an opinion ? two ? a thousand ?

You asked me a question, I answered it, and now you claim that 'it's
not true', then you go on asking the same question again. On what do
you base your accusation?


  We may not call them opinions

 Don't switch subject.

I'm not in any way switching the subject. I'm clarifying that the
question relies on a straw man of consciousness which reduces a
complex human subjective phenomenon like 'opinions' to a binary
silhouette. Do cats have opinions? Do chimpanzees? At what point do
hominids begin to have opinions? When do they begin to have
personality? When do humans become human? All of these are red
herrings because they project an objective function on a subjective
understanding.

The point of multisense realism is to show how our default
epistemologies are rooted in our own frame of reference so that there
is no objective point where a person becomes a non-person through
injury or deficiency, or a neuron has a human feeling by itself. These
questions make the wrong assumptions from the start.

What we do know is that human opinions are associated with one thing
only - living human brains. We know that living human brains are only
made of living neurons. We have not yet found anything that we can do
to inorganic molecules that will turn them into living neurons. This means 
that we have no reason to presume that an inorganic non-cell can ever
be expected to do what cells do, any more than we can expect ammonia
to do what milk does.


  because
  we use that word to refer to an entire human being's experience, but
  the point is that being a living cell makes it capable of having
  different capacities than it does as a dead cell.

 Yes and so what ? a dead cell *does not* behave like a living cell, that's
 enough.

How do you know? What makes you think that things can be defined only
by their behaviors? A person can behave like a brick wall, does that
make it enough to make them a brick wall?


  When it is dead,
  there is no biological sense going on, only chemical detection-
  reaction, which is time reversible. Biological sense isn't time
  reversible.

   Why simulated neurons
   couldn't have opinions at that same point ? Vitalism ?

  No, because there is no such thing as absolute simulation,

 There is no need for an absolute simulation... what do you mean by
 absolute ?

A copy which simulates the original in every way.


  there is
  only imitation. Simulation is an imitation

 no, simulation is not imitation.

Please explain.


  designed to invite us to
  mistake it for genuine - which is adequate for things we don't care
  about much, but awareness cannot be a mistake. It is the absolute
  primary orientation, so it cannot ever be substituted. If you make
  synthetic neurons which are very close to natural neurons on every
  level, then you have a better chance of coming close enough that the
  resulting organism is very similar to the original. A simulation which
  is not made of something that forms a cell by itself (an actual cell,
  not a virtual sculpture of a cell) probably has no possibility of
  graduating from time reversible detection-reaction to other categories
  of sense, feeling, awareness, perception, and consciousness, just as a
  CGI picture

 A CGI picture *is a picture* not a simulation.

Neither is an AGI application. That's what I'm saying. Simulation is a
casual notion that doesn't stand up to further inspection.


  of a neuron has no chance of producing milliliters of
  actual serotonin, acetylcholine, glutamate,etc.

 Is it needed for consciousness ? why ?

It's needed for human consciousness I think because consciousness is
an event, and those molecules are like the BIOS of the whole human OS.
Not the molecules themselves, but the band of experiences/qualia which
those molecules can tune into. Think of those experiences as the
ancestors of our contemporary whole-brain scale experiences.

Craig


Re: The free will function

2012-02-10 Thread Quentin Anciaux
2012/2/10 Craig Weinberg whatsons...@gmail.com

 On Feb 10, 7:25 am, Quentin Anciaux allco...@gmail.com wrote:
  2012/2/10 Craig Weinberg whatsons...@gmail.com

 
 How does a gear or lever have an opinion?
 
The problems with gears and levers is dumbness.
 
   Does putting a billion gears and levers together in an
 arrangement
   make them less dumb? Does it start having opinions at some
 point?
 
  Does putting a billions neurons together in an arrangement make
 them
   less
  dumb ? Does it start having opinions at some point ?
 
 No, because neurons are living organisms in the first place, not
 gears.
 
At which point does it start having an opinions ?
 
   At every point when it is alive.
 
  That's not true, does a single neuron has an opinion ? two ? a thousand ?

 You asked me a question, I answered it, and now you claim that 'it's
 not true', then you go on asking the same question again. On what do
 you base your accusation?


On the fact that a single neuron has no opinions whatsoever... You asked
how many gears were required... the straw man is there.



 
   We may not call them opinions
 
  Don't switch subject.

 I'm not in any way switching the subject.


you are


 I'm clarifying that the
 question relies on a straw man of consciousness


You did begin with the straw man...


 which reduces a
 complex human subjective phenomenon like 'opinions' to a binary
 silhouette. Do cats have opinions? Do chimpanzees? At what point do
 hominids begin to have opinions? When do they begin to have
 personality? When do humans become human? All of these are red
 herrings because they project an objective function on a subjective
 understanding.


Can a complex program with deep self-referential computation, connected to
the world, be conscious like a human is? You answer no, without giving any
reason for that. So it's just bullshit... I'm awaiting your proof that it
is not possible... not your usual way of slipping out of the subject.



 The point of multisense realism is to show how our default
 epistemologies are rooted in our own frame of reference so that there
 is no objective point where a person becomes a non-person through
 injury or deficiency, or a neuron has a human feeling by itself. These
 questions make the wrong assumptions from the start.

 What we do know is that human opinions are associated with one thing
 only - living human brains. We know that living human brains are only
 made of living neurons. We have not yet found anything that we can do
 to inorganic molecules will turn them into living neurons. This means
 that we have no reason to presume that an inorganic non-cell can ever
 be expected to do what cells do, any more than we can expect ammonia
 to do what milk does.

 
   because
   we use that word to refer to an entire human being's experience, but
   the point is that being a living cell makes it capable of having
   different capacities than it does as a dead cell.
 
  Yes and so what ? a dead cell *does not* behave like a living cell,
 that's
  enough.

 How do you know? What makes you think that things can be defined only
 by their behaviors? A person can behave like a brick wall, does that
 make it enough to make them a brick wall?

 
   When it is dead,
   there is no biological sense going on, only chemical detection-
   reaction, which is time reversible. Biological sense isn't time
   reversible.
 
Why simulated neurons
couldn't have opinions at that same point ? Vitalism ?
 
   No, because there is no such thing as absolute simulation,
 
  There is no need for an absolute simulation... what do you mean by
  absolute ?

 A copy which simulates the original in every way.

 
   there is
   only imitation. Simulation is an imitation
 
  no, simulation is not imitation.

 Please explain.

 
   designed to invite us to
   mistake it for genuine - which is adequate for things we don't care
   about much, but awareness cannot be a mistake. It is the absolute
   primary orientation, so it cannot ever be substituted. If you make
   synthetic neurons which are very close to natural neurons on every
   level, then you have a better chance of coming close enough that the
   resulting organism is very similar to the original. A simulation which
   is not made of something that forms a cell by itself (an actual cell,
   not a virtual sculpture of a cell) probably has no possibility of
   graduating from time reversible detection-reaction to other categories
   of sense, feeling, awareness, perception, and consciousness, just as a
   CGI picture
 
  A CGI picture *is a picture* not a simulation.

 Neither is an AGI application. That's what I'm saying. Simulation is a
 casual notion that doesn't stand up to further inspection.

 
   of a neuron has no chance of producing milliliters of
   actual serotonin, acetylcholine, glutamate,etc.
 
  Is it needed for consciousness ? why ?

 It's needed for human consciousness I think because 

Re: The free will function

2012-02-10 Thread Craig Weinberg
On Feb 10, 8:17 am, Stephen P. King stephe...@charter.net wrote:

      Hi,

          How would your reasoning work for a virus? Is it alive? I
      think that the notion of being alive is not a property of the
      parts but of the whole.

  Is it a question directed to craig or to me ?

 Hi,

      It is directed at both of you. :-)

 Onward!

Hi Stephen,

Right, not the parts but the whole, but also who is looking at the
whole. It's the same question as substitution level. Where does yellow
green end and green yellow begin? It depends what color it's sitting
next to and how sensitive someone's vision is to color. It's all
indexical and relative. A virus is more associated with life than a
glucose molecule alone, but less living than a living cell.

Here's an extended metaphor: Medieval walled city with a castle in the
middle. Peasants work the land for the lord in the castle and there
are invasions of rogue peasants from other territories who steal, beg,
etc. To pick out these virus peasants and ask 'are they feudal' is
framing it objectively when it can only be subjective. There is no
'simply is', only many 'seems like' interpretations. The threat of
invasion strengthens the benefit of the Lord's protection, which in
turn encourages the loyalty and productivity of the peasants. In that
way, the rogue bandits are indeed a part of Feudalism.

Craig




Re: The free will function

2012-02-10 Thread Craig Weinberg
On Feb 10, 9:08 am, Stephen P. King stephe...@charter.net wrote:

      No. Craig can be considered to be exploring the implications of
 Chalmers's claim that consciousness is a fundamental property of the
 physical, like mass, spin and charge, i.e. it is not emergent from
 matter. His concept of sense is not much different from your 1p or the
 content of a simulation.

Right. I pick up where Chalmers leaves off:

1. It is not a fundamental property of the physical exactly but
rather, the physical and the experiential are the fundamental
modalities of 'sense'.
2. The modalities are necessarily symmetric but anomalous, so that
mind is not the opposite of brain directly, but that both mind and
brain are opposite modalities of sense.
3. Sense is anomalous symmetry itself: sameness on one level,
difference on another, and a third invariance (self) that straddles
the 'levels'.

There are emergent properties in matter and emergent properties in
awareness, but they develop out of their own momentum. When we tell a
story, the plot of the story builds the experience, not the ambivalent
activities of our neurotransmitters. Changes on the neurotransmitter
level can inspire certain kinds of thoughts or stories too, and the
literal and figurative influences can play off of each other, but
neither the physical nor the experiential supervenes fully and completely
on the other.

Craig




Re: Boolean Algebra Conjecture

2012-02-10 Thread meekerdb

On 2/10/2012 5:26 AM, Stephen P. King wrote:

Now we can
quibble about this and discuss how in Special Relativistic situations we
can indeed have situations where X caused Y is true for some frames of
reference and Y caused X for some other frame of reference, but this
dilemma can be resolved by considering the effect of a finite speed of
light whose speed is an invariant for all observers, e.g. general
covariance. 


This is wrong.  SR shows that it can be the case that X is before Y in one frame and Y is 
before X in a different frame moving relative to the first.  This means X and Y are 
spacelike separated.  But you can't have X causing Y in one frame and Y causing X in 
another.  X caused Y implies that Y is in the timelike or null future of X, a relation 
that is preserved in all Lorentz frames.
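This can be checked numerically; the sketch below (an illustration added for clarity, not part of the original exchange) applies a standard Lorentz boost, in units where c = 1, to a spacelike and a timelike pair of events:

```python
import math

def boost_t(t, x, v):
    """Time coordinate of event (t, x) seen from a frame moving at velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Spacelike pair: X at (t=0, x=0), Y at (t=1, x=3); dt^2 - dx^2 = 1 - 9 < 0.
# Y is after X in the rest frame but before X in a fast-moving frame,
# so their time order is frame-dependent.
assert boost_t(1, 3, 0.0) > boost_t(0, 0, 0.0)
assert boost_t(1, 3, 0.9) < boost_t(0, 0, 0.9)

# Timelike pair: X at (0, 0), Y at (t=3, x=1); dt^2 - dx^2 = 9 - 1 > 0.
# Y stays in X's future in every frame with |v| < 1, so the relation
# needed for "X caused Y" is preserved in all Lorentz frames.
for v in (-0.99, -0.5, 0.0, 0.5, 0.99):
    assert boost_t(3, 1, v) > boost_t(0, 0, v)
```

Only the spacelike pair's order flips under the boost; the timelike pair's order is invariant.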


Brent




Re: Boolean Algebra Conjecture

2012-02-10 Thread Stephen P. King

On 2/10/2012 12:51 PM, meekerdb wrote:

On 2/10/2012 5:26 AM, Stephen P. King wrote:

Now we can
quibble about this and discuss how in Special Relativistic situations we
can indeed have situations where X caused Y is true for some frames of
reference and Y caused X for some other frame of reference, but this
dilemma can be resolved by considering the effect of a finite speed of
light whose speed is an invariant for all observers, e.g. general
covariance. 


This is wrong.  SR shows that it can be the case that X is before Y in one 
frame and Y is before X in a different frame moving relative to the 
first.  This means X and Y are spacelike separated.  But you 
can't have X causing Y in one frame and Y causing X in 
another.  X caused Y implies that Y is in the timelike or null future of X, a 
relation that is preserved in all Lorentz frames.


Brent

Hi Brent,

Yes, you are correct. I was thinking of the scenario that Penrose 
discussed in The Emperor's New Mind about the fleet of ships in Andromeda.


Onward!

Stephen




Re: Information: a basic physical quantity or rather emergence/supervenience phenomenon

2012-02-10 Thread Evgenii Rudnyi

On 09.02.2012 00:44 1Z said the following:





On Feb 7, 7:04 pm, Evgenii Rudnyi use...@rudnyi.ru wrote:


Let us take a closed vessel with oxygen and hydrogen at room
temperature. Then we open a platinum catalyst in the vessel and
the reaction starts. Will then the information in the vessel be
conserved?

Evgenii


What's the difference between 'in principle' and 'for all practical
purposes'?



What is the relationship between your question and mine?

Evgenii




Re: Information: a basic physical quantity or rather emergence/supervenience phenomenon

2012-02-10 Thread Evgenii Rudnyi

On 08.02.2012 22:44 Russell Standish said the following:

On Wed, Feb 08, 2012 at 08:32:16PM +0100, Evgenii Rudnyi wrote:

...



What I observe personally is that there is information in
informatics and information in physics (if we say that the
thermodynamic entropy is the information). If you would agree,
that these two informations are different, it would be fine with
me, I am flexible with definitions.

Yet, if I understand you correctly you mean that the information
in informatics and the thermodynamic entropy are the same. This
puzzles me as I believe that the same physical values should have
the same numerical values. Hence my wish to understand what you
mean. Unfortunately you do not want to disclose it, you do not want
to apply your theory to examples that I present.

Evgenii


Given the above paragraph, I would say we're closer than you've
previously intimated.

Of course there is information in informatics, and there is
information in physics, just as there's information in biology and
so on. These are all the same concept (logarithm of a probability).
Numerically, they differ, because the context differs in each
situation.

Entropy is related in a very simple way to information. S=S_max - I.
So provided an S_max exists (which it will for any finite system), so
does entropy. In the example of a hard drive, the informatics S_max
is the capacity of the drive eg 100GB for a 100GB drive. If you
store 10GB of data on it, the entropy of the drive is 90GB. That's
it.

Just as information is context dependent, then so must entropy.

Thermodynamics is just one use (one context) of entropy and
information. Usually, the context is one of homogenous bulk
materials. If you decide to account for surface effects, you change
the context, and entropy should change accordingly.
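As a numeric illustration of the relation S = S_max - I in the hard-drive example above (a sketch added for clarity, not part of the original message):

```python
def entropy(capacity_bits, information_bits):
    """S = S_max - I: entropy as the capacity not taken up by information."""
    assert 0 <= information_bits <= capacity_bits
    return capacity_bits - information_bits

GB = 8e9  # bits per (decimal) gigabyte
# A 100GB drive holding 10GB of data has 90GB of entropy, in this sense.
assert entropy(100 * GB, 10 * GB) == 90 * GB
```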


Let me ask you the same question that I have recently asked Brent. Could 
you please tell me the thermodynamic entropy of what is discussed in 
Jason's example below?


Evgenii


On 03.02.2012 00:14 Jason Resch said the following:
...
 Evgenii,

 Sure, I could give a few examples as this somewhat intersects with my
 line of work.

 The NIST 800-90 recommendation (
 http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf )
 for random number generators is a document for engineers implementing
 secure pseudo-random number generators.  An example of where it is
 important is when considering entropy sources for seeding a random
 number generator.  If you use something completely random, like a
 fair coin toss, each toss provides 1 bit of entropy.  The formula is
 -log2(predictability).  With a coin flip, you have at best a .5
 chance of correctly guessing it, and -log2(.5) = 1.  If you used a
 die roll, then each die roll would provide -log2(1/6) = 2.58 bits of
 entropy.  The ability to measure unpredictability is necessary to
 ensure, for example, that a cryptographic key is at least as
 difficult to predict the random inputs that went into generating it
 as it would be to brute force the key.

 In addition to security, entropy is also an important concept in the
 field of data compression.  The amount of entropy in a given bit
 string represents the theoretical minimum number of bits it takes to
 represent the information.  If 100 bits contain 100 bits of entropy,
 then there is no compression algorithm that can represent those 100
 bits with fewer than 100 bits.  However, if a 100 bit string contains
 only 50 bits of entropy, you could compress it to 50 bits.  For
 example, let's say you had 100 coin flips from an unfair coin.  This
 unfair coin comes up heads 90% of the time.  Each flip represents
 -log2(.9) = 0.152 bits of entropy.  Thus, a sequence of 100 coin
 flips with this biased coin could be represent with 16 bits.  There
 is only 15.2 bits of information / entropy contained in that 100 bit
 long sequence.

 Jason
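Jason's figures follow directly from the -log2(predictability) formula he gives; a small sketch (added here for illustration, not part of the original message):

```python
import math

def entropy_bits(p):
    """Bits of entropy contributed by one outcome of probability p: -log2(p)."""
    return -math.log2(p)

assert entropy_bits(0.5) == 1.0                 # fair coin toss: 1 bit
assert abs(entropy_bits(1 / 6) - 2.58) < 0.01   # fair die roll: ~2.58 bits

# Biased coin, 90% heads: each flip carries -log2(0.9) ~ 0.152 bits,
# so 100 flips carry ~15.2 bits and could compress to about 16 bits.
per_flip = entropy_bits(0.9)
assert abs(per_flip - 0.152) < 0.001
assert abs(100 * per_flip - 15.2) < 0.1
```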






Re: Intelligence and consciousness

2012-02-10 Thread John Clark
On Thu, Feb 9, 2012 Craig Weinberg whatsons...@gmail.com wrote:
 The rule book is the memory.

Yes, but the rule book not only contains an astronomically large database, it
also contains a super-ingenious artificial intelligence program; without
those things the little man is like a naked microprocessor sitting on a
storage shelf: it's not a brain, it's not a computer, and it's not doing one
damn thing.


 The contents of memory is dumb too - as dumb as player piano rolls.


That's pretty dumb, but the synapses of the brain are just as dumb, and the
atoms they (and computers and everything else) are made of are even
dumber.

 The two together only seem intelligent to Chinese speakers outside the
 door


Only?! Einstein only seemed intelligent to scientifically literate speakers
in the outside world. It seems that, as you use the term, seeming
intelligent is as good as being intelligent. In fact it seems to me that
believing intelligent actions are not a sign of intelligence is not very
intelligent.

 A conversation that lasts a few hours could probably be generated from a
 standard Chinese phrase book, especially if equipped with some useful
 evasive answers (a la ELIZA).


You bring up that stupid 40 year old program again? Yes ELIZA displayed
little if any intelligence but that program is 40 years old! Do try to keep
up. And if you are really confident in your ideas push the thought
experiment to the limit and let the Chinese Room produce brilliant answers
to complex questions, if it just churns out ELIZA style evasive crap that
proves nothing because we both agree that's not very intelligent.

 The size isn't the point though.


I rather think it is. A book larger than the observable universe and a
program more brilliant than any written, yet you insist that if understanding
is anywhere in that room it must be in the by far least remarkable part of
it, the silly little man. And remember, the consciousness that room
produces would not be like the consciousness you or I have; it would take
that room many billions of years to generate as much consciousness as you
do in one second.

 Speed is a red herring too.


No it is not and I will tell you exactly why as soon as the sun burns out
and collapses into a white dwarf. Speed isn't an issue, so you have to
concede that I won that point.

  if it makes sense for a room to be conscious, then it makes sense that
 anything and everything can be conscious


Yes, providing the thing in question behaves intelligently.  We only think
our fellow humans are conscious when they behave intelligently and that's
the only reason we DON'T think they're conscious when they're sleeping or
dead; all I ask is that you play by the same rules when dealing with
computers or Chinese Rooms.

 However Searle does not expect us to think it odd that 3 pounds of grey
 goo in a bone vat can be conscious


 Because unlike you, he [Searle] is not presuming the neuron doctrine. I
 think his position is that consciousness cannot be solely due to the
 material functioning of the brain and must be something else.


And yet if you change the way the brain functions, through drugs or surgery
or electrical stimulation or a bullet to the head, the conscious experience
changes too.  And if the brain can make use of this free floating glowing
bullshit of yours what reason is there to believe that computers can't also
do so? I've asked this question before and the best you could come up with
is that computers aren't squishy and don't smell bad so they can't be
conscious. I don't find that argument compelling.

 We know the brain relates directly to consciousness, but we don't know
 for sure how.


If you don't know how the brain produces consciousness then how in the
world can you be so certain a computer can't do it too, especially if the
computer is as intelligent or even more intelligent than the brain?

 We can make a distinction between the temporary disposition of the brain
 and its more permanent structure or organization.


A .44 Magnum bullet in the brain would cause a change in brain organization
and would seem to be rather permanent. I believe such a thing would also
cause a rather significant change in consciousness. Do you disagree?

  John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Information: a basic physical quantity or rather emergence/supervenience phenomenon

2012-02-10 Thread Evgenii Rudnyi

On 09.02.2012 07:49 meekerdb said the following:

...



There's an interesting paper by Bennett that I ran across, which
discusses the relation of Shannon entropy, thermodynamic entropy, and
 algorithmic entropy in the context of DNA and RNA replication:

http://qi.ethz.ch/edu/qisemFS10/papers/81_Bennett_Thermodynamics_of_computation.pdf


Thank you for the link. I like the first sentence

Computers may be thought of as engines for transforming free energy 
into waste heat and mathematical work.


I am not sure though if this is more than a metaphor. I will read the 
paper; the abstract looks nice.


I believe that there was a chapter on reversible computation in

Nanoelectronics and Information Technology, ed Rainer Waser

I guess reversible computation is a kind of strange attractor for 
engineers.


As for DNA, RNA, and proteins, I have recently read

Barbieri, M. (2007). Is the cell a semiotic system? In: Introduction to 
Biosemiotics: The New Biological Synthesis. Eds.: M. Barbieri, Springer: 
179-208.


If the author is right, it may well be that language developed even 
before consciousness. By the way, the paper is very well written and I 
have to think it over.


A related discussion

http://embryogenesisexplained.com/2012/02/is-the-cell-a-semiotic-system.html

Evgenii






Brent






Re: The free will function

2012-02-10 Thread John Clark
On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:

 Why simulated neurons couldn't have opinions at that same point ?
 Vitalism ?


Yes, the only way Craig could be right is if vitalism is true, and it's
pretty sad that well into the 21st century some still believe in that
crap. What's next, bringing back medieval alchemy?


 On Fri, Feb 10, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 there is no such thing as absolute simulation, there is only imitation.


So not only is a computer incapable of performing arithmetic, you can't do
it either; all you can do is a pale imitation of arithmetic, so neither we
nor a computer can ever know how much 2 + 2 is. And I've asked you before:
when you reply to this, please don't send an imitation email to the list,
nobody wants to see a mere simulation, send your REAL ORIGINAL EMAIL!
Undoubtedly the only reason you haven't convinced everybody of your
brilliance is that we've only seen copies of your messages; we want the
originals.

  John K Clark




Re: The free will function

2012-02-10 Thread Craig Weinberg
On Feb 10, 4:16 pm, John Clark johnkcl...@gmail.com wrote:
 On Feb 10, 4:06 am, Quentin Anciaux allco...@gmail.com wrote:

  Why simulated neurons couldn't have opinions at that same point ?
  Vitalism ?

 Yes, the only way Craig could be right is if vitalism is true, and its
 pretty sad that well into the 21'st century some still believe in that
 crap. What's next, bring back medieval alchemy?

Apparently what's next is imagining that machines are people and
people are machines. We'll be imprisoning software soon I suppose.


  On Fri, Feb 10, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  there is no such thing as absolute simulation, there is only imitation.

 So not only is a computer incapable of performing arithmetic you can't do
 it either, all you can do is a pale imitation of arithmetic, so neither we
 nor a computer can ever know how much 2 +2 is.

What a computer does is arithmetic to us, but to the computer it's
billions of separate electronic or mechanical events that signify
nothing to it. It's no different from saying that a CD player 'performs
music'. We hear music, but the CD player hears nothing at all.
Obviously.

 And I've asked you before,
 when you reply to this please don't send a imitation Email to the list,
 nobody wants to see a mere simulation, send your REAL ORIGINAL EMAIL!

The original email is my subjective experience of composing it,
therefore it cannot be sent. What can be sent is neither a simulation
nor an imitation but rather a completely separate semiotic text which
can be used by human beings to communicate with other human beings who
share a common language. The email server does not share that common
language and cannot participate in the communication, even though it
provides the communication channel.

 Undoubtedly the only reason you haven't convinced everybody of your
 brilliance is that we've only seen copies of you messages, we want the
 originals.

I'm not trying to convince anyone that I'm brilliant, I'm explaining
why the popular ideas and conventional wisdom of the moment are
misguided.

Craig




Re: The free will function

2012-02-10 Thread Stephen P. King

On 2/10/2012 8:24 PM, Craig Weinberg wrote:

On Feb 10, 4:16 pm, John Clarkjohnkcl...@gmail.com  wrote:

On Feb 10, 4:06 am, Quentin Anciauxallco...@gmail.com  wrote:


Why simulated neurons couldn't have opinions at that same point ?
Vitalism ?

Yes, the only way Craig could be right is if vitalism is true, and its
pretty sad that well into the 21'st century some still believe in that
crap. What's next, bring back medieval alchemy?

Apparently what's next is imagining that machines are people and
people are machines. We'll be imprisoning software soon I suppose.


  On Fri, Feb 10, 2012  Craig Weinbergwhatsons...@gmail.com  wrote:


there is no such thing as absolute simulation, there is only imitation.

So not only is a computer incapable of performing arithmetic you can't do
it either, all you can do is a pale imitation of arithmetic, so neither we
nor a computer can ever know how much 2 +2 is.

What a computer does is arithmetic to us, but to the computer it's
billions of separate electronic or mechanical events that signify
nothing to it. It's no different from saying that CD player 'performs
music'. We hear music, but the CD player hears nothing at all.
Obviously.


And I've asked you before,
when you reply to this please don't send a imitation Email to the list,
nobody wants to see a mere simulation, send your REAL ORIGINAL EMAIL!

The original email is my subjective experience of composing it,
therefore it cannot be sent. What can be sent is neither a simulation
nor an imitation but rather a completely separate semiotic text which
can be used by human beings to communicate with other human beings who
share a common language. The email server does not share that common
language and cannot participate in the communication, even though it
provides the communication channel.


Undoubtedly the only reason you haven't convinced everybody of your
brilliance is that we've only seen copies of you messages, we want the
originals.

I'm not trying to convince anyone that I'm brilliant, I'm explaining
why the popular ideas and conventional wisdom of the moment are
misguided.

Craig


Free your mind!

Just sayin'...

Onward!

Stephen




1p 3p comparison

2012-02-10 Thread Craig Weinberg
Dennett's Comp:
Human 1p = 3p(3p(3p)) - Subjectivity is an illusion
Machine 1p = 3p(3p(3p)) - Subjectivity is not considered formally

My view:
Human 1p = (1p(1p(1p))) - Subjectivity is a fundamental sense modality
which is qualitatively enriched in humans through multiple organic
nestings.
Machine 1p = (3p(3p(1p))) - Machine subjectivity is limited to
hardware level sense modalities, which can be used to imitate human 3p
quantitatively but cannot be enriched qualitatively to human 1p.

Bruno:
Machine or human 1p = (1p(f(x))) - Subjectivity arises as a result of
the 1p set of functional consequences of specific arithmetic truths,
which (I think) are neither object, subject, or sense, but Platonic
universal numbers.

Is that close?






Re: Information: a basic physical quantity or rather emergence/supervenience phenomenon

2012-02-10 Thread Russell Standish
On Fri, Feb 10, 2012 at 09:39:50PM +0100, Evgenii Rudnyi wrote:
 
 Let me ask you the same question that I have recently asked Brent.
 Could you please tell me, the thermodynamic entropy of what is
 discussed in Jason's example below?
 
 Evgenii
 

If you're asking what is the conversion constant between bits and J/K,
the answer is k_B log(2) / log(10).
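The bits-to-J/K conversion can be made concrete with a couple of lines of arithmetic. This is a hedged sketch: it uses the standard Landauer relation S = k_B ln 2 per bit (log-base conventions differ, which may be where the log(2)/log(10) factor in the reply comes from), and the function names are illustrative, not taken from any post in the thread.

```python
import math

# Boltzmann constant in J/K (exact in the 2019 SI redefinition).
K_B = 1.380649e-23

def bits_to_entropy(n_bits: float) -> float:
    """Thermodynamic entropy (J/K) carried by n_bits of information,
    using the standard per-bit factor k_B * ln(2)."""
    return K_B * math.log(2) * n_bits

def landauer_limit_joules(temperature_kelvin: float, n_bits: float = 1.0) -> float:
    """Minimum heat dissipated by erasing n_bits at the given temperature,
    per Landauer's principle: Q >= T * k_B * ln(2) * n_bits."""
    return temperature_kelvin * bits_to_entropy(n_bits)

print(bits_to_entropy(1))            # ~9.57e-24 J/K per bit
print(landauer_limit_joules(300.0))  # ~2.87e-21 J per bit at room temperature
```

At these magnitudes it is easy to see why Bennett's paper treats computers as engines turning free energy into waste heat and mathematical work: the thermodynamic cost per logical bit sits some twenty orders of magnitude below everyday energy scales.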

I'm not sure what else to tell you...

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Intelligence and consciousness

2012-02-10 Thread Craig Weinberg
On Feb 10, 3:52 pm, John Clark johnkcl...@gmail.com wrote:
 On Thu, Feb 9, 2012 Craig Weinberg whatsons...@gmail.com wrote:

  The rule book is the memory.

 Yes, but the rule book not only contains an astronomically large database,
 it also contains a super ingenious artificial intelligence program; without
 those things the little man is like a naked microprocessor sitting on a
 storage shelf: it's not a brain, it's not a computer, and it's not doing one
 damn thing.

I think you are radically overestimating the size of the book and the
importance of the size to the experiment. ELIZA was about 20Kb.
http://www.jesperjuul.net/eliza/

If it's a thousand times better than ELIZA, then you've got a 20 Mb
rule book. The King James Bible can be downloaded here
http://www.biblepath.com/bible_download.html at 14.33Mb. There is no
time limit specified so we have no way of knowing how long it would
take for a book this size to fail the Turing Test.
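The size estimate above is just linear scaling from the ELIZA figure. As a back-of-envelope sketch (the "a thousand times better means a thousand times bigger" assumption is the post's own, and the names here are illustrative):

```python
ELIZA_KB = 20  # rough size of a classic ELIZA implementation, per the post

def rulebook_size_mb(improvement_factor: float) -> float:
    """Naive linear scaling: a rule book 'factor' times richer than ELIZA."""
    return ELIZA_KB * improvement_factor / 1024  # KB -> MB

print(rulebook_size_mb(1_000))      # ~19.5 MB, roughly the '20 Mb' figure above
print(rulebook_size_mb(1_000_000))  # ~19,531 MB, i.e. ~20 GB for 1,000 such books
```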

It might be more useful to use more of a pharmaceutical model, like
LD50 or LD100; how long of a conversation do you have to have before
50% of the native speakers fail the system. Is the Turing Test an LD00
test with unbounded duration? No native speaker can ever tell the
difference no matter how long they converse? This is clearly
impossible. It's context dependent and subjective. I only assume that
everyone here is human because I have no reason to doubt that, but in
a testing situation, I would not be confident that everyone here is
human judging only from responses.


  The contents of memory is dumb too - as dumb as player piano rolls.

 That's pretty dumb, but the synapses of the brain are just as dumb, and the
 atoms they, and computers and everything else, are made of are even
 dumber.

Player piano rolls aren't living organisms that create and repair vast
organic communication networks. Computers don't do anything by
themselves, they have to be carefully programed and maintained by
people and they have to have human users to make sense of any of their
output. Neurons require no external physical agents to program or use
them.


  The two together only seem intelligent to Chinese speakers outside the
  door

 Only?! Einstein only seemed intelligent to scientifically literate speakers
 in the outside world.

No, he was aware of his own intelligence too. I think you're grasping
at straws.

 It seems that, as you use the term, seeming
 intelligent is as good as being intelligent.

So if I imitate Arnold Schwarzenegger on the phone, then that's as
good as me being Schwarzenegger.

 In fact it seems to me that
 believing intelligent actions are not a sign of intelligence is not very
 intelligent.

I understand that you think of it that way, and I think that is a
moronic belief, but I don't think that makes you a moron. It all comes
down to thinking in terms of an arbitrary formalism of language and
working backward to reality, rather than working from concrete realism
and using language to understand it. If you start out defining
intelligence as an abstract function and category of behaviors, rather
than a quality of consciousness which entails the capacity for behaviors
and functions, then you end up proving your own assumptions with
circular reasoning.


  A conversation that lasts a few hours could probably be generated from a
  standard Chinese phrase book, especially if equipped with some useful
  evasive answers (a la ELIZA).

 You bring up that stupid 40 year old program again? Yes ELIZA displayed
 little if any intelligence but that program is 40 years old! Do try to keep
 up.

You keep up. ELIZA is still being updated as of 2007:
http://webscripts.softpedia.com/script/Programming-Methods-and-Algorithms-Python/Artificial-Intelligence-Chatterbot-Eliza-15909.html

I use ELIZA as an example because you can clearly see that it is not
intelligent and you can clearly see that it could superficially seem
intelligent. It becomes more difficult to be as sure what is going on
when the program is more sophisticated because it is a more convincing
fake. The ELIZA example is perfect because it exposes the fundamental
mechanism by which trivial intelligence can be mistaken for the
potential for understanding.

 And if you are really confident in your ideas push the thought
 experiment to the limit and let the Chinese Room produce brilliant answers
 to complex questions, if it just churns out ELIZA style evasive crap that
 proves nothing because we both agree that's not very intelligent.

Ok, make it a million times the size of ELIZA. A set of 1,000 books. I
think that would pass an LD50 Turing Test of a five hour conversation,
don't you?


  The size isn't the point though.

 I rather think it is. A book larger than the observable universe and a
 program more brilliant than any written,

Where are you getting that from?

 yet you insist that if understanding
 is anywhere in that room it must be in the by far least remarkable part of
 it, the silly little man.

That's the 

Re: Truth values as dynamics? (was: Ontological Problems of COMP)

2012-02-10 Thread Stephen P. King

On 2/9/2012 3:40 PM, acw wrote:

I think the idea of Platonia is closer to the fact that if a sentence
has a truth-value, it will have that truth value, regardless if you
know it or not.


Sure, but it is not just to you that a given sentence may have the same
exact truth value. This is like Einstein arguing with Bohr with the
quip: The moon is still there when I do not see it. My reply to
Einstein would be: Sir, you are not the only observer of the moon! We
have to look at the situation from the point of view of many observers
or, in this case, truth detectors, that can interact and communicate
consistently with each other. We cannot think in just solipsistic
terms.



Sure, but what if nobody is looking at the moon? Or instead of the moon,
pick something even less likely to be observed. To put it differently,
the truth value of the Riemann hypothesis or Goldbach's conjecture
should not depend on observers thinking of it - they may eventually
discover it, and such a discovery would depend on many computational
consequences of which the observers may not yet be aware, but that
doesn't mean that those consequences don't exist - when the
computation is locally performed, it will always give the same result,
which could be said to exist timelessly.

[SPK]
My point is that anyone or anything that could be affected by the truth
value of "the moon has X, Y, Z properties" will, in effect, be an
observer of the moon, since it has a definite set of properties as
knowledge. The key here is causal efficacy: if a different state of
affairs would result if some part of the world were changed, then the
conditions of that part of the world are observed. The same thing
holds for the truth value of the Riemann hypothesis or Goldbach's
conjecture, since there would be different worlds for each of their
truth values. My point is that while the truth value or reality of the
moon does not depend on the observation by any _one_ observer, it does
depend for its definiteness on the possibility that it could be observed
by some observer. It is the possibility that makes the difference. An
object that cannot be observed by any means, including these arcane
versions that I just laid out, cannot be said to have a definite set of
properties or truth value; to say the opposite is equivalent to making a
truth claim about a mathematical object for which no set of equations or
representation can be made.

You're conjecturing here that there are worlds where the Riemann 
hypothesis or Goldbach's conjecture have different truth values. I 
don't think arithmetical truths which happen to have proofs have 
indexical truth values; this is due to CTT. Most physical truths, 
though, are indexical (or depend on the axioms chosen).
We could limit ourselves to decidable arithmetical truths only, but 
you'd bump into the problem of consistency of arithmetic or the 
halting problem. It makes no sense to me that a machine which is 
defined to either halt or not halt would do neither. We might not 
know if a machine halts or not, but that doesn't mean that when run 
in any possible world it would behave differently. Arithmetical truth 
should be the same in all possible worlds. An observer can find out a 
truth value, but it cannot alter it, unless it is an indexical 
(context-dependent truth, such as what time it is now or where do 
you live).
Of course, we cannot talk about the truth value of undefined stuff; 
that would be nonsense. However, we can talk about the truth value of 
what cannot be observed - this machine never halts is only true if 
no observation of the machine halting can ever be made, in virtue of 
how the machine is defined, yet someone could use various 
meta-reasoning to reach the conclusion that the machine will never 
halt (consistency of arithmetic is very much similar to the halting 
problem - it's only consistent if a machine which enumerates proofs 
never finds a proof of 0=1; of course, this is not provable within 
arithmetic itself, thus it's a provably unprovable statement for any 
consistent machine, thus can only be a matter of theology as Bruno 
calls it). 


 Hi ACW,

I am considering that the truth value is a function of the theory 
with which a proposition is evaluated. In other words, meaningfulness, 
including truth value, is contextual while existence is absolute.


Onward!

Stephen




Re: COMP theology (was: Ontological Problems of COMP)

2012-02-10 Thread Stephen P. King

Hi ACW,

Thank you for the time and effort to write this up!!!

On 2/9/2012 3:40 PM, acw wrote:
Bruno has always said that COMP is a matter of theology (or religion), 
that is, the provably unprovable, and I agree with this. However, 
let's try and see why that is and why someone would take COMP as an 
assumption:


- The main assumption of COMP is that you admit, at some level, a 
digital substitution, and the stronger assumption that if you were to 
implement/run such a Turing-emulable program, it would be conscious 
and you would have a continuation in it. Isn't that a strong 
theological assumption?

[SPK]
Yes, but it is the substitution of one configuration of stuff 
with another such that the functionality (that which allows for the 
implementation/running of the Turing-emulable (Turing equivalence!) 
program) remains invariant. One interesting thing to point out about 
this is that this substitution can be the replacement of completely 
different kinds of stuff, like carbon-based stuff with silicon-based 
stuff, and does not require a continuous physical process of 
transformation in the sense of smoothly morphing the carbon stuff into 
silicon stuff at some primitive level. Because of this it may seem to 
bypass the usual restrictions of physical laws, but does it really?
What exactly is this physical stuff anyway? If we take a hint 
from the latest ideas in theoretical physics it seems that the stuff 
of the material world is more about properties that remain invariant 
under sets of symmetry transformations and less and less about anything 
like primitive substances. So in a sense, the physical world might be 
considered to be a wide assortment of bundles of invariants; therefore it 
seems to me that to test COMP we need to see if those symmetry groups 
and invariants can be derived from some proposed underlying logical 
structure. This is what I am trying to do. I am really not arguing 
against COMP, I am arguing that COMP is incomplete as a theory as it 
does not yet show how the appearance of space, time and conservation 
laws emerges in a way that is invariant and not primitive. I guess I 
have the temerity to play Einstein against Bruno's Bohr. :-) OTOH, I am 
not arguing for any kind of return to naive realism or that the physical 
world is the totality of existence. I do know that I am just a curious 
amateur, so I welcome any critique that might help me learn.



I think it is, but at the same time, it has solid consequences and a 
belief in it can be justified for a number of reasons:
 a) Fading qualia thought experiment, which shows that consciousness 
is utterly fickle if it doesn't follow a principle of functional / 
organizational invariance. Most of our sense data tends to suggest that 
such a principle makes sense. Avoiding it means consciousness does not 
correspond to brain states, and implies p. zombies.


Certainly! We need a precise explanation for psycho-physical 
parallelism. My tentative explanation is that at our level a form of 
dualism holds. A dualism quite unlike that of Descartes, since instead 
of separate substances, it is proposed that the logical and the 
physical are two distinct aspects of reality that follow on equal yet 
anti-parallel tracks. As Vaughan Pratt explains in his papers, the 
logical processes and the physical processes have dynamics that have 
arrows that point in opposite directions. Schematically and crudely we 
can show a quasi-category theory diagram of this duality:


  X ------> Y
  |         |
  A <------ B

The vertical lines represent the Stone duality relation and the 
horizontal arrows represent logical entailment and physical causation. 
The chaining (or /residuation/) rule is: X causes Y iff B 
necessitates A, where X and A are duals, and Y and B are duals. This 
duality prohibits zombies and disembodied spirits. There is much more to 
this diagram, as it does not include the endomorphisms, homeomorphisms 
and other mappings and objects that are involved in the full 
implementation of the /residuation/ rule.
I just found a paper by Martin Wehr 
www.dcs.ed.ac.uk/home/wehr/newpage/Papers/qc.ps.gz that elaborates on 
Pratt's idea and explains /residuation/ better! Here is the abstract:


   Quantum Computing: A new Paradigm and it's Type Theory

  Martin Wehr

   Quantum Computing Seminar, Lehrstuhl Prof. Beth,
   Universitat Karlsruhe, July 1996


To use quantum mechanical behavior for computing has
been proposed by Feynman. Shor gave an algorithm for
the quantum computer which raised a big stream of research.
This was because Shor's algorithm did reduce the yet assumed exponential
complexity of the security relevant factorization problem, to
a quadratic complexity if quantum computed.

  In the paper a short introduction to quantum mechanics can be
found in the appendix. With this material the operation of the
quantum computer, and  the ideas of quantum logic will be explained.
 The focus will be