Re: UD* and consciousness

2012-02-25 Thread Bruno Marchal


On 24 Feb 2012, at 06:20, meekerdb wrote:


On 2/23/2012 6:00 PM, Terren Suydam wrote:
[...]
That brings up the interesting question of how you could explain which
conscious beings are capable of suffering and which ones aren't. I'm
sure some people would make the argument that anything we might call
conscious would be capable of suffering. One way or the other it would
seem to require a theory of consciousness in which the character of
experience can be mapped somehow to 3p processes.

For instance, pain I can make sense of in terms of "what it feels like
for a being's structure to become less organized", though I'm not sure
how to formalize that, and I'm not completely comfortable with that
characterization. However, the reverse idea, that pleasure might be
"what it feels like for one's structure to become more organized",
seems like a stretch and hard to connect with the reality of, for
example, a nice massage.


I don't think becoming more or less organized has any direct bearing
on pain or pleasure. Physical pain and pleasure are reactions built-in
by evolution for survival benefits. If a fire makes you too hot, you
move away from it, even though it's not disorganizing you. On the
other hand, cancer is generally painless in its early stages. And
psychological suffering can be very bad without any physical damage.
I don't think suffering requires consciousness,


?
Suffering is a conscious experience, I would say by definition.





at least not human-like consciousness,


All right then. Humans, I guess, add a strong emotional response to
suffering, because their self-referential abilities allow them to
interpret it as a threat to their integrity and life.





but psychological suffering might require consciousness in the form  
of self-reflection.


Pain, and direct suffering, is only a built-in message, making an
animal avoid something threatening its life. This has to be
unconscious and involuntary; if it were not, the animal's response
would not be made. But the integrated organism will be conscious of
something global and easy to remember, so that it can anticipate it in
similar situations. Basically, I think the difference between
low-level direct pain and high-level reflexive emotions might occur at
the threshold between non-Löbianity and Löbianity, which technically
is the difference between a program of length six and a program of
length eight. For the first, pain is a sensation to avoid, here and
now; for the latter, pain is a sensation to avoid in general.
This difference might have appeared a very long time ago, with
invertebrates like octopuses and spiders, but perhaps earlier.
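
A minimal technical gloss, assuming the standard provability-logic reading of Bruno's B (Gödel's provability predicate): a Löbian machine is a universal machine that proves Löb's formula about itself,

\[ \Box(\Box p \rightarrow p) \rightarrow \Box p , \]

whereas a merely universal machine need not. The "length six versus length eight" remark is Bruno's informal gloss on that threshold and is not formalized here; the point is only that Löbianity adds reflexive self-reference ("avoid this in general") on top of the immediate reactive loop ("avoid this now").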


This does not solve the question of the quale of pain, but this
question needs a better understanding of consciousness, and the answer
will differ depending on whether we agree that a UM is already
conscious or not. I am not yet quite sure about this. I have thought
for a long time that consciousness begins with Löbianity and is always
related to a sensation of duration, but I have changed my mind on
this. I tend to think now that all UMs are conscious, and that
Löbianity is needed only for the higher duration feelings and
emotions. For UMs, shit can happen, but only LUMs make it into a
long-term problem, eventually a religious/philosophical one.


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: UD* and consciousness

2012-02-25 Thread Bruno Marchal


On 24 Feb 2012, at 21:51, Terren Suydam wrote:

[...]


To go a little further with this, take sexual orgasm. What is
happening during orgasm that makes it so pleasurable?

Presumably there are special circuits in the brain that get activated,
which correlate to the flush of orgasmic pleasure. But what is special
about those circuits?  From a 3p perspective, how is one brain circuit
differentiated from another?  It can't be as simple as the
neurotransmitters involved; what would make one neurotransmitter be
causative of pain and another of pleasure? Its shape? That seems
absurd.

It seems that the consequence of that neural circuit firing would have
to achieve some kind of systemic effect that is characterized... how?

Pain is just as mysterious. It's not as simple as what it feels like
for a system to become damaged. Phantom limbs, for example, are often
excruciatingly painful. Pain is clearly in the mind. What cognitive
mechanism could you characterize as feeling painful from the inside?

Failure to account for this in mechanistic terms, for me, is a direct
threat to the legitimacy of mechanism.


Failure to account for this in *any* 3p sense would be a direct threat  
to the legitimacy of science.


I am not sure that only mechanism is in difficulty here, unless you
have a reason to believe that infinities could explain the pain quale.


On the contrary, mechanism explains that there is an unavoidable clash
between the 1p view and the 3p view. The 1p view (Bp & p, say) is the
same as the 3p view (Bp), but this is only known by the divine
intellect (G*). It cannot be known by the correct machine itself. So
mechanism (or weaker) *can* explain why the 1p seems non-mechanical,
and in some sense is not 1p-mechanical, which explains why we feel
something like a dualism. This dualism really exists epistemologically,
even if the divine intellect (G*) knows that it is an illusion. It is
a real self-referentially correct illusion.
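
A minimal sketch of the G/G* machinery assumed here (Solovay's provability logics, the standard reading of Bruno's B): G axiomatizes what a correct machine can prove about its own provability, while G* axiomatizes what is true about it. The relevant facts are

\[ G^{*} \vdash \Box p \rightarrow p \qquad \text{but} \qquad G \nvdash \Box p \rightarrow p , \]

hence

\[ G^{*} \vdash (\Box p \wedge p) \leftrightarrow \Box p , \]

while no such equivalence is derivable in G. So the knower (Bp & p) and the prover (Bp) coincide extensionally, but only from the G* standpoint; the machine itself cannot close the gap, which is the clash described above.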


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UD* and consciousness

2012-02-25 Thread Terren Suydam
On Fri, Feb 24, 2012 at 6:20 PM, acw a...@lavabit.com wrote:
 Pain - immediate actions, random or not, in response to specific dangerous
 stimuli. Aversion/avoidance in more complex organisms (such as those capable
 of expecting or predicting painful stimuli).
 Pleasure - reduced or repeated same actions, in response to specific
 pleasurable stimuli. Pleasure-seeking behavior in more complex organisms
 (such as those capable of expecting or predicting pleasurable stimuli).
 Stimuli can be both internal (emotion) or external (senses).
 Obviously for beings as complex as humans the nature of certain emotions can
 be much more complex than that, because they are mixed in with many others,
 but I think that's the simplest behavioral characterization of
 pain/pleasure that I know of.
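
Read mechanistically, that characterization is essentially reinforcement: pain down-weights the action that produced it, pleasure up-weights it. A toy sketch under that assumption (an illustration only, not acw's model; all names are made up):

import random

# Toy model: "pain" (negative signal) makes an action less likely,
# "pleasure" (positive signal) makes it more likely -- the
# aversion/approach reading above.

class ToyOrganism:
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}  # start indifferent

    def act(self):
        # sample an action with probability proportional to its weight
        r = random.uniform(0, sum(self.weights.values()))
        for a, w in self.weights.items():
            r -= w
            if r <= 0:
                return a
        return a

    def feel(self, action, signal):
        # negative signal shrinks the weight (aversion), positive grows it
        self.weights[action] = max(0.05, self.weights[action] * (1 + 0.5 * signal))

org = ToyOrganism(["approach_heat", "withdraw"])
for _ in range(200):
    a = org.act()
    org.feel(a, -1.0 if a == "approach_heat" else 0.5)
print(org.weights)  # "withdraw" comes to dominate: learned aversion

Nothing in this sketch feels anything, of course; it only reproduces the 3p behavioral profile, which is exactly the gap pressed on below.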

Just a quick response acw, to say thanks for your responses. I
basically agree with everything you've said; it makes total sense to
me. And yet, I am not doing a very good job of expressing myself (to
myself even) because I'm still not satisfied. There's an intuitive
sense I have of a problem that's deeper than anything I've been able
to express so far... and it may turn out to be something else, but I
won't know until I can communicate it. I also won't have much time in
the next week for anything, so I didn't want to leave you hanging. So I
will reflect on what you and Bruno have said... thanks again.

Terren




Re: UD* and consciousness

2012-02-25 Thread Terren Suydam
On Sat, Feb 25, 2012 at 4:17 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 On 24 Feb 2012, at 21:51, Terren Suydam wrote:

 [...]

 Failure to account for this in mechanistic terms, for me, is a direct
 threat to the legitimacy of mechanism.


 Failure to account for this in *any* 3p sense would be a direct threat to
 the legitimacy of science.

 I am not sure that only mechanism is in difficulty here, unless you have a
 reason to believe that infinities could explain the pain quale.

 On the contrary, mechanism explains that there is an unavoidable clash
 between the 1p view and the 3p view. The 1p view (Bp & p, say) is the same
 as the 3p view (Bp), but this is only known by the divine intellect (G*).
 It cannot be known by the correct machine itself. So mechanism (or weaker)
 *can* explain why the 1p seems non-mechanical, and in some sense is not
 1p-mechanical, which explains why we feel something like a dualism. This
 dualism really exists epistemologically, even if the divine intellect (G*)
 knows that it is an illusion. It is a real self-referentially correct
 illusion.

 Bruno


Hi Bruno,

I'm with you... See my response to acw... I need to think some more on
it. Thanks for your replies.

Terren




Re: UD* and consciousness

2012-02-25 Thread meekerdb

On 2/25/2012 7:15 AM, Terren Suydam wrote:

On Fri, Feb 24, 2012 at 6:20 PM, acw a...@lavabit.com wrote:
[...]

Just a quick response acw, to say thanks for your responses. I
basically agree with everything you've said; it makes total sense to
me. And yet, I am not doing a very good job of expressing myself (to
myself even) because I'm still not satisfied. There's an intuitive
sense I have of a problem that's deeper than anything I've been able
to express so far... and it may turn out to be something else, but I
won't know until I can communicate it.


You might ask yourself, "What form would a satisfactory answer to my problem
take?"

Brent



I also won't have much time in
the next week for anything, so I didn't want to leave you hanging. So I
will reflect on what you and Bruno have said... thanks again.

Terren






Re: UD* and consciousness

2012-02-24 Thread Terren Suydam
Saying "evolution created pain and pleasure" is a bit of a cop-out. When we
say "evolution created mammals", we can theorize about a progression of
material forms (and environments) that led to mammals.

So *how* did evolution do that? What sort of progression could you theorize
about that led to pain and pleasure? I think to do that, assuming
mechanism, you still have to come up with something that maps those
feelings to 3p processes.

Terren
On Feb 24, 2012 12:20 AM, meekerdb meeke...@verizon.net wrote:

 On 2/23/2012 6:00 PM, Terren Suydam wrote:

 [...]

 That brings up the interesting question of how you could explain which
 conscious beings are capable of suffering and which ones aren't. I'm
 sure some people would make the argument that anything we might call
 conscious would be capable of suffering. One way or the other it would
 seem to require a theory of consciousness in which the character of
 experience can be mapped somehow to 3p processes.

 For instance, pain I can make sense of in terms of "what it feels like
 for a being's structure to become less organized", though I'm not sure
 how to formalize that, and I'm not completely comfortable with that
 characterization. However, the reverse idea, that pleasure might be
 "what it feels like for one's structure to become more organized",
 seems like a stretch and hard to connect with the reality of, for
 example, a nice massage.


 I don't think becoming more or less organized has any direct bearing on
 pain or pleasure. Physical pain and pleasure are reactions built-in by
 evolution for survival benefits. If a fire makes you too hot, you move away
 from it, even though it's not disorganizing you.  On the other hand,
 cancer is generally painless in its early stages.  And psychological
 suffering can be very bad without any physical damage.  I don't think
 suffering requires consciousness, at least not human-like consciousness,
 but psychological suffering might require consciousness in the form of
 self-reflection.

 Brent



 Terren





Re: UD* and consciousness

2012-02-24 Thread Bruno Marchal


On 23 Feb 2012, at 23:49, Terren Suydam wrote:

On Thu, Feb 23, 2012 at 4:12 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 22 Feb 2012, at 23:07, Terren Suydam wrote:
Here was the aha! moment. I get it now. Thanks to you and Quentin.
Even though I am well aware of the consequences of MGA, I was focusing
on the physical activity of the simulation because I was running it.


Yes, that's why reasoning and logic are important. It is
understandable that evolution could not have prepared us for the
possibly true 'big picture', nor for fundamental science, nor for
quickly developing technologies. So it takes some effort to free
ourselves from built-in prejudices. Nature, a bit like bandits, is
opportunist. At the same time we don't have to brush away that
intuition, because it is real, and it has succeeded in bringing us
here and now, and that has to be respected somehow too.
Note that the math confirms this misunderstanding between the
heart/intuition/first-person/right-brain (modeled by Bp & p) and the
scientist/reasoner/left-brain (modeled by Bp). The tension appears
right at the start, when a self-aware substructure begins to
differentiate itself from its neighborhood.




The fascinating thing for me is, if instead of a scan of Mary, we run
an AGI that embodies a cognitive architecture that satisfies a theory
of consciousness (the kind of theory that explains why a particular UM
is conscious) so that if we assume the theory, it entails that the AGI
is conscious. The AGI will therefore have 1p indeterminacy even if the
sim is deterministic, for the same reason Mary does, because there are
an infinity of divergent computational paths that go through the AGI's
1p state in any given moment. Trippy!


Yeah. Trippy is the word.
Many people react to comp in a strikingly similar way to the way
numerous other people react to the very potent Salvia divinorum
hallucinogen. People need a very sincere interest in the fundamentals
to appreciate the comp consequences, or to appreciate a potent
dissociative hallucinogen.
I should not insist on this. Some would conclude we should make comp
illegal. Likewise, thinking by oneself is never appreciated in the
neighborhood of those who want to think for the others, and
control/manipulate them.


As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life.


Not directly. But it might help us adapt our mentality. It reminds us
of many of our possible prejudices, even if comp is revealed false one
day. And then it will help in fundamental physics, which can also have
indirect repercussions.
It can also change the conception of death, and that always has
repercussions on life, for better and for worse.






To paraphrase Eliezer Yudkowsky, it has to add up to normal. On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.


Yes, and no. Fundamental theology is negative. It will just warn
people to be cautious with their Gödel numbers: better to encrypt
them, perhaps quantum mechanically, because if you lose some of your
numbers, you might be reconstituted in unexpected places. It also
warns of the difficulties of the afterlife, and some of them will
depend on our ability to transmit values to our descendants.







Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?


When liars take power, nothing free is legal, and prohibition rules.
It never works in the long run, but people can make enormous profits
in the short run. Prohibition is a gangster technique for stealing
from everybody by selling fears and lies. It is made possible by the
mentality which makes some humans accept that other humans can think
for them in the matter of their own happiness. That is the case with
many (pseudo-)religions and with medicine. We have to separate church
and state, but also health and state; that's possible with simple and
reasonable laws, but the manipulators hate all this.


When a government steals your money, it does not like so much that
people can think. Not to mention thinking machines, which for them can
only be a sort of Mexicans or something. I mean, foreigners.


The unavoidable tension between freedom and security will always
incite the fear-selling business, so that freedom requires perpetual
vigilance and resistance, off the net and on the internet.


Prohibition can never work, unless you send *all* universal numbers to
camps.
Concretely, starting from a rich position, prohibition always works
for some time, because its hidden goal consists in managing an untaxed
underground mafia economy, not to,

Re: UD* and consciousness

2012-02-24 Thread meekerdb

On 2/24/2012 5:54 AM, Terren Suydam wrote:


Saying "evolution created pain and pleasure" is a bit of a cop-out. When we
say "evolution created mammals", we can theorize about a progression of
material forms (and environments) that led to mammals.


So *how* did evolution do that?



Of course evolution does everything the same way: random variation and 
reproductive selection.
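
As a cartoon of that one-line answer (an illustration only, not Brent's argument): let the genome be a single number, the probability of withdrawing from a damaging stimulus, and apply nothing but variation and selection.

import random

# Cartoon of "random variation and reproductive selection" producing
# an aversive reflex.

def fitness(withdraw_prob):
    # each organism faces 20 damaging encounters; withdrawing survives them
    return sum(random.random() < withdraw_prob for _ in range(20))

population = [random.random() for _ in range(50)]  # random initial reflexes
for generation in range(100):
    population.sort(key=fitness, reverse=True)     # selection
    parents = population[:25]                      # fitter half reproduces
    children = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))  # variation
                for p in parents]
    population = parents + children

print(sum(population) / len(population))  # ~1.0: withdrawal goes to fixation

The sketch shows how the reflex can be selected for; it says nothing about why having it should feel like anything, which is the question being pressed here.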

What sort of progression could you theorize about that led to pain and pleasure? I think 
to do that, assuming mechanism, you still have to come up with something that maps those 
feelings to 3p processes.




Sure. Look at some of the books by Antonio Damasio, e.g. "The Feeling of What
Happens".

Brent




Re: UD* and consciousness

2012-02-24 Thread Terren Suydam
On Fri, Feb 24, 2012 at 1:12 PM, meekerdb meeke...@verizon.net wrote:
 On 2/24/2012 5:54 AM, Terren Suydam wrote:


 Saying "evolution created pain and pleasure" is a bit of a cop-out. When we
 say "evolution created mammals", we can theorize about a progression of
 material forms (and environments) that led to mammals.

 So *how* did evolution do that?


 Of course evolution does everything the same way: random variation and
 reproductive selection.

What I mean is, at what point in the evolutionary process does the
experience of pain and pleasure emerge?

For instance, we could say of the experience of color that it emerged
when evolution produced organisms with multiple photoreceptors that
are sensitive to light of different wavelengths.

So what kind of organization arose during the evolutionary process
that led directly to the subjective experience of pain and pleasure?

 What sort of progression could you theorize about that led to pain and
 pleasure? I think to do that, assuming mechanism, you still have to come up
 with something that maps those feelings to 3p processes.


 Sure. Look at some of the books by Antonio Damasio, e.g. "The Feeling of
 What Happens".

 Brent

I certainly will. In the meantime, do you have an example from Damasio
(or any other source) that could shed light on the pain/pleasure
phenomenon?

Terren




Re: UD* and consciousness

2012-02-24 Thread meekerdb

On 2/24/2012 10:26 AM, Terren Suydam wrote:

I certainly will. In the meantime, do you have an example from Damasio
(or any other source) that could shed light on the pain/pleasure
phenomenon?

Terren

http://www.hedweb.com/bgcharlton/damasioreview.html




Re: UD* and consciousness

2012-02-24 Thread Terren Suydam
On Fri, Feb 24, 2012 at 2:27 PM, meekerdb meeke...@verizon.net wrote:
 On 2/24/2012 10:26 AM, Terren Suydam wrote:

 I certainly will. In the meantime, do you have an example from Damasio
 (or any other source) that could shed light on the pain/pleasure
 phenomenon?

 Terren

 http://www.hedweb.com/bgcharlton/damasioreview.html

I think emotions represent something above and beyond the more
fundamental feelings of pleasure and pain. Fear, for example, is
explainable using Damasio's framework, and I can translate it
into the way I am asking the question above:

Question: What kind of organization arose during the evolutionary
process that led directly to the subjective experience of fear?
Answer: A cognitive architecture in which internal body states are
modeled and integrated using the same representational apparatus that
models the external world, so that one's adaptive responses
(fight/flight/freeze) to threatening stimuli become integrated into
the organism's cognitive state of affairs.  In short, fear is what it
feels like to have a fear response (as manifest in the body by various
hormonal responses) to some real or imagined stimuli.
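
A sketch of what such an architecture could look like (an illustration of the answer above, not a model from Damasio; all names are made up): one representational format for both exteroceptive and interoceptive percepts, with appraisal running over the combined state.

from dataclasses import dataclass

# One representational apparatus for the external world and for
# internal body states; "fear" is the integrated appraisal of both.

@dataclass
class Percept:
    source: str       # "world" or "body" -- same format either way
    feature: str
    intensity: float

def appraise(percepts):
    threat = sum(p.intensity for p in percepts
                 if p.source == "world" and p.feature == "threat")
    arousal = sum(p.intensity for p in percepts
                  if p.source == "body" and p.feature in ("adrenaline", "heart_rate"))
    # the emotion is the modeled threat together with the modeled
    # bodily response to it, integrated into one cognitive state
    return {"fear": threat * arousal}

print(appraise([
    Percept("world", "threat", 0.8),      # a looming shape
    Percept("body", "adrenaline", 0.9),   # the fight/flight response
    Percept("body", "heart_rate", 0.7),
]))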

You can substitute any emotion for fear, so long as you can identify
the way that emotion manifests in the body/brain in terms of hormonal
or other mechanisms. But when it comes to pain and pleasure, I don't
think that it is necessary to have such an advanced cognitive
architecture. So on a more fundamental level, the question
remains:

What kind of organization arose during the evolutionary process that
led directly to the subjective experience of pain and pleasure?

Or put another way, what kind of mechanism feels pleasurable or
painful from the inside?

Presumably the answer to this question occurred earlier in the
evolutionary process than the emergence of fear, surprise, hunger, and
so on.

Terren




Re: UD* and consciousness

2012-02-24 Thread Terren Suydam
On Fri, Feb 24, 2012 at 3:30 PM, Terren Suydam terren.suy...@gmail.com wrote:
 [...]

To go a little further with this, take sexual orgasm. What is
happening during orgasm that makes it so pleasurable?

Presumably there are special circuits in the brain that get activated,
which correlate to the flush of orgasmic pleasure. But what is special
about those circuits?  From a 3p perspective, how is one brain circuit
differentiated from another?  It can't be as simple as the
neurotransmitters involved; what would make one neurotransmitter be
causative of pain and another of pleasure? Its shape? That seems
absurd.

It seems that the consequence of that neural circuit firing would have
to achieve some kind of systemic effect that is characterized... how?

Pain is just as mysterious. It's not as simple as what it feels like
for a system to become damaged. Phantom limbs, for example, are often
excruciatingly painful. Pain is clearly in the mind. What cognitive
mechanism could you characterize as feeling painful from the inside?

Failure to account for this in mechanistic terms, for me, is a direct
threat to the legitimacy of mechanism.

Terren




Re: UD* and consciousness

2012-02-24 Thread acw

On 2/24/2012 20:51, Terren Suydam wrote:

[...]


I think emotions represent something above and beyond the more
fundamental feelings of pleasure and pain. Fear, for example, is
explainable using Damasio's framework, and I can translate it
into the way I am asking the question above:

Question: What kind of organization arose during the evolutionary
process that led directly to the subjective experience of fear?
Answer: A cognitive architecture in which internal body states are
modeled and integrated using the same representational apparatus that
models the external world, so that one's adaptive responses
(fight/flight/freeze) to threatening stimuli become integrated into
the organism's cognitive state of affairs.  In short, fear is what it
feels like to have a fear response (as manifest in the body by various
hormonal responses) to some real or imagined stimuli.

Yes, that seems to be mostly it, but it's subtler than that. Those
internal states that we have also include expectations and emotional
memories - they can lead to the recall of various past sensations and
experiences. Certain internal states will make certain behaviors more
likely and certain thoughts (other internal states) more likely. We
cannot communicate the exact nature of what internal states actually
are - the qualia; beyond a certain point we cannot say anything more
than that we have them, and that having them will usually correspond
to some internal states in our instance of a cognitive architecture.

You can substitute any emotion for fear, so long as you can identify
the way that emotion manifests in the body/brain in terms of hormonal
or other mechanisms. But when it comes to pain and pleasure, I don't
think that it is necessary to have such an advanced cognitive
architecture. So on a more fundamental level, the question
remains:

What kind of organization arose during the evolutionary process that
led directly to the subjective experience of pain and pleasure?
That's a very interesting question. Pain and fear mean aversion towards
certain stimuli - that is, reducing the frequency with which some
stimuli will be experienced, which can lead to increased
survivability. Pain is unfortunately a bit more complicated than that:
it leads not only to future aversion, but to involuntary action-taking
- forcing an immediate quick response, which may not be backed by
conscious thought. It can be seen as unpleasant because it combines
the memory of constantly being forced to take involuntary actions with
the actions being aversive. Such involuntary actions can also be seen
as a huge change in attention (allocation) - one becomes much less
capable of consciously directing one's attention.


Pleasure is similar, but in reverse - it makes certain actions more
likely to be performed, possibly even leading to some feedback loops.
However, it seems that in humans, pleasure and compulsion have similar
and almost parallel circuits, but are not identical. Pleasure may also
have calming effects by reducing responses/actions instantly, the
opposite of pain, while also making it more likely that actions that
caused pleasure will be performed again - which is a bit similar to
compulsion. In a nutshell, they correspond to mechanisms which make
certain actions more or less likely, and this eventually leads to
complex goals and behavior - I'd say that's a huge reason for
pain/pleasure responses to have evolved.


Or put another way, what kind of mechanism feels pleasurable or
painful from the inside?

The notion of feeling is more complicated because it involves memories 
and complex feedback loops.

Presumably the answer to this question occurred earlier in the
evolutionary process than the emergence of fear, surprise, hunger, and
so on.
I like these articles/videos on how AGIs may get emergent emotions from 
simple basic drives:


http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation

http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation-2

http://agi-school.org/2009/dr-joscha-bach-the-micropsi-architecture

http://www.cognitive-ai.com/




Terren


To go a little further with this, take sexual orgasm. What is
happening during orgasm that makes it so pleasurable?

My guess is that it's a fairly complex emotional and somatic response 
that could get broken down into simpler parts. You could ask the same 
question differently: what makes some music good? what makes some food 
delicious? what makes a picture 

Re: UD* and consciousness

2012-02-24 Thread Terren Suydam
On Fri, Feb 24, 2012 at 4:47 PM, acw a...@lavabit.com wrote:
 [...]

 What kind of organization arose during the evolutionary process that
 led directly to the subjective experience of pain and pleasure?

 That's a very interesting question. Pain and fear mean aversion towards
 certain stimuli - that is, reducing the frequency with which some stimuli
 will be experienced, which can lead to increased survivability. Pain is
 unfortunately a bit more complicated than that: it leads not only to future
 aversion, but to involuntary action-taking - forcing an immediate quick
 response, which may not be backed by conscious thought. It can be seen as
 unpleasant because it combines the memory of constantly being forced to
 take involuntary actions with the actions being aversive. Such
 involuntary actions can also be seen as a huge change in attention
 (allocation) - one becomes much less capable of consciously directing one's
 attention.

All of that makes sense, but pain is more than unpleasant. Pain can be
blindingly horrible... ask any migraine sufferer. What accounts for
the intensity of such experiences? I'm asking this in terms of how,
not why. How does it get to be so intense?

 Pleasure is similar, but in reverse - it makes certain actions more likely
 to be performed, possibly even leading to some feedback loops. However, it
 seems that in humans, pleasure and compulsion have similar and almost
 parallel circuits, but are not identical. Pleasure may also have calming
 effects by reducing responses/actions instantly, the opposite of pain,
 while also making it more likely that actions that caused pleasure will be
 performed again - which is a bit similar to compulsion. In a nutshell, they
 correspond to mechanisms which make certain actions more or less likely,
 and this eventually leads to complex goals and behavior - I'd say that's a
 huge reason for pain/pleasure responses to have evolved.

I have the same issue with this description of pleasure. What accounts
for the intensity of peak pleasure experiences?


 Or put another way, what kind of mechanism feels pleasurable or
 painful from the inside?

 The notion of feeling is more complicated because it involves memories and
 complex feedback loops.

 Presumably the answer to this question occurred earlier in the
 evolutionary process than the emergence of fear, surprise, hunger, and
 so on.

 I like these articles/videos on how AGIs may get emergent emotions from
 simple basic drives:

 http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation

 http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-emotion-and-mental-representation-2

 

Re: UD* and consciousness

2012-02-24 Thread acw

On 2/24/2012 22:20, Terren Suydam wrote:

 [...]


All of that makes sense, but pain is more than unpleasant. Pain can be
blindingly horrible... ask any migraine sufferer. What accounts for
the intensity of such experiences? I'm asking this in terms of how,
not why. How does it get to be so intense?

Intense pain can make us scream or do things we would never do
normally - irrational responses, but possibly advantageous when they
first evolved. We could make a mechanistic theory of how pain
manifests. Someone might suppress their reactions to pain with effort,
but that doesn't mean that there weren't circuits triggered that would
have led to certain actions if not for the conscious effort (attention
allocation) involved in preventing such behavior. Maybe we could see
pain as the intense desire to perform certain immediate actions in
response to some stimuli, against our better judgement. In the
mechanistic version (when we look at the architecture and what it
represents) we would see that the most likely outcome would be such
random actions being performed.
Actually accounting for the exact nature of the internal state beyond
its communicable parts (intensity of desire, involuntary reactions,
etc.) might not even be possible for any such theory. At best we might
end up translating "we desire X" as "X is a locally accessible goal;
we expect goal X to lead to pleasure, or to the fulfillment of
subgoals, or to a change of state in what we expect to be our favor,
or ...". Many similar translations could be done for other emotional
responses and more basic drives - the body can only do, but we think
we can want. Thinking about this in detail in a mechanistic framework
tends to end up as a deconstruction/explanation of what exactly
"will" is.
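
That attention-allocation reading can be made concrete (a formalization offered here, not acw's own): pain acts as a priority interrupt that preempts deliberate action selection.

# Pain as a priority interrupt over voluntary action selection --
# an illustrative sketch of the reading above.

def deliberate(goal):
    # slow, voluntary action selection
    return "work_toward:" + goal

def act(goal, pain_level, pain_threshold=0.7):
    if pain_level >= pain_threshold:
        # the reflex wins the attention contest outright; deliberation
        # never runs, which is why it feels "against our better judgement"
        return "withdraw_now"
    return deliberate(goal)

print(act("finish_lecture", pain_level=0.2))  # -> work_toward:finish_lecture
print(act("finish_lecture", pain_level=0.9))  # -> withdraw_now

On this reading, the intensity Terren asks about would correspond to how globally the interrupt suppresses everything else, though, as said above, that still only translates the quale rather than accounting for it.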

Pleasure is similar, but in 

Re: UD* and consciousness

2012-02-23 Thread Terren Suydam
On Thu, Feb 23, 2012 at 4:12 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 On 22 Feb 2012, at 23:07, Terren Suydam wrote:
 Here was the aha! moment. I get it now. Thanks to you and Quentin.
 Even though I am well aware of the consequences of MGA, I was focusing
 on the physical activity of the simulation because I was running it.


 Yes, that's why reasoning and logic are important. It is understandable that
 evolution could not have prepared us for the possibly true 'big picture',
 nor for fundamental science, nor for quickly developing technologies. So it
 takes some effort to free ourselves from built-in prejudices. Nature, a bit
 like bandits, is opportunist. At the same time we don't have to brush away
 that intuition, because it is real, and it has succeeded in bringing us here
 and now, and that has to be respected somehow too.
 Note that the math confirms this misunderstanding between the
 heart/intuition/first-person/right-brain (modeled by Bp & p) and the
 scientist/reasoner/left-brain (modeled by Bp). The tension appears right at
 the start, when a self-aware substructure begins to differentiate itself
 from its neighborhood.




 The fascinating thing for me is, if instead of a scan of Mary, we run
 an AGI that embodies a cognitive architecture that satisfies a theory
 of consciousness (the kind of theory that explains why a particular UM
 is conscious) so that if we assume the theory, it entails that the AGI
 is conscious. The AGI will therefore have 1p indeterminacy even if the
 sim is deterministic, for the same reason Mary does, because there are
 an infinity of divergent computational paths that go through the AGI's
 1p state in any given moment. Trippy!


 Yeah. Trippy is the word.
 Many people react to comp in a strikingly similar way to the way numerous
 other people react to the very potent Salvia divinorum hallucinogen. People
 need a very sincere interest in the fundamentals to appreciate the comp
 consequences, or to appreciate a potent dissociative hallucinogen.
 I should not insist on this. Some would conclude we should make comp
 illegal. Likewise, thinking by oneself is never appreciated in the
 neighborhood of those who want to think for the others, and
 control/manipulate them.

As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, it has to add up to normal. On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?

Terren

 This I disagree with (or don't understand), because if we acknowledge
 that, as you said, even just one emulation can be said to involve
 consciousness, then interacting with even a single Mary is an
 interaction with her soul in Platonia. I think the admission of any
 zombie in any context (assuming comp) is a refutation of comp.


 You are right. That's why I prefer to say that comp entails non-zombies. But
 let me give you a thought experiment which *seems* to show that a notion of
 zombie looks possible with comp, and let us see what is wrong with that.

 Let us start from the beginning of MGA, or something quite similar. You have
 a teacher giving a course in math (say). Then, by some weird event, his brain
 vanishes, but a cosmic explosion, by extreme luck, sends the correct
 information, with respect to that very particular math lesson, to the entry
 of the motor nerve interfaces to the muscles of the teacher, so that the
 lesson continues as normal. The students keep interrupting the teacher,
 asking questions, and everything is fine; the teacher provides the relevant
 answers (by luck). Is the teacher-without-brain a zombie? At first sight, it
 looks like one, even with comp. He behaves like a human, but the processing
 in the brain is just absent. He acts normally by pure chance, with a very
 small amount of peripheral interface brain activity. So what?
 Again, the solution is that the consciousness should not be attributed to
 the body's activity, but to the teaching person and its logically real
 genuine computation (distributed in Platonia). The concrete brain just
 interfaces the person in a relatively correct way, unlike the absent brain +
 lucky cosmic ray, which still attaches it, in this experience, but by pure
 luck. In both cases, with a real brain or without a brain, the consciousness
 is attached to the computations, not to a particular implementation of them,
 which in fine is a construction of your mind, itself attached to an infinity
 of computations.

 We might say that the teacher was a zombie, because he has no brain activity
 at all, but then we might say that even with a brain, he is 

Re: UD* and consciousness

2012-02-23 Thread meekerdb

On 2/23/2012 2:49 PM, Terren Suydam wrote:

As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, it has to add up to normal. On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?


There will be legal and ethical questions about how we and machines should
treat one another. Just being conscious won't mean much though. As Jeremy
Bentham said of animals, "It's not whether they can think, it's whether they
can suffer."


Brent




Re: UD* and consciousness

2012-02-23 Thread Terren Suydam
On Thu, Feb 23, 2012 at 7:21 PM, meekerdb meeke...@verizon.net wrote:
 On 2/23/2012 2:49 PM, Terren Suydam wrote:

 As wild or counter-intuitive as it may be though, it really has no
 consequences to speak of in the ordinary, mundane living of life. To
 paraphrase Eliezer Yudkowsky, it has to add up to normal. On the
 other hand, once AGIs start to appear, or we begin to merge more
 explicitly with machines, then the theories become more important.
 Perhaps then comp will be made illegal, so as to constrain freedoms
 given to machines.  I could certainly see there being significant
 resistance to humans augmenting their brains with computers... maybe
 that would be illegal too, in the interest of control or keeping a
 level playing field. Is that what you mean?


 There will be legal and ethical questions about how we and machines should
 treat one another. Just being conscious won't mean much though.  As Jeremy
 Bentham said of animals, "It's not whether they can think, it's whether they
 can suffer."

 Brent

That brings up the interesting question of how you could explain which
conscious beings are capable of suffering and which ones aren't. I'm
sure some people would make the argument that anything we might call
conscious would be capable of suffering. One way or the other it would
seem to require a theory of consciousness in which the character of
experience can be mapped somehow to 3p processes.

For instance, pain I can make sense of in terms of "what it feels like
for a being's structure to become less organized", though I'm not sure
how to formalize that, and I'm not completely comfortable with that
characterization. However, the reverse idea that pleasure might be
"what it feels like for one's structure to become more organized"
seems like a stretch and hard to connect with the reality of, for
example, a nice massage.

Terren




Re: UD* and consciousness

2012-02-23 Thread meekerdb

On 2/23/2012 6:00 PM, Terren Suydam wrote:

On Thu, Feb 23, 2012 at 7:21 PM, meekerdb meeke...@verizon.net wrote:

On 2/23/2012 2:49 PM, Terren Suydam wrote:

As wild or counter-intuitive as it may be though, it really has no
consequences to speak of in the ordinary, mundane living of life. To
paraphrase Eliezer Yudkowsky, it has to add up to normal. On the
other hand, once AGIs start to appear, or we begin to merge more
explicitly with machines, then the theories become more important.
Perhaps then comp will be made illegal, so as to constrain freedoms
given to machines.  I could certainly see there being significant
resistance to humans augmenting their brains with computers... maybe
that would be illegal too, in the interest of control or keeping a
level playing field. Is that what you mean?


There will be legal and ethical questions about how we and machines should
treat one another. Just being conscious won't mean much though.  As Jeremy
Bentham said of animals, "It's not whether they can think, it's whether they
can suffer."

Brent

That brings up the interesting question of how you could explain which
conscious beings are capable of suffering and which ones aren't. I'm
sure some people would make the argument that anything we might call
conscious would be capable of suffering. One way or the other it would
seem to require a theory of consciousness in which the character of
experience can be mapped somehow to 3p processes.

For instance, pain I can make sense of in terms of "what it feels like
for a being's structure to become less organized", though I'm not sure
how to formalize that, and I'm not completely comfortable with that
characterization. However, the reverse idea that pleasure might be
"what it feels like for one's structure to become more organized"
seems like a stretch and hard to connect with the reality of, for
example, a nice massage.


I don't think becoming more or less organized has any direct bearing on pain or pleasure. 
Physical pain and pleasure are reactions built in by evolution for survival benefits. If a 
fire makes you too hot, you move away from it, even though it's not disorganizing you.  
On the other hand, cancer is generally painless in its early stages.  And psychological 
suffering can be very bad without any physical damage.  I don't think suffering requires 
consciousness, at least not human-like consciousness, but psychological suffering might 
require consciousness in the form of self-reflection.


Brent




Terren






Re: UD* and consciousness

2012-02-22 Thread Quentin Anciaux
2012/2/21 Terren Suydam terren.suy...@gmail.com

 Bruno and others,

 Here's a thought experiment that for me casts doubt on the notion that
 consciousness requires 1p indeterminacy.

 Imagine that we have scanned my friend Mary so that we have a complete
 functional description of her brain (down to some substitution level
 that we are betting on). We run the scan in a simulated classical
 physics. The simulation is completely closed, which is to say,
 deterministic. In other words, we can run the simulation a million
 times for a million years each time, and the state of all of them will be
 identical. Now, when we run the simulation, we can ask her (within the
 context of the simulation) "Are you conscious, Mary?  Are you aware of
 your thoughts?" She replies yes.

 Next, we tweak the simulation in the following way. We plug in a
 source of quantum randomness (random numbers from a quantum random
 number generator) into a simulated water fountain. Now, the simulation
 is no longer deterministic. A million runs of the simulation will
 result in a million different computational states after a million
 years. We ask the same questions of Mary and she replies yes.

 In the deterministic scenario, Mary's computational state is traced an
 infinite number of times in the UD*, but only because of the infinite
 number of ways a particular computational state can be instantiated in
 the UD* (different levels, UD implementing other UDs recursively,
 iteration along the reals, etc). It's a stretch however to say that
 there is 1p indeterminacy, because her computational state as
 implemented in the simulation is deterministic.

 In the second scenario, her computational state is traced in the UD*
 and it is clear there is 1p indeterminacy, as the splitting entailed
 by the quantum number generator brings Mary along, so to speak.

 So if Mary is not conscious in the deterministic scenario, she is a
 zombie. The only way to be consistent with this conclusion is to
 insist that the substitution level must be at the quantum level.

 If OTOH she is conscious, then consciousness does not require 1p
 indeterminacy.


Your deterministic scenario is never alone... there exist (other)
continuations (that you do not run) in the UD deployment that account for
the counterfactuals (and hence 1p indeterminacy). You're not outside the
UD in the comp frame. It's not because your simulation is deterministic
that it accounts for all the measure of Mary from her POV. The simulation is
deterministic only relative to you; from Mary's POV, all continuations
exist at every point.
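
To make the two set-ups concrete, here is a minimal sketch (a toy
illustration, not code from the thread; "step" merely stands in for
whatever updates Mary's emulation at the substitution level):

import random

def step(state, entropy):
    # Toy stand-in for one tick of Mary's brain emulation at the chosen
    # substitution level: the next state is a deterministic function of
    # the current state plus whatever entropy is injected from outside.
    return hash((state, entropy)) & 0xFFFFFFFF

def run(entropy_stream, ticks=10_000):
    state = 0
    for _ in range(ticks):
        state = step(state, next(entropy_stream))
    return state

def closed_sim():
    # Scenario 1: the simulation is completely closed -- no external
    # input, so every rerun retraces exactly the same computation.
    while True:
        yield 0

def quantum_fountain(rng):
    # Scenario 2: a simulated water fountain fed by random numbers; a
    # real set-up would read a quantum RNG device instead of rng.
    while True:
        yield rng.random()

assert run(closed_sim()) == run(closed_sim())   # reruns are identical
print(run(quantum_fountain(random.Random())),   # reruns diverge
      run(quantum_fountain(random.Random())))

The determinism asserted in the first case is a 3p fact about the
runner's bookkeeping; it does nothing to remove the infinity of other
computations in the UD going through the very same states.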

Quentin



 Terren





-- 
All those moments will be lost in time, like tears in rain.




Re: UD* and consciousness

2012-02-22 Thread Bruno Marchal


On 21 Feb 2012, at 21:34, Terren Suydam wrote:

On Tue, Feb 21, 2012 at 12:05 PM, meekerdb meeke...@verizon.net wrote:

On 2/21/2012 8:32 AM, Terren Suydam wrote:

So if Mary is not conscious in the deterministic scenario, she is a
zombie. The only way to be consistent with this conclusion is to
insist that the substitution level must be at the quantum level.

If OTOH she is conscious, then consciousness does not require 1p
indeterminacy.



But is it really either-or?  Isn't it likely there are different kinds and
degrees of consciousness?  I'm not clear on what Bruno's theory says about
this.  On the one hand he says all Lobian machines are (equally?) conscious,
but then he says it depends on the program they are executing.

Brent


I'm not too keen on 'partial zombies'. Partial zombies admit full
zombies, as far as I'm concerned.

The idea that consciousness depends on the program a UM executes is
the point of this thought experiment. The idea that consciousness
itself depends on a multiplicity of computational paths going through
the current computational state is what I'm questioning.



But this is not assumed. Even just one emulation can be said to involve
consciousness. The first person indeterminacy is just a consequence,
and the thought experiment just shows that MATTER, not consciousness,
requires the computations to stabilize below the substitution level. So
consciousness does not depend on the first person indeterminacy; it
comes from the usual comp-attribution of mind to computations, and the
indeterminacy is used only to determine my most probable next first-person
states, on which the 1-indeterminacy bears, as in the WM duplication. OK?


Bruno

PS I hope you will get this answer, because it looks like my server
has some trouble sending mail. More comments later.



http://iridia.ulb.ac.be/~marchal/






Re: UD* and consciousness

2012-02-22 Thread Bruno Marchal


On 21 Feb 2012, at 18:05, meekerdb wrote:


On 2/21/2012 8:32 AM, Terren Suydam wrote:

Bruno and others,

Here's a thought experiment that for me casts doubt on the notion that
consciousness requires 1p indeterminacy.

Imagine that we have scanned my friend Mary so that we have a complete
functional description of her brain (down to some substitution level
that we are betting on). We run the scan in a simulated classical
physics. The simulation is completely closed, which is to say,
deterministic. In other words, we can run the simulation a million
times for a million years each time, and the state of all of them will be
identical. Now, when we run the simulation, we can ask her (within the
context of the simulation) "Are you conscious, Mary?  Are you aware of
your thoughts?" She replies yes.

Next, we tweak the simulation in the following way. We plug in a
source of quantum randomness (random numbers from a quantum random
number generator) into a simulated water fountain. Now, the simulation
is no longer deterministic. A million runs of the simulation will
result in a million different computational states after a million
years. We ask the same questions of Mary and she replies yes.

In the deterministic scenario, Mary's computational state is traced an
infinite number of times in the UD*, but only because of the infinite
number of ways a particular computational state can be instantiated in
the UD* (different levels, UD implementing other UDs recursively,
iteration along the reals, etc). It's a stretch however to say that
there is 1p indeterminacy, because her computational state as
implemented in the simulation is deterministic.

In the second scenario, her computational state is traced in the UD*
and it is clear there is 1p indeterminacy, as the splitting entailed
by the quantum number generator brings Mary along, so to speak.

So if Mary is not conscious in the deterministic scenario, she is a
zombie. The only way to be consistent with this conclusion is to
insist that the substitution level must be at the quantum level.

If OTOH she is conscious, then consciousness does not require 1p
indeterminacy.


But is it really either-or?  Isn't it likely there are different kinds and
degrees of consciousness?  I'm not clear on what Bruno's theory says about
this.  On the one hand he says all Lobian machines are (equally?) conscious,
but then he says it depends on the program they are executing.


Imagine that I am duplicated in W and M. I would say that the guy in M
and the guy in W are equally conscious, and that both are me, although
they will feel very different and have different contents of
consciousness.
In that sense I would say that all Löbian machines are equally
conscious. Of course the Löbian humans have very different experiences
from the jumping spider, and even more different from Peano Arithmetic.


As I said in another post today, I am not sure why Terren thinks that
the first person indeterminacy is needed for consciousness. First
person indeterminacy is implied by the self-multiplication (in the UD,
say), as a consequence of comp, but is not presented as something
needed for the existence of consciousness. Mary is conscious in both
scenarios. But comp implies, as Quentin said, that she cannot escape
the indeterminacy of her many continuations in the UD. It is hoped
that the QM indeterminacy is just the reflection of the comp
indeterminacy, so that QM confirms comp. The Everett multiplication of
populations of machines in QM would also be an empirical reason to
assess that comp does not lead to solipsism (which I would take as a
refutation of comp, if that happened to be the case). The appearance of
a quantum logic in the material hypostases is a reassuring step in
that direction.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UD* and consciousness

2012-02-22 Thread Bruno Marchal


On 22 Feb 2012, at 15:49, Terren Suydam wrote:

On Wed, Feb 22, 2012 at 9:27 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 21 Feb 2012, at 18:05, meekerdb wrote:

 But is it really either-or?  Isn't it likely there are different kinds and
 degrees of consciousness?  I'm not clear on what Bruno's theory says about
 this.  On the one hand he says all Lobian machines are (equally?) conscious,
 but then he says it depends on the program they are executing.



 Imagine that I am duplicated in W and M. I would say that the guy in M and
 the guy in W are equally conscious, and that both are me, although they
 will feel very different and have different contents of consciousness.
 In that sense I would say that all Löbian machines are equally conscious.
 Of course the Löbian humans have very different experiences from the
 jumping spider, and even more different from Peano Arithmetic.

 As I said in another post today, I am not sure why Terren thinks that the
 first person indeterminacy is needed for consciousness. First person
 indeterminacy is implied by the self-multiplication (in the UD, say), as a
 consequence of comp, but is not presented as something needed for the
 existence of consciousness. Mary is conscious in both scenarios. But comp
 implies, as Quentin said, that she cannot escape the indeterminacy of her
 many continuations in the UD. It is hoped that the QM indeterminacy is
 just the reflection of the comp indeterminacy, so that QM confirms comp.
 The Everett multiplication of populations of machines in QM would also be
 an empirical reason to assess that comp does not lead to solipsism (which
 I would take as a refutation of comp, if that happened to be the case).
 The appearance of a quantum logic in the material hypostases is a
 reassuring step in that direction.

Bruno


Hey Bruno,

I seem to remember reading a while back that you were saying that the
1p consciousness arises necessarily from the many paths in the UD. I'm
glad to clear up my misunderstanding.


OK. What happens, if there is no flaw in the UDA-MGA, is that your  
futures can only be determined by the statistics bearing on all  
computations going through your state.


The 1p nature of that consciousness will rely on the logic of
(machine) knowledge (or other modalities), which puts some structure on
the set of accessible computational states.


Sorry for being unclear, and for the many misspellings and other
grammatical-tense atrocities.


The problem is also related to the difficulty of the subject, which is
necessarily counter-intuitive (in the comp theory), so that we have
some trouble using natural language, which relies on natural
intuitive prejudices.


In fact I can understand why it might look like I was saying that the
1p needs the many computations. The reality is that one is enough, but
the other computations, 1p-indistinguishable, are there too, and even
for a slight interval of consciousness, we must take into account that
we are in all of them, for the correct statistics. So the 1p is
attached to an infinity of computations, once you attach it to just
one computation.
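
To fix ideas, the UD invoked throughout this thread is the Universal
Dovetailer: a program that generates all programs and executes them in
interleaved ("dovetailed") fashion, so that each one is pursued
arbitrarily far. A minimal sketch (my illustration; step_fn(p, n) is an
assumed hook that runs program p for its n-th step on some universal
machine):

from itertools import count

def universal_dovetailer(step_fn):
    # In phase k, each of the first k programs is advanced by one more
    # step, so every program eventually receives arbitrarily many steps
    # even though infinitely many programs are in play.
    for phase in count(1):
        for program in range(phase):
            step = phase - program
            yield program, step, step_fn(program, step)

# Example with dummy states: program 3 reaches its 2nd step in phase 5.
for program, step, state in universal_dovetailer(lambda p, n: (p, n)):
    if (program, step) == (3, 2):
        break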





However I don't understand how Mary could have anything but a single
continuation given the determinism of the sim. How could a
counterfactual arise in this thought experiment? Can you give a
concrete example?


You should really find this by yourself, honestly. It is the only way  
to be really convinced. Normally this follows from the reasoning.

Please ask if you don't find your error.
Oh! I see Quentin found it.

Your mistake consists in believing that when you simulate your friend
Mary in the deterministic sim, completely closed, as you say, you have
succeeded in preventing Mary, from her own pov, from escaping your
simulation. Her 1-indeterminacy remains unchanged, and bears on the
many computations, existing by the + and * laws, or in the UD.


The counterfactuals, and the indeterminacy, come from the existence of
an infinity of computations generating Mary's state. Your
deterministic sim can be run a million times; it will not change
Mary's indeterminacy, relative to the infinities of diverging
(infinite) computations going through her 1-state.


You might also reason like this. The consciousness of Mary is only in
Platonia. We have abandoned the idea that consciousness is related to
any singular physical activity. Her consciousness and other
1p-attributes depend only on her arithmetical relative state, relative
to the infinity of UMs running her in Platonia. In that sense, all the
Marys you interact with are zombies, but this is just due to the trivial
fact that you can interact only with Mary's body or local 3p
description. Once you grasp that you too are in Platonia, there is no
more zombie, because bodies become only local interfaces between souls
in Platonia. But intuition fails us, and that's why we need the math
and the computer science.


The 

Re: UD* and consciousness

2012-02-22 Thread acw

On 2/22/2012 14:49, Terren Suydam wrote:

However I don't understand how Mary could have anything but a single
continuation given the determinism of the sim. How could a
counterfactual arise in this thought experiment? Can you give a
concrete example?

Mary's brain/SIM implementation is deterministic. We would associate her 
1p with all machines that happen to implement Mary's current state at 
the substitution level chosen. If Mary is lucky (or not), she might find 
herself in your digital physics VR simulation, and thus your observation 
and inference of the 3p simulation would match Mary's 1p in that 
simulation. However, consider that in the UD there would be many 
implementations of Mary's mind at that substitution level, some including 
the environment you chose for her. These implementations may be layered 
many times over: for example, those implementing your physics and 
eventually you, and those implementing the physics, the simulation, and 
eventually her.

Now imagine your simulation has some irrelevant bit of functionality, 
let's say an opcode RAND or some register 323; that bit of functionality 
was never used in Mary's implementation or in the implementation of any 
underlying layers, it's just there in your implementation of the 
simulation. Mary's consciousness would never be changed by how you 
implemented RAND or r323, but let's say she eventually decides to do a 
bit of programming in her simulation and uses that opcode and/or register 
by accident. What would happen? There can be many machines implementing 
(or even not implementing at all) said opcode and/or register, but since 
Mary's own experience does not depend on it at all, all that part is 
indeterminate.

Now, instead of register 323 or RAND, take everything that Mary does not 
depend on and that is not inconsistent with her history as subject to 1p 
invariance in the UD - you'll find infinities of possible machines 
implementing Mary, even cases where the simulation is self-contained and 
completely disconnected from your physical world, running completely in 
the UD. Of course, I do wonder how stable such a VR reality would be - 
it might not have very high measure, unlike our current quantum world, 
where we have degrees of freedom like these everywhere (if MWI).
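
To put the register-323 point in code (my sketch, not acw's; the
instruction set and Mary's toy program are invented for the example):
two machines that differ only in how they implement an opcode Mary
never executes produce identical traces of Mary-states, so nothing in
her 1p can distinguish which machine is "really" running her.

import random

PROGRAM = [("INC",), ("DBL",), ("INC",)] * 5   # Mary never uses RAND

def interpret(program, rand_impl):
    # rand_impl is this machine's implementation of the RAND opcode;
    # since PROGRAM never executes RAND, it can never affect the trace.
    state, trace = 1, []
    for (op,) in program:
        if op == "INC":
            state += 1
        elif op == "DBL":
            state *= 2
        elif op == "RAND":
            state = rand_impl()
        trace.append(state)
    return trace

stubbed = interpret(PROGRAM, rand_impl=lambda: 0)
randomized = interpret(PROGRAM, rand_impl=lambda: random.getrandbits(32))
assert stubbed == randomized   # the unused functionality is 1p-invariant

The moment Mary's program does touch RAND or r323, the implementations
diverge, and which one "she" continues in is exactly the kind of
question the 1p indeterminacy answers statistically.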





Re: UD* and consciousness

2012-02-22 Thread Terren Suydam
On Wed, Feb 22, 2012 at 12:29 PM, Bruno Marchal marc...@ulb.ac.be wrote:

 On 22 Feb 2012, at 15:49, Terren Suydam wrote:
 Hey Bruno,

 I seem to remember reading a while back that you were saying that the
 1p consciousness arises necessarily from the many paths in the UD. I'm
 glad to clear up my misunderstanding.


 OK. What happens, if there is no flaw in the UDA-MGA, is that your futures
 can only be determined by the statistics bearing on all computations going
 through your state.

 The 1p nature of that consciousness will rely on the logic of (machine)
 knowledge (or other modalities), which puts some structure on the set of
 accessible computational states.

 Sorry for being unclear, and for the many misspellings and other
 grammatical-tense atrocities.

 The problem is also related to the difficulty of the subject, which is
 necessarily counter-intuitive (in the comp theory), so that we have some
 trouble using natural language, which relies on natural intuitive
 prejudices.

 In fact I can understand why it might look like I was saying that the 1p
 needs the many computations. The reality is that one is enough, but the
 other computations, 1p-indistinguishable, are there too, and even for a
 slight interval of consciousness, we must take into account that we are in
 all of them, for the correct statistics. So the 1p is attached to an
 infinity of computations, once you attach it to just one computation.

Indeed, it is very counter intuitive and full of subtleties. I have
been lurking for a few years now and I am finding that only by
engaging with you and others on the list do I begin to comprehend the
subtleties.


 However I don't understand how Mary could have anything but a single
 continuation given the determinism of the sim. How could a
 counterfactual arise in this thought experiment? Can you give a
 concrete example?


 You should really find this by yourself, honestly. It is the only way to be
 really convinced. Normally this follows from the reasoning.
 Please ask if you don't find your error.
 Oh! I see Quentin found it.

 Your mistake consists in believing that when you simulate your friend Mary
 in the deterministic sim, completely closed, as you say, you have succeeded
 in preventing Mary, from her own pov, from escaping your simulation. Her
 1-indeterminacy remains unchanged, and bears on the many computations,
 existing by the + and * laws, or in the UD.

 The counterfactuals, and the indeterminacy, come from the existence of an
 infinity of computations generating Mary's state. Your deterministic sim can
 be run a million times; it will not change Mary's indeterminacy,
 relative to the infinities of diverging (infinite) computations going
 through her 1-state.

 You might also reason like this. The consciousness of Mary is only in
 Platonia. We have abandoned the idea that consciousness is related to any
 singular physical activity.

Here was the aha! moment. I get it now. Thanks to you and Quentin.
Even though I am well aware of the consequences of MGA, I was focusing
on the physical activity of the simulation because I was running
it.

The fascinating thing for me is this: if instead of a scan of Mary, we run
an AGI that embodies a cognitive architecture satisfying a theory
of consciousness (the kind of theory that explains why a particular UM
is conscious), then, assuming the theory, it follows that the AGI
is conscious. The AGI will therefore have 1p indeterminacy even if the
sim is deterministic, for the same reason Mary does: because there is
an infinity of divergent computational paths going through the AGI's
1p state in any given moment. Trippy!

 Her consciousness and other 1p-attributes
 depend only on her arithmetical relative state, relative to the infinity
 of UMs running her in Platonia. In that sense, all the Marys you interact
 with are zombies, but this is just due to the trivial fact that you can
 interact only with Mary's body or local 3p description.

This I disagree with (or don't understand) because if we acknowledge
that, as you said, even just one emulation can be said to involve
consciousness, then interacting with even a single Mary is an
interaction with her soul in Platonia. I think the admission of any
zombie in any context (assuming comp) is a refutation of comp.

Terren

 Once you grasp that
 you too are in Platonia, there is no more zombie, because bodies become only
 local interfaces between souls in Platonia. But intuition fails us, and
 that's why we need the math and the computer science.

 The indeterminacy might be too big, and the comp counterfactuals might be
 too large, but that remains to be proved, and would be a refutation of comp
 (CTM, mechanism).

 Let me comment on your last paragraphs (the entire post is below for ease).


 In the second scenario, her computational state is traced in the UD*
 and it is clear there is 1p indeterminacy, as the splitting entailed
 by the quantum number generator brings Mary along, so to speak.



Re: UD* and consciousness

2012-02-21 Thread meekerdb

On 2/21/2012 8:32 AM, Terren Suydam wrote:

Bruno and others,

Here's a thought experiment that for me casts doubt on the notion that
consciousness requires 1p indeterminacy.

Imagine that we have scanned my friend Mary so that we have a complete
functional description of her brain (down to some substitution level
that we are betting on). We run the scan in a simulated classical
physics. The simulation is completely closed, which is to say,
deterministic. In other words, we can run the simulation a million
times for a million years each time, and the state of all of them will be
identical. Now, when we run the simulation, we can ask her (within the
context of the simulation) "Are you conscious, Mary?  Are you aware of
your thoughts?" She replies yes.

Next, we tweak the simulation in the following way. We plug in a
source of quantum randomness (random numbers from a quantum random
number generator) into a simulated water fountain. Now, the simulation
is no longer deterministic. A million runs of the simulation will
result in a million different computational states after a million
years. We ask the same questions of Mary and she replies yes.

In the deterministic scenario, Mary's computational state is traced an
infinite number of times in the UD*, but only because of the infinite
number of ways a particular computational state can be instantiated in
the UD* (different levels, UD implementing other UDs recursively,
iteration along the reals, etc). It's a stretch however to say that
there is 1p indeterminacy, because her computational state as
implemented in the simulation is deterministic.

In the second scenario, her computational state is traced in the UD*
and it is clear there is 1p indeterminacy, as the splitting entailed
by the quantum number generator brings Mary along, so to speak.

So if Mary is not conscious in the deterministic scenario, she is a
zombie. The only way to be consistent with this conclusion is to
insist that the substitution level must be at the quantum level.

If OTOH she is conscious, then consciousness does not require 1p indeterminacy.


But is it really either-or?  Isn't it likely there are different kinds and degrees of 
consciousness?  I'm not clear on what Bruno's theory says about this.  On the one hand he 
says all Lobian machines are (equally?) conscious, but then he says it depends on the 
program they are executing.


Brent




Re: UD* and consciousness

2012-02-21 Thread Craig Weinberg
On Feb 21, 11:32 am, Terren Suydam terren.suy...@gmail.com wrote:


 So if Mary is not conscious in the deterministic scenario, she is a
 zombie. The only way to be consistent with this conclusion is to
 insist that the substitution level must be at the quantum level.

 If OTOH she is conscious, then consciousness does not require 1p 
 indeterminacy.


Or, there may be no substitution level at all, in which case the
deterministic simulation is a brain puppet, which responds 'yes' when
you pull the right string. For the other simulation, I'm not sure why
the quantum-random numbers wouldn't change 'Mary' enough to give
different answers. You have a brain puppet which is flipping
coins...what is the presumed effect of these flips?

If we consider instead that the brain (and all of physics) is more
like a mass-shadow of experienced events, then we can understand how
duplicating the shadow of a tree precisely doesn't render a living
tree as the result. To apply this metaphor to our reality, you would
have to turn it around to realize that in place of a tree and shadow,
there is a dialectic unity where Thesis = Figurative private
phenomenology (tree-like experience) and Antithesis = Literal public
empiricism (material tree).

Since the thesis is fundamental, any change to the antithesis will
simultaneously be changing the thesis, as the thesis is an
*experience* - a sensorimotive fugue. Emulating the antithesis
however, like trying to cast a shadow of a shadow, yields back only
universal generic defaults and not idiosyncratic identity grounded in
cohesive experience. There is no 'here' there. You have a hologram of
a human brain with no I associated with it.

The indeterminacy of 1p is caused by the authoritative authenticity of
the thesis, not by randomness. 1p awareness could even be
deterministic (and it probably is in matter below the cellular
threshold) but as awareness scales up through experience over
generations and lifetimes, it condenses as qualitative mass:
significance. This is figurative mass, not literal mass of a
pseudosubstance. It is 'importance', 'specialness', 'meaning',
'feeling', etc. If this signifying condensation is the thesis, we can
understand it by looking at the a-signifying antithesis of mass
through gravity and density. What happens to motive power and autonomy
under high gravity? It is crushed and absorbed into the collective
inertia. Separate bodies lose their power to escape the pull...they
fall. When this happens to us subjectively, our thesis falls as well -
asleep. We feel 'down'. We are 'crushed', depressed, deflated, low,
bummed, etc.

Because the thesis and antithesis are symmetrical however,
significance scales up as freedom, autonomy, high spirits, lifted
moods, grandeur, delusions of grandeur, mania, etc. As celebrity and
wealth are associated with super-power, freedom, and luxury, the
increased autonomy of living organisms is arrived at through
historical narrative. You cannot clone Beyonce and expect to make a
celebrity automatically. The celebrity-ness is not in her body
(although her body image is already part of a cultural narrative which
is being exalted at this time, so body similarity gives a head start).

What I'm getting at is that human consciousness is the latest chapter
in a long story of famous molecules that became famous cells who
became famous organisms. A simulation is a mere portrait of the fruits
of this fame. The costumes and scenery are there, but not the heroes
and heroines. The simulation is not from the right family, has not
attended the right schools, did not win American Idol. It isn't a who,
it is a pretender - a what. It has no why, only how.

Don't be fooled by the four dimensionality of matter's appearance. It
is still a shadow/antithesis of our perception of all perceptions of
it.

Craig




Re: UD* and consciousness

2012-02-21 Thread Terren Suydam
On Tue, Feb 21, 2012 at 2:56 PM, Craig Weinberg whatsons...@gmail.com wrote:
 On Feb 21, 11:32 am, Terren Suydam terren.suy...@gmail.com wrote:

 So if Mary is not conscious in the deterministic scenario, she is a
 zombie. The only way to be consistent with this conclusion is to
 insist that the substitution level must be at the quantum level.

 If OTOH she is conscious, then consciousness does not require 1p 
 indeterminacy.


 Or, there may be no substitution level at all
[snip]
(I only included the relevant parts of your response)

My thought experiment assumes comp.

T




Re: UD* and consciousness

2012-02-21 Thread Terren Suydam
On Tue, Feb 21, 2012 at 12:05 PM, meekerdb meeke...@verizon.net wrote:
 On 2/21/2012 8:32 AM, Terren Suydam wrote:
 So if Mary is not conscious in the deterministic scenario, she is a
 zombie. The only way to be consistent with this conclusion is to
 insist that the substitution level must be at the quantum level.

 If OTOH she is conscious, then consciousness does not require 1p
 indeterminacy.


 But is it really either-or?  Isn't it likely there are different kinds and
 degrees of consciousness?  I'm not clear on what Bruno's theory says about
 this.  On the one hand he says all Lobian machines are (equally?) conscious,
 but then he says it depends on the program they are executing.

 Brent

I'm not too keen on 'partial zombies'. Partial zombies admit full
zombies, as far as I'm concerned.

The idea that consciousness depends on the program a UM executes is
the point of this thought experiment. The idea that consciousness
itself depends on a multiplicity of computational paths going through
the current computational state is what I'm questioning.

Terren




Re: UD* and consciousness

2012-02-21 Thread meekerdb

On 2/21/2012 12:34 PM, Terren Suydam wrote:

On Tue, Feb 21, 2012 at 12:05 PM, meekerdb meeke...@verizon.net wrote:

On 2/21/2012 8:32 AM, Terren Suydam wrote:

So if Mary is not conscious in the deterministic scenario, she is a
zombie. The only way to be consistent with this conclusion is to
insist that the substitution level must be at the quantum level.

If OTOH she is conscious, then consciousness does not require 1p
indeterminacy.


But is it really either-or?  Isn't it likely there are different kinds and
degrees of consciousness?  I'm not clear on what Bruno's theory says about
this.  On the one hand he says all Lobian machines are (equally?) conscious,
but then he says it depends on the program they are executing.

Brent

I'm not too keen on 'partial zombies'. Partial zombies admit full
zombies, as far as I'm concerned.


When I refer to degrees of consciousness I'm not talking about partial zombies (beings 
that act exactly like humans but are not fully conscious).  I'm talking about dogs and 
chimpanzees and Watson and spiders.




The idea that consciousness depends on the program a UM executes is
the point of this thought experiment. The idea that consciousness
itself depends on a multiplicity of computational paths going through
the current computational state is what I'm questioning.


Yes, I think that's a dubious proposition.  Although brains no doubt have some degree of 
inherent quantum randomness, it's clear that intelligent behavior need not depend on that.


But I'm not sure your thought experiment proves its point.  It's about simulated Mary.  
Suppose consciousness depended on quantum entanglements of brain structures with the 
environment (and they must be entangled in order for the brain to be quasi-classical).  
Then in your simulation Mary would be a zombie (because your computation is purely 
classical and you're not simulating the quantum entanglements).  But an actual macroscopic 
device substituted for part of real Mary's brain would be quantum entangled with the 
environment even if it were at the neuron level.  So consciousness would, ex hypothesi, 
still occur - although it might be different in some way.


Brent



Terren






Re: UD* and consciousness

2012-02-21 Thread Terren Suydam
On Tue, Feb 21, 2012 at 4:01 PM, meekerdb meeke...@verizon.net wrote:
 On 2/21/2012 12:34 PM, Terren Suydam wrote:
 The idea that consciousness depends on the program a UM executes is
 the point of this thought experiment. The idea that consciousness
 itself depends on a multiplicity of computational paths going through
 the current computational state is what I'm questioning.


 Yes, I think that's a dubious proposition.  Although brains no doubt have
 some degree of inherent quantum randomness, it's clear that intelligent
 behavior need not depend on that.

 But I'm not sure your thought experiment proves its point.  It's about
 simulated Mary.  Suppose consciousness depended on quantum entanglements of
 brain structures with the environment (and they must be entangled in order
 for the brain to be quasi-classical).  Then in your simulation Mary would be
 a zombie (because your computation is purely classical and you're not
 simulating the quantum entanglements).  But an actual macroscopic device
 substituted for part of real Mary's brain would be quantum entangled with
 the environment even if it were at the neuron level.  So consciousness
 would, ex hypothesi, still occur - although it might be different in some way.

 Brent

Why must consciousness depend on quantum entanglements for the brain
to be quasi-classical?

Terren




Re: UD* and consciousness

2012-02-21 Thread meekerdb

On 2/21/2012 2:45 PM, Terren Suydam wrote:

On Tue, Feb 21, 2012 at 4:01 PM, meekerdb meeke...@verizon.net wrote:

On 2/21/2012 12:34 PM, Terren Suydam wrote:

The idea that consciousness depends on the program a UM executes is
the point of this thought experiment. The idea that consciousness
itself depends on a multiplicity of computational paths going through
the current computational state is what I'm questioning.


Yes, I think that's a dubious proposition.  Although brains no doubt have
some degree of inherent quantum randomness, it's clear that intelligent
behavior need not depend on that.

But I'm not sure your thought experiment proves its point.  It's about
simulated Mary.  Suppose consciousness depended on quantum entanglements of
brain structures with the environment (and they must be entangled in order
for the brain to be quasi-classical).  Then in your simulation Mary would be
a zombie (because your computation is purely classical and you're not
simulating the quantum entanglements).  But an actual macroscopic device
substituted for part of real Mary's brain would be quantum entangled with
the environment even if it were at the neuron level.  So consciousness
would, ex hypothesi, still occur - although it might be different in some way.

Brent

Why must consciousness depend on quantum entanglements for the brain
to be quasi-classical?


The best theory of how the (quasi-)classical world arises from the underlying quantum 
world depends on decoherence, i.e. macroscopic things appear classical because they are 
entangled with the environment, which makes a few variables, like position and momentum, 
quasi-classical (cf. Zurek or Schlosshauer).  If a thing is isolated from the environment 
it may be able to exist in a superposition of states, i.e. be non-classical; although 
internal degrees of freedom might also produce quasi-classical dynamics.
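
In the textbook presentation (cf. Zurek), the suppression of
interference shows up in the reduced density matrix of the system; a
standard sketch in LaTeX, not specific to this thread:

% System S entangled with environment E:
\[ |\Psi\rangle = \sum_i c_i\, |s_i\rangle \otimes |E_i\rangle \]
% Tracing out E gives the reduced density matrix of S:
\[ \rho_S = \mathrm{Tr}_E\, |\Psi\rangle\langle\Psi|
          = \sum_{i,j} c_i c_j^{*}\, \langle E_j | E_i \rangle\, |s_i\rangle\langle s_j| \]
% As the environment records which state S is in, <E_j|E_i> -> delta_ij,
% so the off-diagonal (interference) terms vanish and rho_S becomes an
% effectively classical mixture, diagonal in the pointer basis {|s_i>}.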


Brent



Terren






Re: UD* and consciousness

2012-02-21 Thread Terren Suydam
On Tue, Feb 21, 2012 at 7:47 PM, meekerdb meeke...@verizon.net wrote:
 On 2/21/2012 2:45 PM, Terren Suydam wrote:

 On Tue, Feb 21, 2012 at 4:01 PM, meekerdb meeke...@verizon.net wrote:

 On 2/21/2012 12:34 PM, Terren Suydam wrote:

 The idea that consciousness depends on the program a UM executes is
 the point of this thought experiment. The idea that consciousness
 itself depends on a multiplicity of computational paths going through
 the current computational state is what I'm questioning.


 Yes, I think that's a dubious proposition.  Although brains no doubt have
 some degree of inherent quantum randomness, it's clear that intelligent
 behavior need not depend on that.

 But I'm not sure your thought experiment proves its point.  It's about
 simulated Mary.  Suppose consciousness depended on quantum entanglements of
 brain structures with the environment (and they must be entangled in order
 for the brain to be quasi-classical).  Then in your simulation Mary would be
 a zombie (because your computation is purely classical and you're not
 simulating the quantum entanglements).  But an actual macroscopic device
 substituted for part of real Mary's brain would be quantum entangled with
 the environment even if it were at the neuron level.  So consciousness
 would, ex hypothesi, still occur - although it might be different in some way.

 Brent

 Why must consciousness depend on quantum entanglements for the brain
 to be quasi-classical?


 The best theory of how the (quasi-)classical world arises from the
 underlying quantum world depends on decoherence, i.e. macroscopic things
 appear classical because they are entangled with the environment, which makes
 a few variables, like position and momentum, quasi-classical (cf. Zurek or
 Schlosshauer).  If a thing is isolated from the environment it may be able
 to exist in a superposition of states, i.e. be non-classical; although
 internal degrees of freedom might also produce quasi-classical dynamics.

OK, but that assumes more than is necessary for the argument. I don't
think Bruno's theory demands an account of how the classical arises
from the quantum. The brain (or its functional equivalent) just
implements computations at or above some substitution level we are
willing to bet on... whether they are entangled with a level lower
than the substitution level is irrelevant.

Terren
