Re: Consciousness is information?

2009-05-23 Thread Brent Meeker

Bruno Marchal wrote:
> ...
>
> (*) Once and for all, when I say I am a modal realist, I really mean
> this: "I have an argument showing that the comp theory imposes modal
> realism".  I am really not defending any theory. I am just showing
> that the comp theory leads to precise and verifiable/refutable facts.
> I am a logician: all I show people is that IF you believe this
> THEN you have to believe that. It is part of my personal religion that
> my personal religion is personal and private (and evolvable).
>   

I understand that.  And just so I'm not misunderstood, when I refer to 
"your theory" I don't mean to imply that it is something you believe. I 
just mean the theory that you have elucidated.  I understand that you 
could put forth several different theories without necessarily believing 
any of them.

Brent
"Nobody believes a theory except the guy who thought of it.  Everybody 
believes an experiment except the guy who did it."
 --- Leon Lederman




Re: Consciousness is information?

2009-05-23 Thread Brent Meeker

Kelly wrote:
>
> On May 23, 12:54 pm, Brent Meeker wrote:
>   
>> Either of these ideas is definite
>> enough that they could actually be implemented (in contrast to many
>> philosophical ideas about consciousness).
>> 
>
> Once you had implemented the ideas, how would you then know whether
> conscious experience had actually been produced, as opposed to the
> mere appearance of it?
>
> If you don't have a way of definitively detecting the hoped for result
> of consciousness, then how exactly does being "implementable" really
> help?  You run your test...and then what?

It's no different than any other theory (including yours).  You draw some 
conclusions about what should happen if it's correct, you try it, and you 
see if your predictions work out.  If I program/build my robot a certain 
way, will it seem as conscious as a dog or a chimpanzee or a human?  Can 
I adjust my design to match any of those?  Can I change my brain in a 
certain way and change my experienced consciousness in a predictable 
way?  If so, I place some credence in my theory of consciousness.  If 
not - it's back to the drawing board.  Many things are not observed 
directly.  No theory is certain; it may be true, but we can never be 
certain it's true.

Brent




Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


OK. So, now, Kelly, just to understand what you mean by your theory, I  
have to ask you what your theory predicts in case of self- 
multiplication.
You have to see that, personally, I don't have a theory other than the  
assumption that the brain is emulable by a Turing machine, and by  
brain I mean any portion of my local neighborhood needed for surviving  
the comp functional substitution. This is the comp hypothesis.

Because we are both modal realists(*), it is true that worlds (histories)
with white rabbits exist, and from the inside they are as actual as our
present state. But then I say that, as a consequence of the comp hyp, there
is a relative probability or credibility measure on those histories.
To see where those probabilities come from, you have to
understand that 1) you can be multiplied (that is, read, copied (cut), and
pasted in Washington AND Moscow (say)), and 2) you are multiplied (by
2^aleph_zero, at each instant, with a comp definition of instant not
related in principle to any form of physical time).
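
(A toy illustration of the statistics behind point 1, under drastic
simplification: model each duplication as a binary W/M branching and
count the resulting first-person histories. This is only a finite
Python sketch with made-up parameters, not the comp formalism itself:)

    from itertools import product

    # Each duplication reads, cuts, and pastes "you" in Washington (W)
    # and Moscow (M); after n rounds there are 2**n first-person
    # histories, all equally weighted in this toy model.
    n = 20
    near_half = sum(
        1 for h in product("WM", repeat=n)
        if abs(h.count("W") / n - 0.5) < 0.15
    )
    # Fraction of histories whose W-frequency is within 0.15 of 1/2:
    print(near_half / 2 ** n)   # about 0.74 here; tends to 1 as n grows

As n grows, almost every first-person history looks statistically
random, with W-frequency near 1/2: that concentration is the kind of
relative measure on histories that the comp hyp asks a theory to
predict.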

What does your theory predict concerning your expectations in such an
experience/experiment?

The fact is that your explanation, that we are in a typical universe
because those exist as well, just does not work with the comp hyp. It
does not work because it does not explain why we REMAIN in such
typical worlds. It seems to me that, as far as I can put meaning on your
view, the probability that I will see a white rabbit in two seconds is as
great as the probability that I will see anything else, and this is in
contradiction with the facts. What makes us stay in apparently lawful
histories?

What does your theory predict about agony and death, from the first
person point of view? This is an extreme case where comp is markedly
in opposition with "Aristotelian naturalism".

Maybe you could study the UDA, and directly tell me at which step
your "theory" departs from the comp hyp. It has to depart, because you
say below that we are in a quantum reality by chance, whereas the comp
hyp explains why we have to be (even after death) in a quantum reality.

Bruno

(*) Once and for all, when I say I am a modal realist, I really mean
this: "I have an argument showing that the comp theory imposes modal
realism".  I am really not defending any theory. I am just showing
that the comp theory leads to precise and verifiable/refutable facts.
I am a logician: all I show people is that IF you believe this
THEN you have to believe that. It is part of my personal religion that
my personal religion is personal and private (and evolvable).



On 23 May 2009, at 23:56, Kelly Harmon wrote:

>
> On Sat, May 23, 2009 at 8:47 AM, Bruno Marchal wrote:
>>
>>
>>> To repeat my
>>> earlier Chalmers quote, "Experience is information from the inside;
>>> physics is information from the outside."  It is this subjective
>>> experience of information that provides meaning to the otherwise
>>> completely abstract "platonic" symbols.
>>
>>
>> I insisted on this well before Chalmers. We are agreeing on this.
>> But then you associate consciousness with the experience of  
>> information.
>> This is what I told you. I can understand the relation between
>> consciousness and information content.
>
> Information.  Information content.  Hmmm.  Well, I'm not entirely
> sure what you're saying here.  Maybe I don't have a problem with this,
> but maybe I do.  Maybe we're really saying the same thing here, but
> maybe we're not.  Hm.
>
>
>>> Note that I don't have Bruno's fear of white rabbits.
>>
>> Then you disagree with all readers of David Lewis, including David
>> Lewis himself, who recognizes this inflation of too many realities as a
>> weakness of his modal realism. My point is that the comp constraints
>> lead to a solution of that problem, indeed a solution close to the
>> quantum Everett solution. But the existence of white rabbits, and thus
>> the correctness of comp, remains to be tested.
>
> True, Lewis apparently saw it as a cost, BUT not so high a cost as to
> abandon modal realism.  I don't even see it as a high cost; I see it
> as a logical consequence.  Again, it's easy to imagine a computer
> simulation/virtual reality in which a conscious observer would see
> disembodied talking heads and flying pigs.  So it certainly seems
> possible for a conscious being to be in a state of observing an
> unattached talking head.
>
> Given that it's possible, why wouldn't it be actual?
>
> The only reason to think that it wouldn't be actual is that our
> external objectively existing physical universe doesn't have physical
> laws that can lead easily to the existence of such talking heads to be
> observed.  But once you've abandoned the external universe and
> embraced platonism, then where does the constraint against observing
> talking heads come from?
>
> Assuming platonism, I can explain why "I" don't see talking heads:
> because every possible Kelly is realized, and that includes a Kelly
> who doesn't observe disembodied talking heads and who doesn't know
> anyone who has ever seen such a head.

Re: Consciousness is information?

2009-05-23 Thread Kelly



On May 23, 12:54 pm, Brent Meeker wrote:
>
> Either of these ideas is definite
> enough that they could actually be implemented (in contrast to many
> philosophical ideas about consciousness).

Once you had implemented the ideas, how would you then know whether
conscious experience had actually been produced, as opposed to the
mere appearance of it?

If you don't have a way of definitively detecting the hoped-for result
of consciousness, then how exactly does being "implementable" really
help?  You run your test...and then what?





Re: Consciousness is information?

2009-05-23 Thread Kelly Harmon

On Sat, May 23, 2009 at 8:47 AM, Bruno Marchal wrote:
>
>
>> To repeat my
>> earlier Chalmers quote, "Experience is information from the inside;
>> physics is information from the outside."  It is this subjective
>> experience of information that provides meaning to the otherwise
>> completely abstract "platonic" symbols.
>
>
> I insisted on this well before Chalmers. We are agreeing on this.
> But then you associate consciousness with the experience of information.
> This is what I told you. I can understand the relation between
> consciousness and information content.

Information.  Information content.  Hmmm.  Well, I'm not entirely
sure what you're saying here.  Maybe I don't have a problem with this,
but maybe I do.  Maybe we're really saying the same thing here, but
maybe we're not.  Hm.


>> Note that I don't have Bruno's fear of white rabbits.
>
> Then you disagree with all readers of David Lewis, including David
> Lewis himself, who recognizes this inflation of too many realities as a
> weakness of his modal realism. My point is that the comp constraints
> lead to a solution of that problem, indeed a solution close to the
> quantum Everett solution. But the existence of white rabbits, and thus
> the correctness of comp, remains to be tested.

True, Lewis apparently saw it as a cost, BUT not so high a cost as to
abandon modal realism.  I don't even see it as a high cost; I see it
as a logical consequence.  Again, it's easy to imagine a computer
simulation/virtual reality in which a conscious observer would see
disembodied talking heads and flying pigs.  So it certainly seems
possible for a conscious being to be in a state of observing an
unattached talking head.

Given that it's possible, why wouldn't it be actual?

The only reason to think that it wouldn't be actual is that our
external objectively existing physical universe doesn't have physical
laws that can lead easily to the existence of such talking heads to be
observed.  But once you've abandoned the external universe and
embraced platonism, then where does the constraint against observing
talking heads come from?

Assuming platonism, I can explain why "I" don't see talking heads:
because every possible Kelly is realized, and that includes a Kelly
who doesn't observe disembodied talking heads and who doesn't know
anyone who has ever seen such a head.

So given that my observations aren't in conflict with my theory, I
don't see a problem.  The fact that nothing that I could observe would
ever conflict with my theory is also not particularly troubling to me
because I didn't arrive at my theory as a means of explaining any
particular observed fact about the external universe.

My theory isn't intended to explain the contingent details of what I
observe.  It's intended to explain the fact THAT I subjectively
observe anything at all.

Given that it seems theoretically possible to create a computer
simulation that would manifest any imaginable conscious being
observing any imaginable "world", including schizophrenic beings
observing psychedelic realities, I don't see why you are trying to
constrain the platonic realities that can be experienced to those that
are extremely similar to ours.


> It is just a question of testing a theory. You seem to say something
> like "if the theory predicts that water on the fire will typically boil,
> and experience does not confirm that typicality (water freezes
> regularly), then it means we are just very unlucky". But then all
> theories are correct.

I say there is no water.  There is just our subjective experience of
observing water.  Trying to constrain a Platonic theory of
consciousness so that it matches a particular observed physical
reality seems like a mistake to me.

Is there a limit to what we could experience in a computer simulated
reality?  If not, why would there be a limit to what we could
experience in Platonia?


>> The double-aspect principle stems from the observation that there is a
>> direct isomorphism between certain physically embodied information
>> spaces and certain phenomenal (or experiential) information spaces.
>
> This can be shown false in Quantum theory without collapse, and more
> easily with the comp assumption.
> No problem if you tell me that you reject both Everett and comp.
> Chalmers seems in some places to accept both Everett and comp, indeed.
> He explained to me that he stops at step 3. He believes that after a
> duplication you feel to be simultaneously in both places, even
> assuming comp. I think, and can argue, that this is nonsense. Nobody
> defends this on the list. Are you defending an idea like that?

I included the Chalmers quote because I think it provides a good image
of how abstract information seems to supervene on physical systems.
BUT by quoting the passage I'm not saying that I think that this
appearance of supervenience is the source of consciousness.  I still
buy into the Putnam mapping view that there is no 1-to-1 mapping from
information or com

Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 18:54, Brent Meeker wrote:


>
> I think it is related.  I'm just trying to figure out the implications
> of your theory for the problem of creating artificial, conscious
> intelligences. What I gather from the above is that you think there  
> are
> degrees of consciousness marked by the ability to prove things.


Hmm ... It is more a degree of self-reflexivity, or a degree of
introspective ability. RA, although universal (in the Church-Turing
thesis sense), is a *very* weak theorem prover. RA is quite limited in
its introspection abilities. I am open to the idea that RA could be
conscious, but the interview does not lead to a theory of consciousness.
It is not a lobian machine like PA (= RA + induction). Lobianity
begins with theories weaker than PA though, somewhere between RA and PA,
and Lobianity is persistent: it concerns all sound extensions of PA,
even hyperturing extensions actually.
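
(To fix the notation for readers: the induction axioms that take RA to
PA form an infinite schema, one axiom for each arithmetical formula;
in LaTeX, this is the standard textbook presentation:)

    % PA = RA + the induction schema: for every formula \varphi(x),
    \[
      \bigl(\varphi(0) \land \forall x\,(\varphi(x) \to \varphi(x+1))\bigr)
      \;\to\; \forall x\,\varphi(x)
    \]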

Also, I don't think I have a theory. I work in a very old theory:
mechanism. It is not mine, and I use it because it makes it possible to
use computer science to prove things. Enough things to show mechanism
empirically refutable.
For AUDA you need to accept the Theaetetical approach to knowledge, all
right.
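
(The Theaetetical move, in the notation used elsewhere in this thread,
defines the knower from the prover; this is the "provable(p) & p" that
appears later in the digest. A standard modal rendering:)

    % Knowledge as true justified belief, with B the provability box:
    \[
      K p \;:=\; B p \land p
    \]
    % By construction Kp -> p holds, which Bp alone does not satisfy;
    % applied to the logic G, this definition is known to yield S4Grz.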

I recall that in Smullyan's "Forever Undecided", which introduces the
logic of self-reference G, a nice hierarchy of reasoners is displayed,
up to the Lobian machine.



>  To
> consider another view, for example, John McCarthy thinks there are
> degrees of consciousness marked by having narratives created and
> remembered and meta-narratives.  Either of these ideas is definite
> enough that they could actually be implemented (in contrast to many
> philosophical ideas about consciousness).

It is not bad. PA has the meta-narrative ability, and RA lacks it. You
can see it that way.



> I have some reservation
> about your idea because I know many people that I think are conscious
> but who couldn't prove even the simplest theorem in PA.

Because they lack familiarity with the notations, or they have
some math trauma, or because they are impatient or not interested. But
all human beings, if you motivate them and give them time, can prove
all theorems of PA and, more importantly, believe the truth of those
theorems.

I have to add this last clause, because even RA can prove all theorems
of PA, given that RA is Turing universal. But RA, without becoming PA,
cannot really understand the proofs, like the guy in the Chinese room
who can talk Chinese yet cannot understand his own talk. It is the place
where people easily make a confusion of levels similar to Searle's
confusion (described by Dennett and Hofstadter). I can simulate
Einstein's brain, but this does not make me Einstein. On the contrary,
this makes it possible to discuss with Einstein. It is in that sense that
RA can simulate PA without becoming PA. Likewise, all universal theories
can simulate all effective theories. PA is probably still very simple
compared to any human, except highly mentally disabled persons or
persons in a comatose state, of course.




> Are we to
> suppose they just have a qualitatively different kind of  
> consciousness?

I don't think so, but in the entheogen forums people can discuss ad
infinitum whether under such or such plants people experience a
qualitatively different kind of consciousness. Given how hard it is
just to discuss consciousness, you can understand that this is a bit of
a premature question.

Many estimate that to be conscious is always to be conscious of some
qualia. In that case I could argue that even "me today" already has a
qualitatively different kind of consciousness compared with "me
yesterday".  Now, my opinion (which plays no role in the UDA-
reasoning) is that consciousness can be qualia-independent, and is
something qualitatively stable, as opposed to the content of
consciousness, which can vary a lot.

Now, if you compare RA (non-lobian) and PA (lobian), then it is far
more possible that they have a different kind of consciousness, and
even live in a different kind of physics, as a consequence. RA could
be closer to a "universal consciousness" notion. It would mean that PA
could already be under some illusions ...
I don't know. Real hard questions here.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-23 Thread Brent Meeker

Bruno Marchal wrote:
> On 23 May 2009, at 09:08, Brent Meeker wrote:
>
>
>   
>> But why?  Why not RA without induction?  Is it necessary that there be
>> an infinite schema?  Since you phrase your answer as "I am willing..." is
>> it a matter of your intuition or is it a matter of "degree" of
>> consciousness?
>> 
>
>
> OK. I could have taken RA. But without the induction axioms, RA is
> very poor in provability abilities; it has the consciousness of a lower
> animal, if you want. Its provability logic is very weak with respect
> to self-reference. It cannot prove the arithmetical formula Bp -> BBp
> for any arithmetical p. So it is not even a type 4 reasoner (cf
> Smullyan's Forever Undecided, see my posts on FU), and it cannot know
> its own incompleteness. But it can be considered as conscious. It is
> not self-conscious, unlike the Lobian machine.
>
> Note that Bp -> BBp is true *for* RA, but it is not provable *by* RA.
> Bp -> BBp is true for and provable by PA. Smullyan says that PA, or  
> any G reasoner, is self-aware.
>
> Of course, consciousness (modeled by consistency) is true for PA and
> RA, and provable neither by RA nor by PA (incompleteness).
>
> But all this is not related to the problem you were talking about,  
> which I still don't understand.
>
> Bruno

I think it is related.  I'm just trying to figure out the implications 
of your theory for the problem of creating artificial, conscious 
intelligences. What I gather from the above is that you think there are 
degrees of consciousness marked by the ability to prove things.  To 
consider another view, for example, John McCarthy thinks there are 
degrees of consciousness marked by having narratives created and 
remembered and meta-narratives.  Either of these ideas is definite 
enough that they could actually be implemented (in contrast to many 
philosophical ideas about consciousness).   I have some reservation 
about your idea because I know many people that I think are conscious 
but who couldn't prove even the simplest theorem in PA.  Are we to 
suppose they just have a qualitatively different kind of consciousness?

Brent




Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 09:35, Kelly Harmon wrote:

>
> Okay, below are three passages that I think give a good sense of what
> I mean by "information" when I say that "consciousness is
> information".  The first is from David Chalmers' "Facing up to the
> Problem of Consciousness."  The second is from the SEP article on
> "Semantic Conceptions of Information", and the third is from "Symbol
> Grounding and Meaning:  A comparison of High-Dimensional and Embodied
> Theories of Meaning", by Arthur Glenberg and David Robertson.
>
> So I'm looking at these largely from a static, timeless, platonic
> view.

We agree then. Assuming comp we have no choice in the matter here.




> In my view, there are ungrounded abstract symbols that acquire
> meaning via constraints placed on them by their relationships to other
> symbols.

Absolutely so.




>  The only "grounding" comes from the conscious experience
> that is intrinsic to a particular set of relationships.


Exactly.



> To repeat my
> earlier Chalmers quote, "Experience is information from the inside;
> physics is information from the outside."  It is this subjective
> experience of information that provides meaning to the otherwise
> completely abstract "platonic" symbols.


I insisted on this well before Chalmers. We are agreeing on this.
But then you associate consciousness with the experience of information.
This is what I told you. I can understand the relation between  
consciousness and information content.



>
>
> So I think that something like David Lewis' "modal realism" is true by
> virtue of the fact that all possible sets of relationships are
> realized in Platonia.


We agree. This is explained in detail in "conscience et mécanisme".  
Comp forces modal realism. AUDA just gives the precise modal logics,  
extracted from the theory of the self-referentially correct machine.



>
>
> Note that I don't have Bruno's fear of white rabbits.


Then you disagree with all readers of David Lewis, including David
Lewis himself, who recognizes this inflation of too many realities as a
weakness of his modal realism. My point is that the comp constraints
lead to a solution of that problem, indeed a solution close to the
quantum Everett solution. But the existence of white rabbits, and thus
the correctness of comp, remains to be tested.




> Assuming that
> we are typical observers is fine as a starting point, and is a good
> way to choose between otherwise equivalent explanations, but I don't
> think it should hold a unilateral veto over our final conclusions.  If
> the most reasonable explanation says that our observations aren't
> especially typical, then so be it.  Not everyone can be typical.

It is just a question of testing a theory. You seem to say something
like "if the theory predicts that water on the fire will typically boil,
and experience does not confirm that typicality (water freezes
regularly), then it means we are just very unlucky". But then all
theories are correct.



>
>
> I think the final passage from Glenberg and Robertson (from a paper
> that actually argues against what's being described) gives the best
> sense of what I have in mind, though obviously I'm extrapolating out
> quite a bit from the ideas presented.
>
> Okay, so the passages of interest:
>
> --
>
> David Chalmers:
>
> The basic principle that I suggest centrally involves the notion of
> information. I understand information in more or less the sense of
> Shannon (1948). Where there is information, there are information
> states embedded in an information space. An information space has a
> basic structure of difference relations between its elements,
> characterizing the ways in which different elements in a space are
> similar or different, possibly in complex ways. An information space
> is an abstract object, but following Shannon we can see information as
> physically embodied when there is a space of distinct physical states,
> the differences between which can be transmitted down some causal
> pathway. The states that are transmitted can be seen as themselves
> constituting an information space. To borrow a phrase from Bateson
> (1972), physical information is a difference that makes a difference.
>
> The double-aspect principle stems from the observation that there is a
> direct isomorphism between certain physically embodied information
> spaces and certain phenomenal (or experiential) information spaces.

This can be shown false in Quantum theory without collapse, and more
easily with the comp assumption.
No problem if you tell me that you reject both Everett and comp.
Chalmers seems in some places to accept both Everett and comp, indeed.
He explained to me that he stops at step 3. He believes that after a
duplication you feel to be simultaneously in both places, even
assuming comp. I think, and can argue, that this is nonsense. Nobody
defends this on the list. Are you defending an idea like that?



>
> From the same sort of observations

Re: Consciousness is information?

2009-05-23 Thread John Mikes
I missed the meaning of *'conscious'* as applied in this discussion. *If we
accept* that it means 'responding to information' (used in the widest sense:
in *responding* there is an *absorption* of the result of an observer
moment and a *completing of relations thereof*, and the *information* is the
*absorbed relations*), *then a thermostat is conscious*.
Without such clarification Jason's question is elusive. (I may question the
term "physical universe" as well - as the compilation of aspect-slanted
figments to explain observations we made in select views by select means
(cf. conventional and not-so-conventional science, numbers, Platonist
filters, quantum considerations, theological views, etc.).)

Then Bruno's response below refers to a *fetish* (person? what is this?) -
definitely NOT a computer, but "relative to *ANOTHER(?)* computer". *The
'another' points to similarity.*
It also reverberates with Jason's "*WE*(??)" (Is this 'a person', a
homunculus, or what?) who create a computer, further *segregating* the
'fetish' Bruno refers to from 'a computer'.
*I don't find it ambiguous: I find undefined terms clashing in elusive
meanings.*

Another open spot is the 'conscious robot' that would not become conscious
even by copying someone's BRAIN (which is NOT conscious! - as said).
We still face the "I", the "ME" *UFO* (considered as 'self') that DOES but
IS NOT. - And - is conscious. Whatever that may mean.

Then comes Brent with the reasonable question. I would add: what is
necessary for a 'computation in Platonia' to become a person? Should it pee?
I feel the term Brent asked about is still a select artifact ideation,
APPLICABLE (maybe) to non-computational domains to make it "a person"
(whatever that may be). It is still not "I", the conscious, thinking of it.
The 'conscious' ME is different from a computation with denied consciousness
- as I read.
Replacing the (non-conscious) brain with identical other parts does not
impart the missing conscious quality - unless the replacement IS conscious,
in which case it is NOT a replacement. It is an "exchange to..." - as Brent
correctly points out.  (Leaving open the term 'you - conscious' as a deus ex
machina quale-addition for the replacement).

Just looking through differently colored goggles.

John Mikes






On Sat, May 23, 2009 at 12:39 AM, Brent Meeker wrote:

>
> Bruno Marchal wrote:
> > On 22 May 2009, at 18:25, Jason Resch wrote:
> >
> > ...
> >> Do you believe if we create a computer in this physical
> >> universe that it could be made conscious,
> >>
> >
> > But a computer is never conscious, nor is a brain. Only a person is
> > conscious, and a computer or a brain can only make it possible for a
> > person to be conscious relatively to another computer. So your
> > question is ambiguous.
> > It is not my brain which is conscious, it is me who is conscious.
>
> By "me" do you mean some computation in Platonia?  I'm wondering what
> are the implications of your theory for creating "artificial"
> consciousness.  Since comp starts with the assumption that replacing
> one's brain with functionally  identical units (at some level of detail)
> will make no discernable difference in your experience, it entails that
> a computer that functionally replaces your brain is conscious (conscious
> of being you in fact).  So if I want to build a conscious robot from
> scratch, not by copying someone's brain, what must I do?
>
> Brent
>
> >
>




Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 09:08, Brent Meeker wrote:


>>
> But why?  Why not RA without induction?  Is it necessary that there be
> an infinite schema?  Since you phrase your answer as "I am willing..." is
> it a matter of your intuition or is it a matter of "degree" of
> consciousness?


OK. I could have taken RA. But without the induction axioms, RA is
very poor in provability abilities; it has the consciousness of a lower
animal, if you want. Its provability logic is very weak with respect
to self-reference. It cannot prove the arithmetical formula Bp -> BBp
for any arithmetical p. So it is not even a type 4 reasoner (cf
Smullyan's Forever Undecided, see my posts on FU), and it cannot know
its own incompleteness. But it can be considered as conscious. It is
not self-conscious, unlike the Lobian machine.

Note that Bp -> BBp is true *for* RA, but it is not provable *by* RA.
Bp -> BBp is true for and provable by PA. Smullyan says that PA, or  
any G reasoner, is self-aware.

Of course, consciousness (modeled by consistency) is true for PA and
RA, and provable neither by RA nor by PA (incompleteness).
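
(For reference, the logic of self-reference G, Solovay's provability
logic, nowadays often written GL, can be axiomatized as follows; this
is standard material, not specific to this thread:)

    % G (GL): the modal logic of the provability predicate B of PA.
    % Axioms, on top of classical tautologies:
    \[
      \text{K:} \quad B(p \to q) \to (Bp \to Bq) \qquad
      \text{L\"ob:} \quad B(Bp \to p) \to Bp
    \]
    % Rule of necessitation: from p, infer Bp.
    % Bp -> BBp is then derivable inside G, matching the fact that PA
    % proves it for every arithmetical p while RA does not.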

But all this is not related to the problem you were talking about,  
which I still don't understand.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-23 Thread Kelly Harmon

Okay, below are three passages that I think give a good sense of what
I mean by "information" when I say that "consciousness is
information".  The first is from David Chalmers' "Facing up to the
Problem of Consciousness."  The second is from the SEP article on
"Semantic Conceptions of Information", and the third is from "Symbol
Grounding and Meaning:  A comparison of High-Dimensional and Embodied
Theories of Meaning", by Arthur Glenberg and David Robertson.

So I'm looking at these largely from a static, timeless, platonic
view.  In my view, there are ungrounded abstract symbols that acquire
meaning via constraints placed on them by their relationships to other
symbols.  The only "grounding" comes from the conscious experience
that is intrinsic to a particular set of relationships.  To repeat my
earlier Chalmers quote, "Experience is information from the inside;
physics is information from the outside."  It is this subjective
experience of information that provides meaning to the otherwise
completely abstract "platonic" symbols.

So I think that something like David Lewis' "modal realism" is true by
virtue of the fact that all possible sets of relationships are
realized in Platonia.

Note that I don't have Bruno's fear of white rabbits.  Assuming that
we are typical observers is fine as a starting point, and is a good
way to choose between otherwise equivalent explanations, but I don't
think it should hold a unilateral veto over our final conclusions.  If
the most reasonable explanation says that our observations aren't
especially typical, then so be it.  Not everyone can be typical.

I think the final passage from Glenberg and Robertson (from a paper
that actually argues against what's being described) gives the best
sense of what I have in mind, though obviously I'm extrapolating out
quite a bit from the ideas presented.

Okay, so the passages of interest:

--

David Chalmers:

The basic principle that I suggest centrally involves the notion of
information. I understand information in more or less the sense of
Shannon (1948). Where there is information, there are information
states embedded in an information space. An information space has a
basic structure of difference relations between its elements,
characterizing the ways in which different elements in a space are
similar or different, possibly in complex ways. An information space
is an abstract object, but following Shannon we can see information as
physically embodied when there is a space of distinct physical states,
the differences between which can be transmitted down some causal
pathway. The states that are transmitted can be seen as themselves
constituting an information space. To borrow a phrase from Bateson
(1972), physical information is a difference that makes a difference.

The double-aspect principle stems from the observation that there is a
direct isomorphism between certain physically embodied information
spaces and certain phenomenal (or experiential) information spaces.
From the same sort of observations that went into the principle of
structural coherence, we can note that the differences between
phenomenal states have a structure that corresponds directly to the
differences embedded in physical processes; in particular, to those
differences that make a difference down certain causal pathways
implicated in global availability and control. That is, we can find
the same abstract information space embedded in physical processing
and in conscious experience.
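
(A minimal numerical illustration of information "in more or less the
sense of Shannon": entropy counts the distinguishable differences in an
information space, in bits. A Python sketch, illustrative only:)

    from math import log2

    def entropy(probs):
        # H = -sum(p * log2(p)): the average information, in bits, of
        # one state drawn from a space of distinct alternatives.
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # 1.0 bit: a single binary difference
    print(entropy([0.25] * 4))   # 2.0 bits: a four-way difference
    print(entropy([0.9, 0.1]))   # ~0.47 bits: a lopsided difference

A state carries information only insofar as it differs from its
alternatives: Bateson's "difference that makes a difference", in
numerical form.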

--

SEP:

Information cannot be dataless but, in the simplest case, it can
consist of a single datum.  A datum is reducible to just a lack of
uniformity (diaphora is the Greek word for “difference”), so a general
definition of a datum is:

The Diaphoric Definition of Data (DDD):

A datum is a putative fact regarding some difference or lack of
uniformity within some context.  [In particular data as diaphora de
dicto, that is, lack of uniformity between two symbols, for example
the letters A and B in the Latin alphabet.]

--

Glenberg and Robertson:

Meaning arises from the syntactic combination of abstract, amodal
symbols that are arbitrarily related to what they signify.  A new form
of the abstract symbol approach to meaning affords the opportunity to
examine its adequacy as a psychological theory of meaning.  This form
is represented by two theories of linguistic meaning (that is, the
meaning of words, sentences, and discourses), both of which take
advantage of the mathematics of high-dimensional spaces. The
Hyperspace Analogue to Language (HAL; Burgess & Lund, 1997) posits
that the meaning of a word is its vector representation in a space
based on 140,000 word–word co-occurrences. Latent Semantic Analysis
(LSA; Landauer & Dumais, 1997) posits that the meaning of a word is
its vector representation in a space with approximately 300 dimensions
derived from a space with many more dimensions. The vector elements
found in both theories are just the sort of abstract features that ar
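
(To make the HAL/LSA mechanism concrete: a toy Python sketch of a
co-occurrence space, where each word's "meaning" is nothing but its
vector of relationships to the other symbols, with no grounding
anywhere. The corpus is made up for illustration; the real models use
large corpora and dimensionality reduction:)

    from collections import Counter
    from itertools import combinations
    from math import sqrt

    corpus = ["the cat chased the mouse",
              "the dog chased the cat",
              "the mouse ate the cheese"]

    # Count how often each pair of words shares a sentence.
    vocab = sorted({w for line in corpus for w in line.split()})
    cooc = {w: Counter() for w in vocab}
    for line in corpus:
        for a, b in combinations(line.split(), 2):
            cooc[a][b] += 1
            cooc[b][a] += 1

    # A word's meaning is its count vector; similarity is the cosine.
    def cosine(u, v):
        dot = sum(cooc[u][w] * cooc[v][w] for w in vocab)
        norm = lambda x: sqrt(sum(c * c for c in cooc[x].values()))
        return dot / (norm(u) * norm(v))

    print(cosine("cat", "dog"))      # ~0.87: similar roles, close vectors
    print(cosine("cat", "cheese"))   # ~0.78: farther apart

Even in this toy, "cat" ends up nearer to "dog" than to "cheese"
purely from the pattern of relations between symbols, which is the
idea being extrapolated from above.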

Re: Consciousness is information?

2009-05-23 Thread Brent Meeker

Bruno Marchal wrote:
> On 23 May 2009, at 06:39, Brent Meeker wrote:
>
>   
>> Bruno Marchal wrote:
>> 
>>> On 22 May 2009, at 18:25, Jason Resch wrote:
>>>
>>> ...
>>>   
 Do you believe if we create a computer in this physical
 universe that it could be made conscious,

 
>>> But a computer is never conscious, nor is a brain. Only a person is
>>> conscious, and a computer or a brain can only make it possible for a
>>> person to be conscious relatively to another computer. So your
>>> question is ambiguous.
>>> It is not my brain which is conscious, it is me who is conscious.
>>>   
>> By "me" do you mean some computation in Platonia?  I'm wondering what
>> are the implications of your theory for creating "artificial"
>> consciousness.  Since comp starts with the assumption that replacing
>> one's brain with functionally  identical units (at some level of  
>> detail)
>> will make no discernable difference in your experience, it entails  
>> that
>> a computer that functionally replaces your brain is conscious  
>> (conscious
>> of being you in fact).  So if I want to build a conscious robot from
>> scratch, not by copying someone's brain, what must I do?
>> 
>
>
> I don't see the problem, besides the obvious and usual difficulties of  
> artificial intelligence.
> Actually if you implement a theorem prover for Peano Arithmetic (=  
> Robinson Arithmetic + the induction axioms) I am willing to say that  
> you have built a conscious entity.
>   
But why?  Why not RA without induction?  Is it necessary that there be 
an infinite schema?  Since you phrase your answer as "I am willing..." is 
it a matter of your intuition or is it a matter of "degree" of 
consciousness?

Brent


> It is the entity that I interview (thanks to the work of Gödel, Löb
> and Solovay).
> The person related to it, which I identify with the knower (obeying
> the theaetetical logic of "provable(p) & p"),
> exists simultaneously in all the possible relative implementations of
> it in Platonia or in UD* (the universal deployment).
> I mean it is the same for a copy of me, or an intelligent robot built
> from scratch. Both "persons" exist in an atemporal and aspatial way in
> Platonia, and will appear concrete to any entity belonging to some
> computation where they can manifest themselves.
> Like numbers. 17 exists in Platonia, but 17 has multiple
> implementations in many computations in Platonia.
>
> I guess I miss something because I don't see any problem here. You may
> elaborate perhaps. We are at the seventh step here. Are you sure you
> grasp the six preceding steps?
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
> >
>
>   

