Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Jason Resch
On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg wrote:

> On Jan 14, 4:41 pm, Jason Resch  wrote:
> > On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg wrote:
> >
> > > Thought I'd throw this out there. If computationalism argues that
> > > zombies can't exist,
> >
> > I think the two ideas "zombies are impossible" and computationalism are
> > independent.  Where you might say they are related is that a disbelief in
> > zombies yields a strong argument for computationalism.
>
> I don't think that it's possible to say that any two ideas 'are'
> independent from each other.


Okay.  Perhaps 'independent' was not an ideal term, but computationalism is
at least not dependent on an argument against zombies, as far as I am aware.


> All ideas can be related through semantic
> association, however distant. As far as your point though, of course I
> see the opposite relation - admitting even the possibility of
> zombies suggests computationalism is founded on illusion, while a
> disbelief in zombies gives no more support for computationalism than
> it does for materialism or panpsychism.
>

If one accepts that zombies are impossible, then to reject computationalism
requires also rejecting the possibility of Strong AI (
https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI ).


>
> >
> > > therefore anything that we cannot distinguish
> > > from a conscious person must be conscious, that also means that it is
> > > impossible to create something that acts like a person which is not a
> > > person. Zombies are not Turing emulable.
> >
> > I think there is a subtle difference in meaning between "it is impossible
> > to create something that acts like a person which is not a person" and
> > saying "Zombies are not Turing emulable".  It is important to remember
> that
> > the non-possibility of zombies doesn't imply a particular person or thing
> > cannot be emulated, rather it means there is a particular consequence of
> > certain Turing emulations which is unavoidable, namely the
> > consciousness/mind/person.
>
> That's true, in the sense that emulable can only refer to a specific
> natural and real process being emulated rather than a fictional one.
> You have a valid point that the word emulable isn't the best term, but
> it's a red herring since the point I was making is that it would not
> be possible to avoid creating sentience in any sufficiently
> sophisticated cartoon, sculpture, or graphic representation of a
> person. Call it emulation, simulation, synthesis, whatever, the result
> is the same.


I think you and I have different mental models for what is entailed by
"emulation, simulation, synthesis".  Cartoons, sculptures, recordings,
projections, and so on, don't necessarily compute anything (or at least,
what they might depict as being computed can have little or no relation to
what is actually computed by said cartoon, sculpture, recording,
projection...).  For actual computation you need counterfactual conditions.
A cartoon depicting an AND gate is not required to behave as a genuine AND
gate would, and flashing a few frames depicting what such an AND gate might
do is not equivalent to the logical decision of an AND gate.
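
A minimal sketch of that contrast, with a toy Boolean gate and an arbitrary
recorded frame sequence (all values here are illustrative, not from the post):

import itertools

# A genuine AND gate: its output is counterfactually sensitive to its inputs.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

# A "cartoon" of an AND gate: a fixed replay of frames that merely depicts
# what some AND gate once did. It ignores its inputs entirely.
recorded_frames = [False, False, False, True]

def cartoon_and_gate(frame_index: int) -> bool:
    return recorded_frames[frame_index % len(recorded_frames)]

# The gate answers correctly for every possible input; the replay only
# "gets it right" when events happen to line up with the recording.
for a, b in itertools.product([False, True], repeat=2):
    assert and_gate(a, b) == (a and b)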


> You can't make a machine that acts like a person without
> it becoming a person automatically. That clearly is ridiculous to me.
>

What do you think about Strong AI, do you think it is possible?  If so, if
the program that creates a strong AI were implemented on various
computational substrates, silicon, carbon nanotubes, pen and paper, pipes
and water, do you think any of them would yield a mind that is conscious?
If yes, do you think the content of that AI's consciousness would differ
depending on the substrate?  And finally, if you believe at least some
substrates would be conscious, are there any cases where the AI would
respond or behave differently on one substrate or the other (in terms of
the Strong AI program's output) when given equivalent input?


>
> >
> >
> >
> > > If we run the zombie argument backwards then, at what substitution
> > > level of zombiehood does a (completely possible) simulated person
> > > become an (non-Turing emulable) unconscious puppet? How bad of a
> > > simulation does it have to be before becoming an impossible zombie?
> >
> > > This to me reveals an absurdity of arithmetic realism. Pinocchio the
> > > boy is possible to simulate mechanically, but Pinocchio the puppet is
> > > impossible. Doesn't that strike anyone else as an obvious deal breaker?
> >
> > Not every Turing emulable process is necessarily conscious.
>
> Why not? What makes them unconscious?


My guess is that it would be a lack of sophistication.  For example, one
program might simply consist of a for loop iterating from 1 to 10.  Is this
program conscious?  I don't know, but it almost certainly isn't conscious
in the way you or I are.
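
For concreteness, the trivial program in question might be nothing more than
this (a deliberately minimal sketch):

# The entire program: a for loop iterating from 1 to 10.
for i in range(1, 11):
    print(i)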


> You can't draw the line in one
> direction but not the other. If you say that anything that seems to
> act alive well enough must be alive, then you also have to say that
> anything that does not seem conscious may just be poorly programmed.

Re: Question about PA and 1p

2012-01-14 Thread meekerdb

On 1/14/2012 10:32 PM, Stephen P. King wrote:

On 1/15/2012 1:07 AM, meekerdb wrote:

On 1/14/2012 6:21 PM, Stephen P. King wrote:

On 1/14/2012 4:05 PM, meekerdb wrote:

On 1/14/2012 10:41 AM, Stephen P. King wrote:
I suppose that that is the case, but how do mathematical entities implement 
themselves other than via physical processes? We seem to be thinking that this is a 
solvable "Chicken and Egg" problem and I argue that we cannot use the argument of 
reduction to solve it. We must have both the physical and the mental, not at the 
primitive level of existence to be sure, but at the level where they have meaning. 


Suppose there are characters in a computer game that have very sophisticated AI. 
Don't events in the game have meaning for them?  The meaning is implicit in the 
actions and reactions.


Brent

Hi Brent,

Let us consider your idea carefully as you are asking an important question, I 
think. Those NPCs (non-player characters): is their behavior the result of a finite 
list of 'if X then Y' statements or equivalents? 


Dunno. If I were writing it I'd probably throw in a little randomness as well as 
functions with self-modification to allow learning.


How would these not be included in the finite list of if-then rules?
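
A purely illustrative sketch of the point at issue: an NPC controller written
as if-then rules, with the randomness and self-modification Brent mentions
folded in; the state fields and thresholds are assumptions, not anything from
the game being discussed.

import random

state = {"health": 10, "aggression": 0.5}  # hypothetical NPC state

def npc_step(sees_player: bool) -> str:
    # A finite list of if-then rules, one of which consults a random draw.
    if state["health"] < 3:
        return "flee"
    if sees_player and random.random() < state["aggression"]:
        return "attack"
    return "wander"

def learn(outcome: str) -> None:
    # "Self-modification" here is just a rule that updates a parameter,
    # so it is still expressible as an if-then rule over a larger state.
    if outcome == "won_fight":
        state["aggression"] = min(1.0, state["aggression"] + 0.1)
    elif outcome == "lost_fight":
        state["aggression"] = max(0.0, state["aggression"] - 0.1)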



Where does the possibility of "to whom-ness" lie for that list of if then statements? 


I don't know what "to whom-ness" means.


Speculate what I might mean...


Why speculate when I can ask you?





How does a pre-specified list of properties encode a "sense of self"? 


I'm not sure what you mean by "sense of self".  The AI would encode the position and 
state of the character, including values, plans, self evaluation, etc.
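
One conventional way such a character state might be laid out, sketched with
assumed field names:

character_state = {
    "position": (12.0, 4.5),                       # where the character is
    "values": {"curiosity": 0.7, "caution": 0.4},  # what it "cares about"
    "plan": ["find_door", "open_door", "exit_room"],
    "self_evaluation": {"health": 0.9, "confidence": 0.6},
}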


How do you encode a map of "L has M properties, including location" in a way that is 
updatable? Or equivalently, for a system that is fixed (with respect to virtual location), how do you 
encode changes in the environment with respect to the system such that there is a finite 
upper bound on the recursions of maps within maps? 


I'm not sure what you're talking about?  What's a "map of L"?  Are you asking how to write 
an AI program?  Whether to use hash tables or matrices or linked lists?


However I encode changes in the environment, making the recursions finite is pretty much 
taken care of by hardware space and time limits.


It seems to me that a "sense of self" is at least some form of model that quantifies the 
distinctions of what it is versus what it is not. Some set membership function would 
work, maybe. But this too seems to be encodable in if then rules...
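
A toy version of that set-membership idea, with made-up component names:

# A "self model" as a membership function: classify any identifier as part
# of the system or not -- an if-then style distinction between what it is
# and what it is not.
SELF_COMPONENTS = {"body", "sensors", "memory", "planner"}

def is_self(entity: str) -> bool:
    return entity in SELF_COMPONENTS

assert is_self("planner") and not is_self("other_agent")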




Forget the anthropomorphic stuff, let's focus on the 1p stuff here. How do we bridge 
from the pre-specified list of if-thens to a coherent notion of 1p?


By making the AI behave like a person.  How do you know there's a gap to be 
bridged?


What is a "person? Beware of circular definitions! 


I am and I guess you are.

Brent

I am not assuming a gap, I am just trying to reason through this thought experiment with 
you.


Onward!

Stephen



Brent








Re: Question about PA and 1p

2012-01-14 Thread Stephen P. King

On 1/15/2012 1:07 AM, meekerdb wrote:

On 1/14/2012 6:21 PM, Stephen P. King wrote:

On 1/14/2012 4:05 PM, meekerdb wrote:

On 1/14/2012 10:41 AM, Stephen P. King wrote:
I suppose that that is the case, but how do mathematical 
entities implement themselves other than via physical processes? We 
seem to be thinking that this is a solvable "Chicken and Egg" 
problem and I argue that we cannot use the argument of reduction to 
solve it. We must have both the physical and the mental, not at the 
primitive level of existence to be sure, but at the level where 
they have meaning. 


Suppose there are characters in a computer game that have very 
sophisticated AI. Don't events in the game have meaning for them?  
The meaning is implicit in the actions and reactions.


Brent

Hi Brent,

Let us consider your idea carefully as you are asking an 
important question, I think. Those NPCs (non-player characters): is 
their behavior the result of a finite list of 'if X then Y' statements 
or equivalents? 


Dunno. If I were writing it I'd probably throw in a little randomness 
as well as functions with self-modification to allow learning.


How would these not be included in the finite list of if-then rules?



Where does the possibility of "to whom-ness" lie for that list of if 
then statements? 


I don't know what "to whom-ness" means.


Speculate what I might mean...



How does a pre-specified list of properties encode a "sense of self"? 


I'm not sure what you mean by "sense of self".  The AI would encode 
the position and state of the character, including values, plans, self 
evaluation, etc.


How do you encode a map of "L has M properties, including location" in 
a way that is updatable? Or equivalently, for a system that is fixed (with respect to 
virtual location), how do you encode changes in the environment with 
respect to the system such that there is a finite upper bound on the 
recursions of maps within maps? It seems to me that a "sense of self" is 
at least some form of model that quantifies the distinctions of what it 
is versus what it is not. Some set membership function would work, 
maybe. But this too seems to be encodable in if then rules...




Forget the anthropomorphic stuff, let's focus on the 1p stuff here. 
How do we bridge from the pre-specified list of if-thens to a 
coherent notion of 1p?


By making the AI behave like a person.  How do you know there's a gap 
to be bridged?


What is a "person? Beware of circular definitions! I am not 
assuming a gap, I am just trying to reason through this thought 
experiment with you.


Onward!

Stephen



Brent






Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread meekerdb

On 1/14/2012 7:44 PM, Craig Weinberg wrote:

On Jan 14, 4:55 pm, meekerdb  wrote:

On 1/14/2012 11:38 AM, Craig Weinberg wrote:


Thought I'd throw this out there. If computationalism argues that
zombies can't exist, therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.

No. It only follows that zombies are not Turing emulable unless people are too. 
But why
would you suppose people are not emulable?

No, I'm assuming for the sake of argument that people are Turing
emulable, but my point is that the proposition that zombies are
impossible means that no Turing simulation of consciousness is
possible that is not actually conscious. It means that I can't make a
Pinocchio program because the 'before' puppet and the 'after' boy must
be the same thing - a boy. There can be no sophisticated, interactive
puppets in computationalism.


Right, not if they are as sophisticated and interactive as humans and animals we take to 
be conscious.


Brent




Re: Question about PA and 1p

2012-01-14 Thread meekerdb

On 1/14/2012 6:21 PM, Stephen P. King wrote:

On 1/14/2012 4:05 PM, meekerdb wrote:

On 1/14/2012 10:41 AM, Stephen P. King wrote:
I suppose that that is the case, but how do mathematical entities implement 
themselves other than via physical processes? We seem to be thinking that this is a 
solvable "Chicken and Egg" problem and I argue that we cannot use the argument of 
reduction to solve it. We must have both the physical and the mental, not at the 
primitive level of existence to be sure, but at the level where they have meaning. 


Suppose there are characters in a computer game that have very sophisticated AI. Don't 
events in the game have meaning for them?  The meaning is implicit in the actions and 
reactions.


Brent

Hi Brent,

Let us consider your idea carefully as you are asking an important question, I 
think. Those NPCs (non-player characters): is their behavior the result of a finite list 
of 'if X then Y' statements or equivalents? 


Dunno. If I were writing it I'd probably throw in a little randomness as well as functions 
with self-modification to allow learning.


Where does the possibility of "to whom-ness" lie for that list of if then statements? 


I don't know what "to whom-ness" means.

How does a pre-specified list of properties encode a "sense of self"? 


I'm not sure what you mean by "sense of self".  The AI would encode the position and state 
of the character, including values, plans, self evaluation, etc.


Forget the anthropomorphic stuff, let's focus on the 1p stuff here. How do we bridge 
from the pre-specified list of if-thens to a coherent notion of 1p?


By making the AI behave like a person.  How do you know there's a gap to be 
bridged?

Brent




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Craig Weinberg
On Jan 14, 4:55 pm, meekerdb  wrote:
> On 1/14/2012 11:38 AM, Craig Weinberg wrote:
>
> > Thought I'd throw this out there. If computationalism argues that
> > zombies can't exist, therefore anything that we cannot distinguish
> > from a conscious person must be conscious, that also means that it is
> > impossible to create something that acts like a person which is not a
> > person. Zombies are not Turing emulable.
>
> No. It only follows that zombies are not Turing emulable unless people are 
> too. But why
> would you suppose people are not emulable?

No, I'm assuming for the sake of argument that people are Turing
emulable, but my point is that the proposition that zombies are
impossible means that no Turing simulation of consciousness is
possible that is not actually conscious. It means that I can't make a
Pinocchio program because the 'before' puppet and the 'after' boy must
be the same thing - a boy. There can be no sophisticated, interactive
puppets in computationalism.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Craig Weinberg
On Jan 14, 4:41 pm, Jason Resch  wrote:
> On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg wrote:
>
> > Thought I'd throw this out there. If computationalism argues that
> > zombies can't exist,
>
> I think the two ideas "zombies are impossible" and computationalism are
> independent.  Where you might say they are related is that a disbelief in
> zombies yields a strong argument for computationalism.

I don't think that it's possible to say that any two ideas 'are'
independent from each other. All ideas can be related through semantic
association, however distant. As far as your point though, of course I
see the opposite relation - admitting even the possibility of
zombies suggests computationalism is founded on illusion, while a
disbelief in zombies gives no more support for computationalism than
it does for materialism or panpsychism.

>
> > therefore anything that we cannot distinguish
> > from a conscious person must be conscious, that also means that it is
> > impossible to create something that acts like a person which is not a
> > person. Zombies are not Turing emulable.
>
> I think there is a subtle difference in meaning between "it is impossible
> to create something that acts like a person which is not a person" and
> saying "Zombies are not Turing emulable".  It is important to remember that
> the non-possibility of zombies doesn't imply a particular person or thing
> cannot be emulated, rather it means there is a particular consequence of
> certain Turing emulations which is unavoidable, namely the
> consciousness/mind/person.

That's true, in the sense that emulable can only refer to a specific
natural and real process being emulated rather than a fictional one.
You have a valid point that the word emulable isn't the best term, but
it's a red herring since the point I was making is that it would not
be possible to avoid creating sentience in any sufficiently
sophisticated cartoon, sculpture, or graphic representation of a
person. Call it emulation, simulation, synthesis, whatever, the result
is the same. You can't make a machine that acts like a person without
it becoming a person automatically. That clearly is ridiculous to me.

>
>
>
> > If we run the zombie argument backwards then, at what substitution
> > level of zombiehood does a (completely possible) simulated person
> > become an (non-Turing emulable) unconscious puppet? How bad of a
> > simulation does it have to be before becoming an impossible zombie?
>
> > This to me reveals an absurdity of arithmetic realism. Pinocchio the
> > boy is possible to simulate mechanically, but Pinocchio the puppet is
> > impossible. Doesn't that strike anyone else as an obvious deal breaker?
>
> Not every Turing emulable process is necessarily conscious.

Why not? What makes them unconscious? You can't draw the line in one
direction but not the other. If you say that anything that seems to
act alive well enough must be alive, then you also have to say that
anything that does not seem conscious may just be poorly programmed.

Craig




Re: Question about PA and 1p

2012-01-14 Thread Stephen P. King

On 1/14/2012 4:05 PM, meekerdb wrote:

On 1/14/2012 10:41 AM, Stephen P. King wrote:
I suppose that that is the case, but how do mathematical entities 
implement themselves other than via physical processes? We seem to be 
thinking that this is a solvable "Chicken and Egg" problem and I 
argue that we cannot use the argument of reduction to solve it. We 
must have both the physical and the mental, not at the primitive 
level of existence to be sure, but at the level where they have meaning. 


Suppose there are characters in a computer game that have very 
sophisticated AI. Don't events in the game have meaning for them?  The 
meaning is implicit in the actions and reactions.


Brent

Hi Brent,

Let us consider your idea carefully as you are asking an important 
question, I think. Those NPCs (non-player characters): is their behavior 
the result of a finite list of 'if X then Y' statements or equivalents? 
Where does the possibility of "to whom-ness" lie for that list of if 
then statements? How does a pre-specified list of properties encode a 
"sense of self"? Forget the anthropomorphic stuff, let's focus on the 1p 
stuff here. How do we bridge from the pre-specified list of if-thens 
to a coherent notion of 1p?


Onward!

Stephen




Re: JOINING Post and On measure alteration mechanisms and other practical tests for COMP

2012-01-14 Thread Russell Standish
On Sat, Jan 07, 2012 at 07:02:52AM +0200, acw wrote:
> On 1/6/2012 18:57, Bruno Marchal wrote:
> >
> >On 05 Jan 2012, at 11:02, acw wrote:
> 
> Thanks for replying. I was worried my post was too big and few
> people will bother reading it due to size. I hope to read your
> opinion on the viability of the experiment I presented in my
> original post.

Any chance you could break it up into smaller digestible pieces?

> 
> >
> >>
> >>
> >>To Bruno Marchal:
> >>
> >>Do you plan on ever publishing your thesis in english? My french is a
> >>bit rusty and it would take a rather long time to walk through it,
> >>however I did read the SANE and CC&Q papers, as well as a few others.
> >
> >I think that SANE is enough, although some people push me to submit to
> >some more public journal. It is not yet clear if physicists or logicians
> >will understand. Physicists ask the good questions but don't have the
> >logical tools. Logicians have the right tools, but are not really
> >interested in the applied question. By tradition modern logicians
> >despise their philosophical origin. Some personal contingent problems
> >slow me down, too. Don't want to bore you with this.
> 
> If it's sufficient, I'll just have to read the right books to better
> understand AUDA, as it is now, I understood some parts, but also had
> trouble connecting some ideas in the AUDA.
> 
> >Maybe I should write a book. There is, on my url, a long version of the
> >thesis in french: "conscience et mécanisme", with all details, but then
> >it is 700 pages long, and even there, non-logicians do not grasp the
> >logic. It is a pity, but this kind of work reveals the abyssal gap
> >between logicians and physicists, and the Penrose misunderstanding of
> >Gödel's theorem has frightened physicists away from taking any further
> >look. To defend the thesis it took me more time to explain elementary
> >logic and computer science than philosophy of mind.
> >
> 
> A book would surely appeal to a larger audience, but a paper which
> only mentions the required reading could also be enough, although in
> the latter case fewer people would be willing to spend the time to
> understand it.

There is a project underway to translate "Secret de l'amibe" into
English, which IMHO is an even better introduction to the topic than
Bruno's theses (a lot of technical detail has been suppressed to make
the central ideas digestible). We're about half way through at present
- it's a volunteer project though, so it will probably be another year
or so before it is done.

> 
> >>
> >>Does anyone have a complete downloadable archive of this mailing list,
> >>besides the web-accessible google groups or nabble one?
> >>Google groups seems to badly group posts together and generates some
> >>duplicates for older posts.
> >
> >I agree. Google groups are not practical. The first old archive
> >(Escribe) was very nice; but as with all software, archiving gets worse
> >with time. Nabble is already better, and I don't know if there are
> >others. Note also that the everything list, maintained by Wei Dai, has
> >been running for a long time, so the total archive must be rather
> >huge. Thanks to Wei Dai for maintaining the list, even though the ASSA people
> >(Hal Finney, Wei Dai in some posts, Schmidhuber, ...) seem to have quit
> >after losing the argument with the RSSA people. Well, to be sure, Russell
> >Standish still uses ASSA, it seems to me, and I have always defended the
> >idea that ASSA is indeed not completely nonsensical, although it
> >concerns the geography more than the physics, in the comp frame.
> >
> If someone from those early times still has the posts, it might be
> nice if they decided to post an archive (such as a mailer spool).
> For large Usenet groups, it's not unusual for people to have
> personal archives, even from 1980's and earlier.
> 

I have often thought this would be a very useful resource - sadly I
never kept my own archive. It would probably be a good idea to write a
webbot / spider to download the contents of the archives as they currently exist.

> I had no idea that was the reason I don't see them post
> anymore (when I was looking at older posts, I saw they used to post
> here).
> 

For most people, the everything list is a side interest, and other
priorities and interests will interfere with participation. Bruno is
one of the few people who has dedicated his life to this topic, so one
shouldn't be too surprised if other people leave the list out of exhaustion :).

> As for losing the  "RSSA vs ASSA" debate, what was the conclusive
> argument that tilts the favor toward RSSA (if it's too long, linking
> to the thread will do)? In my personal opinion, I used to initially
> consider ASSA as generally true, because assuming continuity of
> consciousness is a stronger hypothesis, despite being 'felt' from
> the inside, but then I realized that if I'm assuming
> consciousness/mind, I might as well assume continuity as well (from
> the perspective of the observe

Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread meekerdb

On 1/14/2012 11:38 AM, Craig Weinberg wrote:

Thought I'd throw this out there. If computationalism argues that
zombies can't exist, therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.


No. It only follows that zombies are not Turing emulable unless people are too. But why 
would you suppose people are not emulable?


Brent



If we run the zombie argument backwards then, at what substitution
level of zombiehood does a (completely possible) simulated person
become an (non-Turing emulable) unconscious puppet? How bad of a
simulation does it have to be before becoming an impossible zombie?

This to me reveals an absurdity of arithmetic realism. Pinocchio the
boy is possible to simulate mechanically, but Pinocchio the puppet is
impossible. Doesn't that strike anyone else as an obvious deal breaker?






Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Jason Resch
On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg wrote:

> Thought I'd throw this out there. If computationalism argues that
> zombies can't exist,


I think the two ideas "zombies are impossible" and computationalism are
independent.  Where you might say they are related is that a disbelief in
zombies yields a strong argument for computationalism.


> therefore anything that we cannot distinguish
> from a conscious person must be conscious, that also means that it is
> impossible to create something that acts like a person which is not a
> person. Zombies are not Turing emulable.
>

I think there is a subtle difference in meaning between "it is impossible
to create something that acts like a person which is not a person" and
saying "Zombies are not Turing emulable".  It is important to remember that
the non-possibility of zombies doesn't imply a particular person or thing
cannot be emulated, rather it means there is a particular consequence of
certain Turing emulations which is unavoidable, namely the
consciousness/mind/person.



>
> If we run the zombie argument backwards then, at what substitution
> level of zombiehood does a (completely possible) simulated person
> become an (non-Turing emulable) unconscious puppet? How bad of a
> simulation does it have to be before becoming an impossible zombie?
>
> This to me reveals an absurdity of arithmetic realism. Pinocchio the
> boy is possible to simulate mechanically, but Pinocchio the puppet is
> impossible. Doesn't that strike anyone else as an obvious deal breaker?
>


Not every Turing emulable process is necessarily conscious.

Jason




Re: Question about PA and 1p

2012-01-14 Thread meekerdb

On 1/14/2012 10:41 AM, Stephen P. King wrote:
I suppose that that is the case, but how do mathematical entities implement 
themselves other than via physical processes? We seem to be thinking that this is a 
solvable "Chicken and Egg" problem and I argue that we cannot use the argument of 
reduction to solve it. We must have both the physical and the mental, not at the 
primitive level of existence to be sure, but at the level where they have meaning. 


Suppose there are characters in a computer game that have very sophisticated AI. Don't 
events in the game have meaning for them?  The meaning is implicit in the actions and 
reactions.


Brent


This is why I argue for a form of dualism that transforms into a neutral monism, like 
that of Russell, when taken to the level of the ding an sich. At the level of the ding an sich, 
difference itself vanishes, and thus to argue that matter or number is primitive is a 
moot point. We must be careful that we are not collapsing the levels in our thinking 
about this. 





Consciousness Easy, Zombies Hard

2012-01-14 Thread Craig Weinberg
Thought I'd throw this out there. If computationalism argues that
zombies can't exist, therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.

If we run the zombie argument backwards then, at what substitution
level of zombiehood does a (completely possible) simulated person
become an (non-Turing emulable) unconscious puppet? How bad of a
simulation does it have to be before becoming an impossible zombie?

This to me reveals an absurdity of arithmetic realism. Pinocchio the
boy is possible to simulate mechanically, but Pinocchio the puppet is
impossible. Doesn't that strike anyone else as an obvious deal breaker?




Re: An analogy for Qualia

2012-01-14 Thread Stephen P. King

On 1/14/2012 1:15 PM, Evgenii Rudnyi wrote:

On 14.01.2012 18:12 John Clark said the following:

On Fri, Jan 13, 2012  meekerdb  wrote:


There is no way consciousness can have a direct Darwinian
advantage

so it must be a byproduct of something that does have that virtue,
and the obvious candidate is intelligence.



That's not so clear since we don't know exactly what is the
relation of consciousness to intelligence.  For a social animal
having an internal model of ones self and being able to model the
thought processes of others has obvious reproductive advantage.



To do any one of the things you suggest would require intelligence,
and indeed there is some evidence that in general social animals tend
to have a larger brain than similar species that are not social. But
at any rate we both seem to agree that Evolution can only see
behavior, so consciousness must be a byproduct of some sort of
complex behavior. Thus the Turing Test must be valid not only for
intelligence but for consciousness too.


How would you generalize the Turing Test for consciousness?

Evgenii


John K Clark




Hi,

Perhaps we can generalize the Turing test by insisting on questions 
that would require for their answer computational resources in excess of 
what would be available to a computer + power supply in a small room. 
Think of the Bekenstein bound. But the Turing Test 
is a bit of an oxymoron because it is impossible to prove the existence 
of something that is solely 1p. There is no 3p of consciousness. I 
recall Leibniz's discussion of this...
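
As a rough, back-of-the-envelope illustration of the resource bound Stephen
gestures at, here is a sketch using the Bekenstein bound
I <= 2*pi*R*E / (hbar*c*ln 2); the room radius and machine mass are assumed
values chosen only for scale, not figures from the post.

import math

hbar = 1.0546e-34   # J*s
c = 2.998e8         # m/s
R = 2.0             # radius enclosing the computer + power supply, metres (assumed)
M = 100.0           # mass of the machine and its supply, kg (assumed)
E = M * c**2        # rest-mass energy, J

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"{bits:.2e} bits")  # on the order of 1e45 bits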


Onward!

Stephen




Re: Question about PA and 1p

2012-01-14 Thread Stephen P. King

Hi David,

On 1/14/2012 12:51 PM, David Nyman wrote:

On 14 January 2012 16:50, Stephen P. King  wrote:


The problem is that mathematics cannot represent matter other than by
invariance with respect to time, etc. absent an interpreter.

Sure, but do you mean to say that the interpreter must be physical?


What else would have such properties? If a system has no physical 
manifestations, how can we even know that it exists?



I don't see why.


What alternatives are there? Free floating minds? What means would 
exist to allow them to distinguish themselves from what they are not?



  And yet, as you say, the need for interpretation is
unavoidable.


Not just interpretation but something more basic. There is a 
requirement of communicability. If I cannot communicate an idea to you, 
can you even consider its properties or truth valuation?



  Now, my understanding of Bruno, after some fairly close
questioning (which may still leave me confused, of course) is that the
elements of his arithmetical ontology are strictly limited to numbers
(or their equivalent) + addition and multiplication.


And the entire time that this questioning and thinking was going on 
there were physical systems in the background doing their thing of 
manifesting patterns invariant to changes of some particular physical 
property, but that they were there and necessary is my point. We cannot 
even have digital substitution if there is no material to substitute. 0 
replaced by 0 is still 0.



  This emerged
during discussion of macroscopic compositional principles implicit in
the interpretation of micro-physical schemas; principles which are
rarely understood as being epistemological in nature.


This is where Bruno's result is important and I do not wish to 
detract anything at all from those epistemological implications, but 
that we can completely abstract away the "stuff" that acts at least as 
an interface between our minds as such is to cut our minds loose from 
even the mere possibility of determining what one is and what one is not.



Hence, strictly
speaking, even the ascription of the notion of computation to
arrangements of these bare arithmetical elements assumes further
compositional principles and therefore appeals to some supplementary
epistemological "interpretation".


I suppose that that is the case, but how do mathematical entities 
implement themselves other than via physical processes? We seem to be 
thinking that this is a solvable "Chicken and Egg" problem and I argue 
that we cannot use the argument of reduction to solve it. We must have 
both the physical and the mental, not at the primitive level of 
existence to be sure, but at the level where they have meaning. This is 
why I argue for a form of dualism that transforms into a neutral monism, 
like that of Russell, when taken to the level of the ding an sich. At the 
level of the ding an sich, difference itself vanishes, and thus to argue that 
matter or number is primitive is a moot point. We must be careful that 
we are not collapsing the levels in our thinking about this.




In other words, any bare ontological schema, uninterpreted, is unable,
from its own unsupplemented resources, to actualise whatever
higher-level emergents may be implicit within it.  But what else could
deliver that interpretation/actualisation?  What could embody the
collapse of ontology and epistemology into a single actuality?  Could
it be that interpretation is finally revealed only in the "conscious
merger" of these two polarities?


I agree. We might even go so far as to claim that consciousness 
obtains in the juxtaposition of the polarities, but my claim against 
Bruno is that the poles (of mind and body)  vanish when the radius of 
the sphere (to follow the analogy) goes to zero.


Onward!

Stephen




Re: An analogy for Qualia

2012-01-14 Thread Evgenii Rudnyi

On 14.01.2012 18:12 John Clark said the following:

On Fri, Jan 13, 2012  meekerdb  wrote:


There is no way consciousness can have a direct Darwinian
advantage

so it must be a byproduct of something that does have that virtue,
and the obvious candidate is intelligence.



That's not so clear since we don't know exactly what is the
relation of consciousness to intelligence.  For a social animal
having an internal model of ones self and being able to model the
thought processes of others has obvious reproductive advantage.



To do any one of the things you suggest would require intelligence,
and indeed there is some evidence that in general social animals tend
to have a larger brain than similar species that are not social. But
at any rate we both seem to agree that Evolution can only see
behavior, so consciousness must be a byproduct of some sort of
complex behavior. Thus the Turing Test must be valid not only for
intelligence but for consciousness too.


How would you generalize the Turing Test for consciousness?

Evgenii


John K Clark






Re: An analogy for Qualia

2012-01-14 Thread Evgenii Rudnyi

On 14.01.2012 17:56 meekerdb said the following:

On 1/14/2012 12:08 AM, Evgenii Rudnyi wrote:

On 14.01.2012 03:06 meekerdb said the following:

On 1/13/2012 2:50 PM, Evgenii Rudnyi wrote:

On 13.01.2012 22:36 meekerdb said the following:

On 1/13/2012 12:54 PM, Evgenii Rudnyi wrote:


...


By the way in the Gray's book the term intelligence is not
even in the index. This was the biggest surprise for me
because I always thought that consciousness and
intelligence are related. Yet, after reading the book, I
agree now with the author that conscious experience is a
separate phenomenon.


So does Gray think that beings can be conscious without being
intelligent or intelligent without being conscious?


The first part is a definite yes; for example, "But we can, I believe,
 safely assume that mammals possess conscious experience."



But mammals are quite intelligent? More intelligent than self-driving
 cars for example. So then I'm left to wonder what Gray means by
"intelligent"; except you say he doesn't even use the term.


I agree that mammals are more intelligent than self-driving cars. Gray 
though does not discuss the term "intelligent", so I do not know his 
opinion to this end.




There is no clear answer for the second part in the book. Well, for
 example

"Language, for example, cannot be necessary for conscious
experience.


It's not necessary for awareness and perception, but I think it is
necessary for some kinds of ratiocination.


Yes, but it might be that ratiocination is not necessary for conscious 
experience.


Evgenii


The reverse, however, may be true: it may be that language (and
other functions) could not be evolved in the absence of conscious
experience".

It depends however on the definition, I would say that a
self-driving car is intelligent and a rock not, but even in this
case it is not completely clear to me how to define it
unambiguously.

Gray's personal position is that consciousness's survival value is
"late error detection" that happens through some multipurpose and
multi-functional display. This actually fits quite well with
cybernetics but leaves open a question about the nature of such a
display.


But it leaves out our imaginative planning.

Brent



Evgenii








Re: An analogy for Qualia

2012-01-14 Thread John Clark
On Sat, Jan 14, 2012  Bruno Marchal  wrote:

> OK, but today we avoid the expression "computable number".
>

Why? Seems to me that quite a large number of people still use the term.  A
computable number is a real number that can be computed to any finite
number of digits by a Turing Machine; however, most irrational numbers,
nearly all in fact, are NOT computable. So the sort of numbers computers
or the human mind deal in cannot be the only thing that is fundamental,
because most numbers cannot be derived from them.
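
A small example of what "computable to any finite number of digits" means in
practice: a procedure that, given n, returns the first n decimal digits of
sqrt(2) (the choice of sqrt(2) is just illustrative).

from math import isqrt

def sqrt2_digits(n: int) -> str:
    # isqrt(2 * 10**(2n)) is the integer part of sqrt(2) * 10**n,
    # i.e. sqrt(2) truncated to n digits after the decimal point.
    scaled = isqrt(2 * 10 ** (2 * n))
    s = str(scaled)
    return s[0] + "." + s[1:]

print(sqrt2_digits(20))  # 1.41421356237309504880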

> All natural number are computable
>

Yes, but very few numbers are natural numbers.

> With mechanism it is absolutely indifferent which fundamental finite
> object we admit.
>

If by "mechanism" you mean determinism then your remarks are irrelevant
because we don't live in a deterministic universe, and even the natural
numbers are not finite.

>>  There is no way consciousness can have a direct Darwinian advantage so
>> it must be a byproduct of something that does have that virtue, and the
>> obvious candidate is intelligence.
>>
>
> > I disagree. Consciousness has a "darwinian role" in the very origin of
> the physical realm.
>

If Evolution can't see something then it can't select for it, and it can't
see consciousness in others any better than we can, just like us all it can
see is behavior.

>>>like relative universal self-speedin
>>>
>>
> >>  I don't know what that means.
>>
>
> > It means making your faculty of decision, with respect to your most
> probable environment, more quick.
>

In other words, thinking fast. The fastest signals in the human brain move
at about 100 meters per second, and many are far slower; the fastest
signals in a computer move at 300,000,000 meters per second.
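
The ratio implied by those two figures:

computer_speed = 3.0e8  # m/s, fastest signals in a computer (as cited above)
neural_speed = 100.0    # m/s, fastest signals in the brain (as cited above)
print(computer_speed / neural_speed)  # 3,000,000 -- about three million times faster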

 John K Clark




Re: Question about PA and 1p

2012-01-14 Thread David Nyman
On 14 January 2012 16:50, Stephen P. King  wrote:

> The problem is that mathematics cannot represent matter other than by
> invariance with respect to time, etc. absent an interpreter.

Sure, but do you mean to say that the interpreter must be physical?  I
don't see why.  And yet, as you say, the need for interpretation is
unavoidable.  Now, my understanding of Bruno, after some fairly close
questioning (which may still leave me confused, of course) is that the
elements of his arithmetical ontology are strictly limited to numbers
(or their equivalent) + addition and multiplication.  This emerged
during discussion of macroscopic compositional principles implicit in
the interpretation of micro-physical schemas; principles which are
rarely understood as being epistemological in nature.  Hence, strictly
speaking, even the ascription of the notion of computation to
arrangements of these bare arithmetical elements assumes further
compositional principles and therefore appeals to some supplementary
epistemological "interpretation".

In other words, any bare ontological schema, uninterpreted, is unable,
from its own unsupplemented resources, to actualise whatever
higher-level emergents may be implicit within it.  But what else could
deliver that interpretation/actualisation?  What could embody the
collapse of ontology and epistemology into a single actuality?  Could
it be that interpretation is finally revealed only in the "conscious
merger" of these two polarities?

David

> Hi Bruno,
>
>     You seem to not understand the role that the physical plays at all! This
> reminds me of an inversion of how most people cannot understand the way that
> math is "abstract" and have to work very hard to understand notions like "in
> principle a coffee cup is the same as a doughnut".
>
>
> On 1/14/2012 6:58 AM, Bruno Marchal wrote:
>
>
> On 13 Jan 2012, at 18:24, Stephen P. King wrote:
>
> Hi Bruno,
>
> On 1/13/2012 4:38 AM, Bruno Marchal wrote:
>
> Hi Stephen,
>
> On 13 Jan 2012, at 00:58, Stephen P. King wrote:
>
> Hi Bruno,
>
> On 1/12/2012 1:01 PM, Bruno Marchal wrote:
>
>
> On 11 Jan 2012, at 19:35, acw wrote:
>
> On 1/11/2012 19:22, Stephen P. King wrote:
>
> Hi,
>
> I have a question. Does not the Tennenbaum Theorem prevent the concept
> of first person plural from having a coherent meaning, since it seems to
> makes PA unique and singular? In other words, how can multiple copies of
> PA generate a plurality of first person since they would be an
> equivalence class. It seems to me that the concept of plurality of 1p
> requires a 3p to be coherent, but how does a 3p exist unless it is a 1p
> in the PA sense?
>
> Onward!
>
> Stephen
>
>
> My understanding of 1p plural is merely many 1p's sharing an apparent 3p
> world. That 3p world may or may not be globally coherent (it is most
> certainly locally coherent), and may or may not be computable, typically I
> imagine it as being locally computed by an infinity of TMs, from the 1p. At
> least one coherent 3p foundation exists as the UD, but that's something very
> different from the universe a structural realist would believe in (for
> example, 'this universe', or the MWI multiverse). So a coherent 3p
> foundation always exists, possibly an infinity of them. The parts (or even
> the whole) of the 3p foundation should be found within the UD.
>
> As for PA's consciousness, I don't know, maybe Bruno can say a lot more
> about this. My understanding of consciousness in Bruno's theory is that an
> OM(Observer Moment) corresponds to a Sigma-1 sentence.
>
>
> You can ascribe a sort of local consciousness to the person living,
> relatively to you, that Sigma_1 truth, but the person itself is really
> related to all the proofs (in Platonia) of that sentences (roughly
> speaking).
>
>
> OK, but that requires that I have a justification for a belief in Platonia.
> The closest that I can get to Platonia is something like the class of all
> verified proofs (which supervenes on some form of physical process.)
>
>
> You need just to believe that in the standard model of PA a sentence is true
> or false. I have not yet seen any book in math mentioning anything physical
> to define what that means.
> *All* math papers you cited assume no less.
>
>
>     I cannot understand how such an obvious concept is not understood, even
> the notion of universality assumes it. The point is that mathematical
> statements require some form of physicality to be known and communicated,
>
>
> OK. But they do not need physicality to be just true. That's the point.
>
>
>     Surely, but the truthfulness of a mathematical statement is meaningless
> without the possibility of physical implementation. One cannot even know of
> it absent the possibility of the physical.
>
>
>
> it just is the case that the sentence, model, recursive algorithm, whatever
> concept, etc. is independent of any particular form of physical
> implementation but is not independent of all physical representations.
>
>
> Of course it is.

Re: An analogy for Qualia

2012-01-14 Thread John Clark
On Fri, Jan 13, 2012  meekerdb  wrote:

>  >>  There is no way consciousness can have a direct Darwinian advantage
> so it must be a byproduct of something that does have that virtue, and the
> obvious candidate is intelligence.
>
>
>
> That's not so clear since we don't know exactly what is the relation of
> consciousness to intelligence.  For a social animal having an internal
> model of ones self and being able to model the thought processes of others
> has obvious reproductive advantage.
>

To do any one of the things you suggest would require intelligence, and
indeed there is some evidence that in general social animals tend to have a
larger brain than similar species that are not social. But at any rate we
both seem to agree that Evolution can only see behavior, so consciousness
must be a byproduct of some sort of complex behavior. Thus the Turing Test
must be valid not only for intelligence but for consciousness too.

  John K Clark




Re: An analogy for Qualia

2012-01-14 Thread meekerdb

On 1/14/2012 12:08 AM, Evgenii Rudnyi wrote:

On 14.01.2012 03:06 meekerdb said the following:

On 1/13/2012 2:50 PM, Evgenii Rudnyi wrote:

On 13.01.2012 22:36 meekerdb said the following:

On 1/13/2012 12:54 PM, Evgenii Rudnyi wrote:


...


By the way in the Gray's book the term intelligence is not even
in the index. This was the biggest surprise for me because I
always thought that consciousness and intelligence are related.
Yet, after reading the book, I agree now with the author that
conscious experience is a separate phenomenon.


So does Gray think that beings can be conscious without being
intelligent or intelligent without being conscious?


The first part is a definite yes; for example, "But we can, I believe, safely assume that 
mammals possess conscious experience."



But mammals are quite intelligent?  More intelligent than self-driving cars for example. 
So then I'm left to wonder what Gray means by "intelligent"; except you say he doesn't 
even use the term.




There is no clear answer for the second part in the book. Well, for example

"Language, for example, cannot be necessary for conscious experience. 


It's not necessary for awareness and perception, but I think it is necessary for some 
kinds of ratiocination.


The reverse, however, may be true: it may be that language (and other functions) could 
not be evolved in the absence of conscious experience".


It depends however on the definition, I would say that a self-driving car is intelligent 
and a rock not, but even in this case it is not completely clear to me how to define it 
unambiguously.


Gray's personal position is that consciousness's survival value is "late error detection" 
that happens through some multipurpose and multi-functional display. This actually fits 
quite well with cybernetics but leaves open a question about the nature of such a display.


But it leaves out our imaginative planning.

Brent



Evgenii






Re: Question about PA and 1p

2012-01-14 Thread Stephen P. King

Hi Bruno,

You seem to not understand the role that the physical plays at all! 
This reminds me of an inversion of how most people cannot understand the 
way that math is "abstract" and have to work very hard to understand 
notions like "in principle a coffee cup is the same as a doughnut".


On 1/14/2012 6:58 AM, Bruno Marchal wrote:


On 13 Jan 2012, at 18:24, Stephen P. King wrote:


Hi Bruno,

On 1/13/2012 4:38 AM, Bruno Marchal wrote:

Hi Stephen,

On 13 Jan 2012, at 00:58, Stephen P. King wrote:


Hi Bruno,

On 1/12/2012 1:01 PM, Bruno Marchal wrote:


On 11 Jan 2012, at 19:35, acw wrote:


On 1/11/2012 19:22, Stephen P. King wrote:

Hi,

I have a question. Does not the Tennenbaum Theorem prevent the 
concept
of first person plural from having a coherent meaning, since it 
seems to
makes PA unique and singular? In other words, how can multiple 
copies of

PA generate a plurality of first person since they would be an
equivalence class. It seems to me that the concept of plurality 
of 1p
requires a 3p to be coherent, but how does a 3p exist unless it 
is a 1p

in the PA sense?

Onward!

Stephen



My understanding of 1p plural is merely many 1p's sharing an 
apparent 3p world. That 3p world may or may not be globally 
coherent (it is most certainly locally coherent), and may or may 
not be computable, typically I imagine it as being locally 
computed by an infinity of TMs, from the 1p. At least one 
coherent 3p foundation exists as the UD, but that's something 
very different from the universe a structural realist would 
believe in (for example, 'this universe', or the MWI multiverse). 
So a coherent 3p foundation always exists, possibly an infinity 
of them. The parts (or even the whole) of the 3p foundation 
should be found within the UD.


As for PA's consciousness, I don't know, maybe Bruno can say a 
lot more about this. My understanding of consciousness in Bruno's 
theory is that an OM(Observer Moment) corresponds to a Sigma-1 
sentence.


You can ascribe a sort of local consciousness to the person 
living, relatively to you, that Sigma_1 truth, but the person 
itself is really related to all the proofs (in Platonia) of that 
sentences (roughly speaking).


OK, but that requires that I have a justification for a belief in 
Platonia. The closest that I can get to Platonia is something like 
the class of all verified proofs (which supervenes on some form of 
physical process.)


You need just to believe that in the standard model of PA a sentence 
is true or false. I have not yet seen any book in math mentioning 
anything physical to define what that means.

*All* math papers you cited assume no less.



    I cannot understand how such an obvious concept is not 
understood; even the notion of universality assumes it. The point is 
that mathematical statements require some form of physicality to be 
known and communicated,


OK. But they do not need physicality to be just true. That's the point.


Surely, but the truthfulness of a mathematical statement is 
meaningless without the possibility of physical implementation. One 
cannot even know of it absent the possibility of the physical.




it just is the case that the sentence, model, recursive algorithm, 
whatever concept, etc. is independent of any particular form of 
physical implementation but is not independent of all physical 
representations.


Of course it is. When you reason in PA you don't use any axiom 
referring to physics. To say that you need a physical brain begs the 
question *and* is a level-of-reasoning error.


    PA does not need to have any axioms that refer to physics. The fact 
that PA is inferred from patterns of chalk on a chalk board or patterns 
of ink on a whiteboard or patterns of pixels on a computer monitor or 
patterns of scratches in the dust or ... is sufficient to establish the 
truth of what I am saying. If you remove the possibility of physical 
implementation you also remove the possibility of meaningfulness.




We cannot completely abstract away the role played by the physical 
world.


That's what we do in math.


    Yes, but all the while the physical world is the substrate for our 
patterns, without which there is meaninglessness.


I simply cannot see how Sigma_1 sentences can interface with each 
other such that one can "know" anything about another absent some 
form of physicality.


The "interfaces" and the relative implementations are defined using 
addition and multiplication only, like in Gödel's original paper. 
Then UDA shows why physicality is an emergent pattern in the mind of 
number, and why it has to be like that if comp is true. AUDA shows 
how to make the derivation.


    No, you have only proven that the physicalist idea that "mind is 
an epiphenomenon" is false,


No. I show that the physical reality is not an ontological reality, 
once we assume we are (even material) machine.


And I agree, the physical is not a primitive in the existential 
sense, but 

Re: An analogy for Qualia

2012-01-14 Thread Bruno Marchal


On 13 Jan 2012, at 17:30, John Clark wrote:


On Thu, Jan 12, 2012  Bruno Marchal  wrote:


> I am not entirely sure what you mean by computable numbers (I  
guess you mean function).


A computable number is a number that can be approximated by a  
computable function, and a computable function is a function that  
can be evaluated with a mechanical device given unlimited time and  
storage space. Turing's famous 1936 paper where among other things  
he introduced the idea of what we now call a "Turing Machine" was  
called:


"On Computable Numbers, with an Application to the  
Entscheidungsproblem".


Turing showed that a very few real numbers, like the integers and  
the rational numbers, have formulas to calculate their value as  
closely as you'd like, but for the vast majority of numbers there is  
no way to do this. There are a few more numbers like PI that are  
computable with algorithms like PI = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - ..., 
but for most numbers there is nothing like that 
and no way to approximate their value. In fact he showed that almost  
all the numbers on the real number line are non-computable. There  
are LITERALLY infinitely more non-computable numbers than there are  
computable numbers; Turing proved that these numbers exist but  
ironically, despite their ubiquitous nature, neither Turing nor  
anybody else can unambiguously point to a single one of these  
numbers because there is no way to derive such a number from the  
numbers that we can point to, the computable numbers.
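
For what it is worth, the series above can be turned directly into a small 
Python sketch of what "computable" means here: a finite recipe whose partial 
sums get as close to PI as you like (the function name is invented for the 
example):

    def leibniz_pi(terms):
        # Partial sum of 4/1 - 4/3 + 4/5 - 4/7 + ...; for this alternating
        # series the error after k terms is below 4/(2k+1), so any desired
        # precision can be reached by taking enough terms.
        total = 0.0
        for k in range(terms):
            total += (-1) ** k * 4.0 / (2 * k + 1)
        return total

    # leibniz_pi(10)      -> about 3.0418
    # leibniz_pi(100000)  -> about 3.14158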


So numbers, at least the numbers we or computers can use, cannot be 
the only fundamental thing; non-computable numbers must be too. My 
point was that if there are 2 general classes of fundamental things 
that can not be simplified then there might be more. I think the 
intelligence-consciousness link is a third fundamental thing, but 
unlike Turing I can not prove it. And there may be fundamental 
things that we can never prove are fundamental; truth and proof are 
not the same thing.


OK, but today we avoid the expression "computable number". All natural 
numbers are computable, so we use the term computable function, and we 
represent computable real numbers by computable functions from N to N.
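
One common convention for that representation, sketched here under an assumed 
encoding (not necessarily the one meant above): a computable real x >= 0 is 
given by a computable f: N -> N with |x - f(n)/10^n| <= 10^-n. For the square 
root of 2, for instance:

    from math import isqrt

    def sqrt2(n):
        # f(n) = floor(sqrt(2) * 10**n), computed exactly with integer
        # arithmetic; f(n) / 10**n approximates sqrt(2) to within 10**-n.
        return isqrt(2 * 10 ** (2 * n))

    # sqrt2(0) -> 1
    # sqrt2(5) -> 141421, i.e. 1.41421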


With mechanism it is absolutely indifferent which fundamental finite 
objects we admit. I use numbers, but combinators or Java programs 
would be equivalent in that regard. So many things can be judged 
fundamental, but once we choose the basic ontology, the other 
things become derived notions.






> We can even ascribe it [consciousness] a role (explaining its  
Darwinian advantage)


There is no way consciousness can have a direct Darwinian advantage  
so it must be a byproduct of something that does have that virtue,  
and the obvious candidate is intelligence.


I disagree. Consciousness has a "darwinian role" in the very origin of 
the physical realm. This is not obvious, and it is counter-intuitive, so I 
don't expect you to grasp it before getting familiar with the UD 
consequences.






> like relative universal self-speeding.

I don't know what that means.


It means making your faculty of decision, with respect to your most 
probable environment, quicker.






> I suggest that the quantum nature of the observable reality might  
reflect the discovery that we are in that 'digital matrix'.


I don't know if that's true or not, but I do know that if I get too  
close to even the most beautiful and detailed picture on my computer  
screen I start to see individual pixels; and sometimes late at night  
I speculate that somebody made a programing mistake and tried to  
divide by zero at the singularity in the center of a Black Hole.


> I think that here you miss the UDA point.

That is entirely possible because I am unable to follow what you  
call your dovetailing argument; I really don't think you have stated  
it as clearly as you could.


I have stated it in a 100-step version, a 15-step version, and a 6-step 
version, but for many years I have stuck with the 8-step version, for it is 
the one that people understand most easily. It is in the sane04 paper, and 
you can ask any question. The first seven steps are rather easy and most 
people understand them without problem. They already show the reversal. If 
you want I can re-explain it step by step.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: An analogy for Qualia

2012-01-14 Thread Evgenii Rudnyi

On 14.01.2012 03:06 meekerdb said the following:

On 1/13/2012 2:50 PM, Evgenii Rudnyi wrote:

On 13.01.2012 22:36 meekerdb said the following:

On 1/13/2012 12:54 PM, Evgenii Rudnyi wrote:


...


By the way, in Gray's book the term intelligence is not even
in the index. This was the biggest surprise for me because I
always thought that consciousness and intelligence are related.
Yet, after reading the book, I agree now with the author that
conscious experience is a separate phenomenon.


So does Gray think that beings can be conscious without being
intelligent or intelligent without being conscious?


The first part is a definite yes; for example: "But we can, I believe, 
safely assume that mammals possess conscious experience."


There is no clear answer for the second part in the book. Well, for example

"Language, for example, cannot be necessary for conscious experience. 
The reverse, however, may be true: it may be that language (and other 
functions) could not be evolved in the absence of conscious experience".


It depends, however, on the definition. I would say that a self-driving 
car is intelligent and a rock is not, but even in this case it is not 
completely clear to me how to define it unambiguously.


Gray's personal position is that the survival value of consciousness is "late 
error detection" that happens through some multipurpose and 
multi-functional display. This actually fits quite well with cybernetics 
but leaves open a question about the nature of such a display.


Evgenii
