Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Bruno Marchal


On 16 Jan 2012, at 07:52, Quentin Anciaux wrote:




2012/1/16 Craig Weinberg whatsons...@gmail.com

On Jan 15, 3:07 pm, Quentin Anciaux allco...@gmail.com wrote:
 2012/1/14 Craig Weinberg whatsons...@gmail.com

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist, therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that  
it is
  impossible to create something that acts like a person which is  
not a

  person. Zombies are not Turing emulable.

No, zombies *that are persons in every aspect* are impossible. Not
only are they not Turing emulable... they are absurd.

If you define them that way then the word has no meaning. What is a
person in every aspect that is not at all a person?

The *only thing* a zombie lacks is consciousness... every other
aspect of a person, it has.


That is right. People should not confuse the Hollywood zombie with the
philosophical zombie, which is 3p-identical to a human person but
lacks any 1p perspective.


Note also that Turing invented his test to avoid the hard philosophical
issue of consciousness. In a nutshell, Turing defines
consciousness by the having of intelligent behavior. The Turing test
is equivalent to a kind of no-zombie principle.


It is like saying that if zombies exist, you have to treat them as
human beings, because we cannot know whether they are zombies.






The only way the
term has meaning is when it is used to define something that appears
to be a person in every way to an outside observer (and that would
ultimately have to be a human observer) but has no interior
experience. That is not absurd at all, and in fact describes
animation, puppetry, and machine intelligence.

Puppets and animations do not act like a person. They act like
puppets and animations. A philosophical zombie *acts like a person
but lacks consciousness*.


Exactly.

Bruno







  If we run the zombie argument backwards then, at what substitution
  level of zombiehood does a (completely possible) simulated person
  become a (non-Turing emulable) unconscious puppet? How bad of a
  simulation does it have to be before becoming an impossible  
zombie?


  This to me reveals an absurdity of arithmetic realism. Pinocchio  
the
  boy is possible to simulate mechanically, but Pinocchio the  
puppet is

  impossible.

 You conflate two (maybe more) notions of zombie... the only one
 important in the zombie argument is this: something that acts like a
 person *in every aspect* but nonetheless is not conscious... If that is
 indeed what you mean, then could you devise a test that could show
 that the zombie indeed lacks consciousness (remember that *by
 definition* you cannot tell apart the zombie and a real conscious
 person)?

No, I think that I have a workable and useful notion of zombie. I'm
not sure how the definition you are trying to use is meaningful. It seems
like a straw man of the zombie issue. We already know that
subjectivity is private, what we don't know is whether that means that
simulations automatically acquire consciousness or not. The zombie
issue is not to show that we can't imagine a person without
subjectivity and see that as evidence that subjectivity must
inherently arise from function. My point is that it also must mean
that we cannot stop inanimate objects from acquiring consciousness if
they are a sufficiently sophisticated simulation.

Craig






--
All those moments will be lost in time, like tears in rain.



http://iridia.ulb.ac.be/~marchal/






Re: An analogy for Qualia

2012-01-16 Thread Bruno Marchal


On 14 Jan 2012, at 19:00, John Clark wrote:


On Sat, Jan 14, 2012  Bruno Marchal marc...@ulb.ac.be wrote:

 OK, but today we avoid the expression computable number.

Why? It seems to me that quite a large number of people still use the
term. A computable number is a real number that can be computed to
any finite number of digits by a Turing machine; however most
irrational numbers, nearly all in fact, are NOT computable. So the
sort of numbers computers or the human mind deal in cannot be the
only thing that is fundamental, because most numbers cannot be
derived from them.


 All natural numbers are computable

Yes, but very few numbers are natural numbers.


But in computability theory we have only natural numbers. A real
number like pi or e is modeled by a total computable function from N
to N. It makes things simpler. There is no real theory of
computability for the real numbers; there is no equivalent of the
Church-Turing thesis for them. And with comp we don't need any
ontological numbers other than the natural numbers. The whole of
analysis and physics is eventually made epistemological (numbers'
ideas).
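
To make that convention concrete, here is a minimal Python sketch (an
illustration of the standard convention only, not a claim about any
particular formalization): a real like e is given by a total function
which, on input n, returns an integer a with |e - a/10^n| < 10^-n.

    from fractions import Fraction

    def e_approx(n):
        # Return an integer a with |e - a / 10**n| < 10**-n.
        # This total function of n stands for the real number e: the real
        # is "computable" exactly because this function is computable.
        target = Fraction(1, 10 ** (n + 1))
        s = Fraction(0)
        term = Fraction(1)          # term = 1/k!
        k = 0
        while 2 * term > target:    # the tail of the series sum(1/k!) is < 2*term
            s += term
            k += 1
            term /= k
        return round(s * 10 ** n)   # rounding adds at most 0.5 * 10**-n of error

    # e_approx(5) == 271828, i.e. e = 2.71828... to within 10**-5

Coding such integers back into N, and pairs into single numbers, is
routine, which is why everything can be kept inside the natural numbers.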








 With mechanism it is absolutely indifferent which fundamental  
finite object we admit.


If by mechanism you mean determinism then your remarks are  
irrelevant because we don't live in a deterministic universe, and  
even the natural numbers are not finite.


No. By mechanism I mean the idea that the brain (or whatever is needed
for consciousness) is Turing emulable. This shows immediately (UDA 1-3)
that we live in a non-deterministic reality. Non-determinism is a
simple consequence of mechanism; it arises from self-duplication.








  There is no way consciousness can have a direct Darwinian  
advantage so it must be a byproduct of something that does have that  
virtue, and the obvious candidate is intelligence.


 I disagree. Consciousness has a Darwinian role in the very
origin of the physical realm.


If Evolution can't see something then it can't select for it, and it
can't see consciousness in others any better than we can; just like
us, all it can see is behavior.


I am talking about the evolution of the physical laws. You have to follow
the whole UDA to understand the special and crucial role of
consciousness. Physical reality arises from the communicable
first-person-plural part of the consciousness flux existing in elementary
arithmetic as a whole. I know this is not obvious at all. That's why
it is a non-trivial discovery. It makes physics a branch of
mathematical computer science (alias number theory). By number I
always mean natural number.







like relative universal self-speeding

  I don't know what that means.

 It means making your faculty of decision, with respect to your
most probable environment, quicker.


In other words thinking fast. The fastest signals in the human brain
move at about 100 meters per second and many are far slower; the
fastest signals in a computer move at 300,000,000 meters per second.


That's why consciousness plays a key role. Any slow universal machine
can be arbitrarily sped up, on almost all its inputs, by a change of
software. This is Blum's speed-up theorem. Universal machines can always
be optimized by a change of software only, and one way to do that is to
allow the machine to believe in non-provable propositions. That's
why biological evolution selected conscious machines. They know much
more than what they can communicate, and eventually get puzzled by
such knowledge.
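
For reference, the speed-up theorem invoked here is usually stated
roughly as follows (the textbook form, stated from memory, not a
quotation): for every total computable function r there exists a total
computable 0-1 valued function f such that, for every program i
computing f, there is another program j computing f with
r(Phi_j(x)) <= Phi_i(x) for all but finitely many inputs x, where
Phi_i(x) is the running time (or any Blum complexity measure) of
program i on input x.
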
BTW I tend to use competence for what you call intelligence.  
Intelligence requires consciousness in my approach and definitions.  
Competence needs some amount of intelligence, but it has a negative  
feedback on intelligence.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Question about PA and 1p

2012-01-16 Thread Bruno Marchal


On 14 Jan 2012, at 18:51, David Nyman wrote:

On 14 January 2012 16:50, Stephen P. King stephe...@charter.net  
wrote:



The problem is that mathematics cannot represent matter other than by
invariance with respect to time, etc. absent an interpreter.


Sure, but do you mean to say that the interpreter must be physical?  I
don't see why.  And yet, as you say, the need for interpretation is
unavoidable.  Now, my understanding of Bruno, after some fairly close
questioning (which may still leave me confused, of course) is that the
elements of his arithmetical ontology are strictly limited to numbers
(or their equivalent) + addition and multiplication.  This emerged
during discussion of macroscopic compositional principles implicit in
the interpretation of micro-physical schemas; principles which are
rarely understood as being epistemological in nature.  Hence, strictly
speaking, even the ascription of the notion of computation to
arrangements of these bare arithmetical elements assumes further
compositional principles and therefore appeals to some supplementary
epistemological interpretation.

In other words, any bare ontological schema, uninterpreted, is unable,
from its own unsupplemented resources, to actualise whatever
higher-level emergents may be implicit within it.  But what else could
deliver that interpretation/actualisation?  What could embody the
collapse of ontology and epistemology into a single actuality?  Could
it be that interpretation is finally revealed only in the conscious
merger of these two polarities?



Actually you can define computation, even the universal machine, by using
only addition and multiplication. So universal machines exist in
elementary arithmetic in the same sense as prime numbers exist. All the
Bp and Dp are pure arithmetical sentences. What cannot be defined is Bp
& p, and we need to go out of the mind of the machine, and out of
arithmetic, to provide the meaning, and machines can do that too. So, in
arithmetic, you can find true statements about machines going outside of
arithmetic. It is here that we have to be careful not to make Searle's
error of confusing levels, and that's why the epistemology internal to
arithmetic can be bigger than arithmetic. Arithmetic itself does not
believe in that epistemology, but it believes in numbers believing in
it. Whatever you believe in will not be automatically believed by God,
but God will always believe that you do believe in it.


Bruno










David


Hi Bruno,

You seem to not understand the role that the physical plays at  
all! This
reminds me of an inversion of how most people cannot understand the  
way that
math is abstract and have to work very hard to understand notions  
like in

principle a coffee cup is the same as a doughnut.


On 1/14/2012 6:58 AM, Bruno Marchal wrote:


On 13 Jan 2012, at 18:24, Stephen P. King wrote:

Hi Bruno,

On 1/13/2012 4:38 AM, Bruno Marchal wrote:

Hi Stephen,

On 13 Jan 2012, at 00:58, Stephen P. King wrote:

Hi Bruno,

On 1/12/2012 1:01 PM, Bruno Marchal wrote:


On 11 Jan 2012, at 19:35, acw wrote:

On 1/11/2012 19:22, Stephen P. King wrote:

Hi,

I have a question. Does not the Tennenbaum theorem prevent the concept
of first person plural from having a coherent meaning, since it seems to
make PA unique and singular? In other words, how can multiple copies of
PA generate a plurality of first persons, since they would be an
equivalence class? It seems to me that the concept of a plurality of 1p
requires a 3p to be coherent, but how does a 3p exist unless it is a 1p
in the PA sense?

Onward!

Stephen


My understanding of 1p plural is merely many 1p's sharing an  
apparent 3p

world. That 3p world may or may not be globally coherent (it is most
certainly locally coherent), and may or may not be computable,  
typically I
imagine it as being locally computed by an infinity of TMs, from  
the 1p. At
least one coherent 3p foundation exists as the UD, but that's  
something very
different from the universe a structural realist would believe in  
(for

example, 'this universe', or the MWI multiverse). So a coherent 3p
foundation always exists, possibly an infinity of them. The parts  
(or even

the whole) of the 3p foundation should be found within the UD.

As for PA's consciousness, I don't know, maybe Bruno can say a lot  
more
about this. My understanding of consciousness in Bruno's theory is  
that an

OM(Observer Moment) corresponds to a Sigma-1 sentence.


You can ascribe a sort of local consciousness to the person living,
relatively to you, that Sigma_1 truth, but the person itself is
really related to all the proofs (in Platonia) of that sentence (roughly
speaking).


OK, but that requires that I have a justification for a belief in  
Platonia.
The closest that I can get to Platonia is something like the class  
of all

verified proofs (which supervenes on some form of physical process.)


You need just to believe that in 

Re: JOINING Post and On measure alteration mechanisms and other practical tests for COMP

2012-01-16 Thread Bruno Marchal


On 15 Jan 2012, at 00:17, Russell Standish wrote:


On Sat, Jan 07, 2012 at 07:02:52AM +0200, acw wrote:

On 1/6/2012 18:57, Bruno Marchal wrote:


On 05 Jan 2012, at 11:02, acw wrote:


Thanks for replying. I was worried my post was too big and few
people will bother reading it due to size. I hope to read your
opinion on the viability of the experiment I presented in my
original post.


Any chance you could break it up into smaller digestible pieces?


That would be a good idea. I read it twice, and it generated too many
comments in my head, and none seemed to address the point. Now I am more
busy, so acw will need to be patient while I grasp his idea.












To Bruno Marchal:

 Do you plan on ever publishing your thesis in English? My French is a
 bit rusty and it would take a rather long time to walk through it,
 however I did read the SANE and CCQ papers, as well as a few others.


I think that SANE is enough, although some people push me to submit it to
some more public journal. It is not yet clear whether physicists or
logicians will understand. Physicists ask the good questions but don't
have the logical tools. Logicians have the right tools, but are not really
interested in the applied question. By tradition, modern logicians
despise their philosophical origin. Some personal contingent problems
slow me down, too. I don't want to bore you with this.


If it's sufficient, I'll just have to read the right books to better
understand AUDA. As it is now, I understood some parts, but also had
trouble connecting some ideas in the AUDA.

Maybe I should write a book. There is, on my URL, a long version of the
thesis in French, "Conscience et Mécanisme", with all details, but then
it is 700 pages long, and even there non-logicians do not grasp the
logic. It is a pity, but this kind of work reveals the abyssal gap
between logicians and physicists, and the Penrose misunderstanding of
Gödel's theorem has frightened physicists away from taking any further
look. To defend the thesis it took me more time to explain elementary
logic and computer science than philosophy of mind.



A book would surely appeal to a larger audience, but a paper which
only mentions the required reading could also be enough, although in
the latter case fewer people would be willing to spend the time to
understand it.


There is a project underway to translate "Secret de l'amibe" into
English, which IMHO is an even better introduction to the topic than
Bruno's theses (a lot of technical detail has been suppressed to make
the central ideas digestible). We're about half way through at present
- it's a volunteer project though, so it will probably be another year
or so before it is done.


Thanks to Russell and Kim.









 Does anyone have a complete downloadable archive of this mailing list,
 besides the web-accessible Google Groups or Nabble ones?
 Google Groups seems to group posts together badly and generates some
 duplicates for older posts.


I agree. Google Groups is not practical. The first old archive was
very nice (Escribe); but as with all software, archiving gets worse
with time. Nabble is already better, and I don't know if there are other
ones. Note also that the Everything List, maintained by Wei Dai, has been
running for a long time, so the total archive must be rather huge.
Thanks to Wei Dai for maintaining the list, even though the ASSA people
(Hal Finney, Wei Dai in some posts, Schmidhuber, ...) seem to have quit
after losing the argument with the RSSA people. Well, to be sure, Russell
Standish still uses ASSA, it seems to me, and I have always defended the
idea that ASSA is indeed not completely nonsensical, although it
concerns the geography more than the physics, in the comp frame.


If someone from those early times still has the posts, it might be
nice if they decided to post an archive (such as a mailer spool).
For large Usenet groups, it's not unusual for people to have
personal archives, even from 1980's and earlier.



I have often thought this would be a very useful resource - sadly I
never kept my own archive. It would probably be a good idea to use a
webbot / spider to download the contents of the archives as they
currently exist.


That might be useful. Especially with things like NDAA, SOPA, etc.
Looks like deeper threats than usual accumulate on the free world.





I had no idea that was the reason; I don't see them post
anymore (when I was looking at older posts, I saw they used to post
here).



For most people, the everything list is a side interest, and other
priorities and interests will interfere with participation. Bruno is
one of the few people who has dedicated his life to this topic, so one
shouldn't be too surprised if other people leave the list out of  
exhaustion :).


In cognitive science, many confuse science and philosophy. I like  
philosophy but it is not my job. I don't defend any truth, but only  
attempt to criticize invalid arguments.






As for losing the  RSSA 

Re: An analogy for Qualia

2012-01-16 Thread Bruno Marchal


On 15 Jan 2012, at 09:13, Evgenii Rudnyi wrote:


What about the Turing test for a person in that state to check if he  
still has consciousness?


As I said in another post, the very idea of the Turing test consists
in completely avoiding the notion of consciousness.
I do disagree with Turing on this. We can build a theory of
consciousness, including, as with comp, a theory having refutable
consequences. Turing was still influenced by Vienna-like positivism.


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: An analogy for Qualia

2012-01-16 Thread Bruno Marchal


On 15 Jan 2012, at 18:14, John Clark wrote:

On Sat, Jan 14, 2012 at 1:56 PM, Stephen P. King stephe...@charter.net 
 wrote:

 How would you generalize the Turing Test for consciousness?


By doing the exact same thing we do when we evaluate our fellow  
human beings, assume that there is a direct link between intelligent  
behavior and consciousness.


I agree with this. But we cannot directly test consciousness and
intelligence. We can measure and evaluate competence, but it is
domain-dependent, and unrelated to intelligence and consciousness. Local
zombies *can* exist. Any intelligent or conscious behavior can be
ascribed to something not conscious, for a short period of time.




When one of our fellow creatures is drowsy they don't behave very
intelligently and we assume they are less conscious than they were
when they were taking a calculus exam. And when they are in a deep
sleep, under anesthesia, or dead they behave even less
intelligently and we assume (even though there is no proof) that
their consciousness is similarly affected.


With comp we can show that consciousness is never affected, but the
relative manifestation of consciousness can be affected. Again, this
is counter-intuitive. The brain seems gifted at making us believe in
unconsciousness, but that is an illusion brought about by dissociative
subroutines, or even chemicals. It is weird, and I doubt it to be true,
but with comp, consciousness is an inescapable prison. You can hope
only for relative amnesia.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Bruno Marchal


On 15 Jan 2012, at 19:33, John Clark wrote:


 On Sat, Jan 14, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 If computationalism argues that zombies can't exist, therefore  
anything that we cannot distinguish from a conscious person must be  
conscious, that also means that it is impossible to create something  
that acts like a person which is not a person. Zombies are not  
Turing emulable.


Maybe. Zombie behavior is certainly Turing emulable but you are  
asking more than that and there is no way to prove what you want to  
know because it hinges on one important question: how can you tell  
if a zombie is a zombie? Brains are not my favorite meal but I don't  
think dietary preference or even unsightly skin blemishes are a good  
test for consciousness; I believe zombies have little if any  
consciousness because, at least as depicted in the movies, zombies  
act really really dumb. But maybe the film industry is inflicting an  
unfair stereotype on a persecuted minority and there are good hard  
working zombies out there who you don't hear about that write love  
poetry and teach at Harvard, if so then I think those zombies are  
conscious even if I would still find a polite excuse to decline  
their invitation to dinner.


 This to me reveals an absurdity of arithmetic realism. Pinocchio  
the boy is possible to simulate mechanically, but Pinocchio the  
puppet is impossible. Doesn't that strike anyone else as an obvious  
deal breaker?


I find nothing absurd about that and neither did Evolution. The  
parts of our brain that so dramatically separate us from other  
animals, the parts that deal with language and long-term planning and
mathematics took HUNDREDS of times longer to evolve than the parts  
responsible for intense emotion like pleasure, pain, fear, hate,  
jealousy and love. And why do you think it is that in this group and  
elsewhere everybody and their brother is pushing their own General  
Theory of Consciousness  but nobody even attempts a General Theory  
of Intelligence?


There are general theories of learning, like those of Case and Smith,
Blum, Osherson, etc. But they are necessarily non-constructive. They
are usable neither for building AI, nor for verifying whether something
is intelligent. It shows that intelligence (competence) is an
intrinsically hard subject with many non-comparable degrees of intelligence.
Intelligence is not programmable. It is only self-programmable, and it
interests nobody, except philosophers and theologians. When machines
become intelligent, we will send them to camps or jails. Intelligence
leads to dissidence. We pretend to appreciate intelligence, but we
invest a lot in preventing it, in both children and machines.







The reason is that theorizing about the one is easy but  theorizing  
about the other is hard, hellishly hard, and because when  
intelligence theories fail they fail with a loud thud that is  
obvious to all, but one consciousness theory works as well, or as  
badly, as any other.


See the work of Case and Smith. It is not well known because it is
based on theoretical computer science (recursion theory), which is not
well known either. There are definite interesting results there, even if
not applicable. The non-union theorem of Blum shows that there is
something uncomputably much more intelligent than a machine: a couple
of machines. The theory is super-non-linear.




Consciousness theories are easy because there are no facts they need  
to explain,


What? With comp, not only do you have to explain the qualia, but it has
been proved that you have to explain the quanta as well, and this
without assuming a physical reality.




but there is an astronomical number of things that need to be  
explained to understand how intelligence works.


Not really. It is just that intelligent things organize themselves in
ways that are not predictable at all. The basics are simple (addition and
multiplication) but the consequences are not boundable.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Question about PA and 1p

2012-01-16 Thread David Nyman
On 16 January 2012 10:04, Bruno Marchal marc...@ulb.ac.be wrote:

 Actually you can define computation, even universal machine, by using only
 addition and multiplication. So universal machine exists in elementary
 arithmetic in the same sense as in the existence of prime number.

That may be, but we were discussing interpretation.  As you say above:
YOU can define computation, even universal machine, by using only
addition and multiplication (my emphasis). But this is surely, as you
are wont to say, too quick.  Firstly, in what sense can numbers in
simple arithmetical relation define THEMSELVES as computation, or
indeed as anything else than what they simply are?  I think that the
ascription of self-interpretation to a bare ontology is superficial;
it conceals an implicit supplementary appeal to epistemology, and
indeed to a self.  Hence it appears that some perspectival union of
epistemology and ontology is a prerequisite of interpretation.

David


 On 14 Jan 2012, at 18:51, David Nyman wrote:

 On 14 January 2012 16:50, Stephen P. King stephe...@charter.net wrote:

 The problem is that mathematics cannot represent matter other than by
 invariance with respect to time, etc. absent an interpreter.


 Sure, but do you mean to say that the interpreter must be physical?  I
 don't see why.  And yet, as you say, the need for interpretation is
 unavoidable.  Now, my understanding of Bruno, after some fairly close
 questioning (which may still leave me confused, of course) is that the
 elements of his arithmetical ontology are strictly limited to numbers
 (or their equivalent) + addition and multiplication.  This emerged
 during discussion of macroscopic compositional principles implicit in
 the interpretation of micro-physical schemas; principles which are
 rarely understood as being epistemological in nature.  Hence, strictly
 speaking, even the ascription of the notion of computation to
 arrangements of these bare arithmetical elements assumes further
 compositional principles and therefore appeals to some supplementary
 epistemological interpretation.

 In other words, any bare ontological schema, uninterpreted, is unable,
 from its own unsupplemented resources, to actualise whatever
 higher-level emergents may be implicit within it.  But what else could
 deliver that interpretation/actualisation?  What could embody the
 collapse of ontology and epistemology into a single actuality?  Could
 it be that interpretation is finally revealed only in the conscious
 merger of these two polarities?



 Actually you can define computation, even universal machine, by using only
 addition and multiplication. So universal machine exists in elementary
 arithmetic in the same sense as in the existence of prime number. All the
 Bp and Dp are pure arithmetical sentences. What cannot be defined is Bp
 & p, and we need to go out of the mind of the machine, and out of
 arithmetic, to provide the meaning, and machines can do that too. So, in
 arithmetic, you can find true statement about machine going outside of
 arithmetic. It is here that we have to be careful of not doing Searle's
 error of confusing levels, and that's why the epistemology internal in
 arithmetic can be bigger than arithmetic. Arithmetic itself does not
 believe in that epistemology, but it believes in numbers believing in
 them. Whatever you believe in will not been automatically believed by God,
 but God will always believe that you do believe in them.

 Bruno










 David

 Hi Bruno,

    You seem to not understand the role that the physical plays at all!
 This
 reminds me of an inversion of how most people cannot understand the way
 that
 math is abstract and have to work very hard to understand notions like
 in
 principle a coffee cup is the same as a doughnut.


 On 1/14/2012 6:58 AM, Bruno Marchal wrote:


 On 13 Jan 2012, at 18:24, Stephen P. King wrote:

 Hi Bruno,

 On 1/13/2012 4:38 AM, Bruno Marchal wrote:

 Hi Stephen,

 On 13 Jan 2012, at 00:58, Stephen P. King wrote:

 Hi Bruno,

 On 1/12/2012 1:01 PM, Bruno Marchal wrote:


 On 11 Jan 2012, at 19:35, acw wrote:

 On 1/11/2012 19:22, Stephen P. King wrote:

 Hi,

 I have a question. Does not the Tennenbaum Theorem prevent the concept
 of first person plural from having a coherent meaning, since it seems to
 makes PA unique and singular? In other words, how can multiple copies of
 PA generate a plurality of first person since they would be an
 equivalence class. It seems to me that the concept of plurality of 1p
 requires a 3p to be coherent, but how does a 3p exist unless it is a 1p
 in the PA sense?

 Onward!

 Stephen


 My understanding of 1p plural is merely many 1p's sharing an apparent 3p
 world. That 3p world may or may not be globally coherent (it is most
 certainly locally coherent), and may or may not be computable, typically
 I
 imagine it as being locally computed by an infinity of TMs, from the 1p.
 At
 least one coherent 3p foundation exists as the UD, but that's something
 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Mon, Jan 16, 2012 at 5:39 AM, Bruno Marchal marc...@ulb.ac.be wrote:

   Consciousness theories are easy because there are no facts they need to
 explain



 What? With comp, not only you have to explain the qualia


With ANY theory of consciousness you have to explain qualia, and every
consciousness theory does as well or as badly as any other in doing that.


  but it has been proved that you have to explain the quanta as well,


I don't know what that means.


  and this without assuming a physical reality.


 But I do know that assuming reality does not seem to be a totally
outrageous assumption.

  but there is an astronomical number of things that need to be explained
 to understand how intelligence works.


 Not really. It is just that [...]


If you know how intelligence works you can make a super intelligent
computer right now and you're well on your way to becoming a trillionaire.
It seems to me that when discussing this very complex subject people use
the phrase "it's just" a bit too much.

intelligent things organize themselves in non predictable way, at all. The
 basic are simple (addition and multiplication)


That's like saying I know how to cure cancer, it's basically simple, just
arrange the atoms in cancer cells so that they are no longer cancerous.
It's easy to learn the fundamentals of Chess, the rules of the game, but
that does not mean you understand all the complexities and subtleties of it
and are now a grandmaster.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Sun, Jan 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com wrote:

  I think that I have a workable and useful notion of zombie.



Then I would very much like to hear what it is. What really grabbed my
attention is that you said it was "workable and useful", so whatever
notion you have it can't include things like "zombies are conscious but" or
"zombies are NOT conscious but", because I have no way to directly test for
consciousness, so such a notion would not be workable or useful to me.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 11:23 am, John Clark johnkcl...@gmail.com wrote:
 On Sun, Jan 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com wrote:

   I think that I have a workable and useful notion of zombie.



 Then I would very much like to hear what it is. What really grabbed my
 attention is that you said it was  workable and useful, so whatever
 notion you have it can't include things like zombies are conscious but or
 zombies are NOT conscious but because I have no way to directly test for
 consciousness so such a notion would not be workable or useful to me.

Zombie describes something which seems like it could be conscious from
the outside (ie to a human observer) but actually is not.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Jason Resch
Craig,

Do you have an opinion regarding the possibility of Strong AI, and the
other questions I posed in my earlier post?

Thanks,

Jason

On Mon, Jan 16, 2012 at 10:50 AM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 16, 11:23 am, John Clark johnkcl...@gmail.com wrote:
  On Sun, Jan 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
 
I think that I have a workable and useful notion of zombie.
 
 
 
  Then I would very much like to hear what it is. What really grabbed my
  attention is that you said it was  workable and useful, so whatever
  notion you have it can't include things like zombies are conscious but
 or
  zombies are NOT conscious but because I have no way to directly test
 for
  consciousness so such a notion would not be workable or useful to me.

 Zombie describes something which seems like it could be conscious from
 the outside (ie to a human observer) but actually is not.

 Craig







Re: Question about PA and 1p

2012-01-16 Thread Bruno Marchal


On 16 Jan 2012, at 15:32, David Nyman wrote:


On 16 January 2012 10:04, Bruno Marchal marc...@ulb.ac.be wrote:

Actually you can define computation, even universal machine, by  
using only
addition and multiplication. So universal machine exists in  
elementary

arithmetic in the same sense as in the existence of prime number.


That may be, but we were discussing interpretation.  As you say above:
YOU can define computation, even universal machine, by using only
addition and multiplication (my emphasis).


Not just ME. A tiny part of arithmetic can too. All universal numbers
can do that. No need of a first-person notion. All this can be shown in
a 3p way, indeed in arithmetic, even without the induction axioms, so
that we don't need Löbian machines.
The existence of the UD, for example, is a theorem of (Robinson)
arithmetic.
Now, those kinds of truths are rather long and tedious to show. This was
shown mainly by Gödel in his 1931 paper (for rich Löbian theories).
It is called the arithmetization of metamathematics. I will try to
explain the salt of it without being too technical below.





But this is surely, as you
are wont to say, too quick.  Firstly, in what sense can numbers in
simple arithmetical relation define THEMSELVES as computation, or
indeed as anything else than what they simply are?


Here you ask a more difficult question. Nevertheless it admits a  
positive answer.





I think that the
ascription of self-interpretation to a bare ontology is superficial;
it conceals an implicit supplementary appeal to epistemology, and
indeed to a self.


But we can define a notion of 3-self in arithmetic. Then to get the
1-self, we go to the meta-level and combine it with the notion of
arithmetical truth. That notion is NOT definable in arithmetic, but
that is a good thing, because it will explain why the notion of first
person, and of consciousness, will not be definable by the machine.






Hence it appears that some perspectival union of
epistemology and ontology is a prerequisite of interpretation.


OK. But the whole force of comp comes from the fact that you can  
define a big part of that epistemology using only the elementary  
ontology.


Let us agree on what we mean by defining something in arithmetic (or  
in the arithmetical language).


The arithmetical language is the first-order (predicate) logic with
equality (=), so that it has the usual logical connectives &, V, →, ~
(and, or, implies, not), and the quantifiers E and A ("there exists"
and "for all"), together with the special arithmetical symbols 0, s,
+ and *.


To illustrate an arithmetical definition, let me give you some  
definitions of simple concepts.


We can define the arithmetical relation x ≤ y (x is less than or
equal to y).


Indeed x ≤ y if and only if
Ez(x + z = y)

We can define x < y (x is strictly less than y) by
Ez((x + z) + s(0) = y)

We can define (x divides y) by
Ez(x * z = y)

Now we can define (x is a prime number) by

  Az[ (x ≠ 1) & ((z divides x) → ((z = 1) V (z = x))) ]

which should be seen as a macro abbreviation of

Az( ~(x = s(0)) & (Ey(z * y = x) → ((z = s(0)) V (z = x))) ).

Now I tell you that we can define, in exactly that manner, the notions
of universal number, computation, proof, etc.
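
To make these definitions concrete, here is a small Python sketch (an
illustration of my own, not part of the formal development): it mirrors
the formulas above by a bounded search over the witness z, which is
enough here since any witness can be taken to be at most y, respectively x.

    def leq(x, y):        # x <= y   iff   Ez (x + z = y)
        return any(x + z == y for z in range(y + 1))

    def lt(x, y):         # x < y    iff   Ez ((x + z) + s(0) = y), with s(0) = 1
        return any((x + z) + 1 == y for z in range(y + 1))

    def divides(x, y):    # x divides y   iff   Ez (x * z = y)
        return any(x * z == y for z in range(y + 1))

    def prime(x):
        # Az ( x != 1  &  ((z divides x) -> (z = 1 or z = x)) )
        # Any divisor of a positive x is <= x, so bounded search suffices;
        # x = 0 is excluded directly (2 divides 0, yet 2 is neither 1 nor 0).
        return x not in (0, 1) and all(
            not divides(z, x) or z == 1 or z == x for z in range(1, x + 1))

    # lt(4, 6) holds with witness z = 1, matching the s(0) example below;
    # [n for n in range(20) if prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]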


In particular, any proposition of the form phi_i(j) = k can be
translated into arithmetic. A famous predicate due to Kleene is used to
that effect. A universal number u can be defined by the relation
AxAy(phi_u(<x,y>) = phi_x(y)), with <x,y> being a computable bijection
from NxN to N.


Just as metamathematics can be arithmetized, theoretical computer science
can be arithmetized.


The interpretation is not done by me, but by the true relation between
the numbers. 4 < 6 because it is true that Ez( (s(s(s(s(0)))) + z) + s(0) =
s(s(s(s(s(s(0)))))) ). That is true. Such a z exists, notably z =
s(0).


Likewise, assuming comp, the reason why you are conscious here and
now is that your relative computational state exists, together with
the infinitely many computations going through it.
Your consciousness is harder to tackle, because it refers more
explicitly to that truth, as in the Bp & p Theaetetical trick.


I do not need an extra God or observer of arithmetical truth to
interpret some number relations as computations, because the numbers,
relative to each other, already do that task. From their view, to believe
that we need some extra interpreter would be like believing
that if your own brain were not observed by someone, it would not be
conscious.


Let me say two or three words on the SELF. Basically, it is very
simple. You don't need universal numbers, nor a super-rich environment.
You need an environment (machine, number) capable of duplicating or
concatenating pieces of code. I usually sing this: if D(x) gives the
description of x(x), then D(D) gives the description of DD. This
belongs to the diagonalization family, and can be used to prove the
existence of programs
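
To see that trick concretely, a minimal Python sketch (my own notation,
purely for illustration): applying D to its own description yields an
expression which evaluates back to itself.

    def D(x):
        # Given the text of a one-argument expression x, return the text of
        # "x applied to its own description", i.e. a description of x(x).
        return "({x})({x!r})".format(x=x)

    d = "lambda x: '({x})({x!r})'.format(x=x)"   # a description of D itself

    print(D(d))                  # a description of D(D)...
    print(eval(D(d)) == D(d))    # ...which evaluates to itself: True

This is the same diagonal move that yields self-reproducing programs
(quines) and the fixed points used in the recursion theorem.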

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Bruno Marchal


On 16 Jan 2012, at 17:08, John Clark wrote:

On Mon, Jan 16, 2012 at 5:39 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


   Consciousness theories are easy because there are no facts they  
need to explain



 What? With comp, not only you have to explain the qualia

With ANY theory of consciousness you have to explain qualia,


Correct.




and every consciousness theory does as well or as badly as any other  
in doing that.



So you believe that the theory according to which consciousness is a
gift from a creationist God is as bad as the theory according to which
consciousness is related to brain activity?






 but it has been proved that you have to explain the quanta as well,

I don't know what that means.


It means that:
1) the quanta do not exist primitively but emerge, in the comp case,
from number relations;
2) physicalism is false, and you have to derive the physical
laws from those number relations. More exactly, you have to derive the
beliefs in the physical laws from those number relations.






 and this without assuming a physical reality.

 But I do know that assuming reality does not seem to be a totally  
outrageous assumption.


Sure. But I was talking about the assumption of a primitively physical
reality. That is shown, by the UD Argument, not to work when we
assume that we are digitalisable machines. It is not outrageous; it is
useless, nonsensical, wrong by the usual Occam razor, in the same
sense that it is wrong that invisible horses pulling cars would be the
real reason why cars move.





 but there is an astronomical number of things that need to be  
explained to understand how intelligence works.


Not really. It is just that [...]

If you know how intelligence works you can make a super intelligent  
computer right now and you're well on your way to becoming a  
trillionaire. It seems to me that when discussing this very complex  
subject people use the phrase it's just a bit too much.


You seem quite unfair. I was saying, in full: "It is just that
intelligent things organize themselves in ways that are not predictable
at all." The "it is just" was not effective, and that was my point!


This means that indeed we can write simple programs leading to
intelligence, but I could hardly become a trillionaire with that, because
they might need an incompressibly long time to show intelligence. Better
to use nature's trick of copying from what has already been done. My whole
point is that intelligence is not a constructive concept; like
consciousness, you cannot define it. You can define competence, and
competence already leads by itself to many non-constructive notions and
comparisons. The details are tricky and there is a very large
literature in theoretical artificial intelligence and learning
theories.


Simple programs leading to intelligence are "grow, diversify, and
multiply as much as possible in a big but finite environment", or "help
yourself", etc. The UD can also be seen as a little program leading
to the advent of intelligence (assuming mechanism), but not in a
necessarily tractable way.


We are discussing in a context where the goal is not to do artificial
intelligence engineering, but to find a theory of
everything, including persons, consciousness, etc.







intelligent things organize themselves in non predictable way, at  
all. The basic are simple (addition and multiplication)


That's like saying I know how to cure cancer, it's basically simple,  
just arrange the atoms in cancer cells so that they are no longer  
cancerous.  It's easy to learn the fundamentals of Chess, the rules  
of the game, but that does not mean you understand all the  
complexities and subtleties of it and are now a grandmaster.


OK. That was my point. I never pretended to even know what
intelligence really is. You should not mock the trivial points I make,
because they are used in a not completely trivial way to show that the
assumption of mechanism makes physics a branch of number theory (which
is a key point in the search for a theory of everything). A reasoning
made clear = a succession of trivial points.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Mon, Jan 16, 2012 at 11:50 AM, Craig Weinberg whatsons...@gmail.com wrote:

  I think that I have a workable and useful notion of zombie. [...]
 Zombie describes something which seems like it could be conscious from the
 outside (ie to a human observer) but actually is not.


As I have absolutely no way of directly determining if a zombie is actually
conscious or actually is not, then despite your claim your notion is
neither workable nor useful.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Stephen P. King

Hi,

    My $.02. I am reminded of the argument in Matrix Philosophy that if 
we cannot argue that our experiences are *not* simulations then we might 
as well bet that they are. While I have found that there are upper 
bounds on computation-based content via logical arguments such as 
David Deutsch's CANTGOTO 
(http://en.wikipedia.org/wiki/The_Fabric_of_Reality) and Carlton Caves' 
(http://arxiv.org/abs/quant-ph/0304083) research on computational 
resources, it seems to me that we have sufficient evidence to argue that 
if it is possible for a being to have 1p associated with it, then we 
might as well bet that they do. So I am betting that zombies do not 
exist. One simply cannot remain agnostic on this issue.


Onward!

Stephen


On 1/16/2012 1:27 PM, John Clark wrote:
On Mon, Jan 16, 2012 at 11:50 AM, Craig Weinberg 
whatsons...@gmail.com wrote:


  I think that I have a workable and useful notion of zombie.
[...]  Zombie describes something which seems like it could be
conscious from the outside (ie to a human observer) but actually
is not.


As I have absolutely no way of directly determining if a zombie is 
actually conscious or actually is not then despite your claim your 
notion is neither workable or useful.


 John K Clark





Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Mon, Jan 16, 2012 at 1:08 PM, Bruno Marchal marc...@ulb.ac.be wrote:

So you believe that the theory according to which consciousness is a gift
 by a creationist God is as bad as the theory according to which
 consciousness is related to brain activity?


If creationists could explain consciousness then I would be a creationist,
but they cannot. Brain activity does not explain consciousness either. I
don't know how, but I believe as certainly as I believe anything that
intelligence causes consciousness. I believe this not because I can prove
it but because I simply could not function if I thought I was the only
conscious being in the universe.

 the quanta does not exist primitively but emerge, in the comp case, from
 number relations.


What sort of numbers, computable numbers or the far more common
non-computable numbers? And what sort of relations?



 This means that indeed we can write simple program leading to
 intelligence


I don't know what that simple program could be, but I have already given an
example of a simple program leading to emotion.


 My whole point is that intelligence is not a constructive concept, like
 consciousness you cannot define it.


Intelligence is problem solving; not a perfect definition by any means but
far, far better than any known definition of consciousness. Examples are
better than definitions anyway: intelligence is what Einstein did and
consciousness is what I am.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
 Craig,

 Do you have an opinion regarding the possibility of Strong AI, and the
 other questions I posed in my earlier post?


Sorry Jason, I didn't see your comment earlier.

On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
 On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.com wrote:

Thought I'd throw this out there. If computationalism argues that
zombies can't exist,

   I think the two ideas zombies are impossible and computationalism are
   independent.  Where you might say they are related is that a disbelief in
   zombies yields a strong argument for computationalism.

  I don't think that it's possible to say that any two ideas 'are'
  independent from each other.

 Okay.  Perhaps 'independent' was not an ideal term, but computationalism is
 at least not dependent on an argument against zombies, as far as I am aware.

What computationalism does depend on, though, is the same view of
consciousness that zombies would disqualify.


  All ideas can be related through semantic
  association, however distant. As far as your point though, of course I
  see the opposite relation - while admitting even the possibility of
  zombies suggests computationalism is founded on illusion, but a
  disbelief in zombies gives no more support for computationalism than
  it does for materialism or panpsychism.

 If one accepts that zombies are impossible, then to reject computationalism
 requires also rejecting the possibility of Strong AI 
 (https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).

What I'm saying is that if one accepts that zombies are impossible,
then to accept computationalism requires accepting that *all* AI is
strong already.












therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.

   I think there is a subtle difference in meaning between it is impossible
   to create something that acts like a person which is not a person and
   saying Zombies are not Turing emulable.  It is important to remember
  that
   the non-possibility of zombies doesn't imply a particular person or thing
   cannot be emulated, rather it means there is a particular consequence of
   certain Turing emulations which is unavoidable, namely the
   consciousness/mind/person.

  That's true, in the sense that emulable can only refer to a specific
  natural and real process being emulated rather than a fictional one.
  You have a valid point that the word emulable isn't the best term, but
  it's a red herring since the point I was making is that it would not
  be possible to avoid creating sentience in any sufficiently
  sophisticated cartoon, sculpture, or graphic representation of a
  person. Call it emulation, simulation, synthesis, whatever, the result
  is the same.

 I think you and I have different mental models for what is entailed by
 emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
 projections, and so on, don't necessarily compute anything (or at least,
 what they might depict as being computed can have little or no relation to
 what is actually computed by said cartoon, sculpture, recording,
 projection...). For actual computation you need counterfactual conditions.
 A cartoon depicting an AND gate is not required to behave as a genuine AND
 gate would, and flashing a few frames depicting what such an AND gate might
 do is not equivalent to the logical decision of an AND gate.

I understand what you think I mean, but you're strawmanning my point.
An AND gate is a generalizable concept. We know that. Its logic can
be enacted in many (but not every) different physical forms. If we
built the Lego AND mechanism seen here: 
http://goldfish.ikaruga.co.uk/andnor.html#
and attached each side to an effector which plays a cartoon of a
semiconductor AND gate, then you would have a cartoon which simulates
an AND gate. The cartoon would be two separate cartoons in
reality, and the logic between them would be entirely inferred by the
audience, but this apparatus could be interpreted by the audience as a
functional simulation. The audience can jump to the conclusion that
the cartoon is a semiconductor AND gate. This is all that Strong AI
will ever be.
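
As an aside, the "counterfactual conditions" mentioned above can be
stated very concretely. A minimal sketch of the distinction, offered as
illustration only and not as an argument for either side: a genuine gate
is defined on every input, while a recording only replays the frames it
happens to contain.

    def and_gate(a, b):
        # A genuine AND gate: defined on every input pair, so the
        # counterfactuals ("what it would output on inputs it never
        # actually receives") are fixed by the function itself.
        return a and b

    # A "cartoon" of the gate: a replay of one recorded run. It shows the
    # right frame for the input (1, 1), but supports no counterfactuals.
    recorded_frames = {(1, 1): 1}

    def cartoon_gate(a, b):
        return recorded_frames[(a, b)]   # KeyError on any input not recorded

    # and_gate(0, 1) == 0, while cartoon_gate(0, 1) fails: the recording
    # has nothing to say about inputs that were never part of the run.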

Computationalism assumes that consciousness is a generalizable
concept, but we don't know that is true. My view is that it is not
true, since we know that computation itself is not even generalizable
to all physical forms. You can't build a computer without any solid
materials. You can't build it out of uncontrollable living organisms.
There are physical constraints even on what can function as a simple
AND gate. It has no existence in a vacuum or a liquid or gas.

Just as basic logic functions are impossible under those ordinary
physically disorganized 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 1:42 pm, Stephen P. King stephe...@charter.net wrote:
 Hi,

      My $.02. I am reminded of the argument in "Matrix Philosophy" that if
 we cannot argue that our experiences are *not* simulations then we might
 as well bet that they are. While I have found that there are upper
 bounds on computation-based content via logical arguments such as
 David Deutsch's Cantgotu argument
 (http://en.wikipedia.org/wiki/The_Fabric_of_Reality) and Carlton Caves'
 research on computational resources (http://arxiv.org/abs/quant-ph/0304083),
 it seems to me that we have sufficient evidence to argue that
 if it is possible for a being to have 1p associated with it, then we
 might as well bet that they do. So I am betting that zombies do not
 exist. One simply cannot remain an agnostic on this issue.

 Onward!

 Stephen


I think the problem is that the zombie has the 1p of whatever is doing
the computation, not of the living cells and organs of a living
person. I think everything has a 1p experience, it's just that human
1p is a lot different from the 1p of our zoological, biological,
chemical, and physical subselves. A zombie talks the talk, but it
doesn't walk the walk. It's just a puppet which walks some other walk
which it has no awareness of.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Stephen P. King

On 1/16/2012 2:02 PM, Craig Weinberg wrote:

On Jan 16, 1:42 pm, Stephen P. King stephe...@charter.net wrote:

Hi,

  My $.02. I am reminded of the argument in "Matrix Philosophy" that if
we cannot argue that our experiences are *not* simulations then we might
as well bet that they are. While I have found that there are upper
bounds on computation-based content via logical arguments such as
David Deutsch's Cantgotu argument
(http://en.wikipedia.org/wiki/The_Fabric_of_Reality) and Carlton Caves'
research on computational resources (http://arxiv.org/abs/quant-ph/0304083),
it seems to me that we have sufficient evidence to argue that
if it is possible for a being to have 1p associated with it, then we
might as well bet that they do. So I am betting that zombies do not
exist. One simply cannot remain an agnostic on this issue.

Onward!

Stephen


I think the problem is that the zombie has the 1p of whatever is doing
the computation, not of the living cells and organs of a living
person. I think everything has a 1p experience, it's just that human
1p is a lot different from the 1p of our zoological, biological,
chemical, and physical subselves. A zombie talks the talk, but it
doesn't walk the walk. It's just a puppet which walks some other walk
which it has no awareness of.

Craig


Hi Craig,

     The 1p is something that can have differences in degree, not in kind; 
thus your argument is a bit off. Zombies simply do not exist.


Onward!

Stephen




Re: Question about PA and 1p

2012-01-16 Thread David Nyman
On 16 January 2012 18:08, Bruno Marchal marc...@ulb.ac.be wrote:

 I do not need an extra God or observer of arithmetical truth to interpret
 some number relations as computations, because the numbers, relative to
 each other, already do that task. From their view, to believe that we need
 some extra-interpreter would be like believing that if your own brain is
 not observed by someone, it would not be conscious.

I'm unclear from the above - and indeed from the rest of your comments
- whether you are defining interpretation in a purely 3p way, or
whether you are implicitly placing it in a 1p framework - e.g. where
you say above "From their view".  If you do indeed assume that numbers
can have such views, then I see why you would say that they interpret
themselves, because adopting the 1p view is already to invoke a kind
of emergence of number-epistemology.  But such an emergence is still
only a manner of speaking from OUR point of view, in that I can
rephrase what you say above thus: "From their view, to believe that
THEY need some extra-interpreter..." without taking such a point of
view in any literal sense.  Are you saying that consciousness somehow
elevates number-epistemology into strong emergence, such that their
point of view and self-interpretation become indistinguishable from my
own?

David


 On 16 Jan 2012, at 15:32, David Nyman wrote:

 On 16 January 2012 10:04, Bruno Marchal marc...@ulb.ac.be wrote:

 Actually you can define computation, even universal machine, by using only
 addition and multiplication. So universal machines exist in elementary
 arithmetic in the same sense as prime numbers do.


 That may be, but we were discussing interpretation.  As you say above:
 "YOU can define computation, even universal machine, by using only
 addition and multiplication" (my emphasis).


 Not just ME. A tiny part of arithmetic can do that too. All universal numbers
 can do it. No need of a first-person notion; all this can be shown in a 3p way,
 indeed in arithmetic, even without the induction axioms, so that we don't
 need a Löbian machine.
 The existence of the UD, for example, is a theorem of (Robinson) arithmetic.
 Now, those kinds of truths are rather long and tedious to show. This was shown
 mainly by Gödel in his 1931 paper (for rich Löbian theories). It is called
 the arithmetization of metamathematics. I will try to explain the gist of it
 below without being too technical.




 But this is surely, as you
 are wont to say, too quick.  Firstly, in what sense can numbers in
 simple arithmetical relation define THEMSELVES as computation, or
 indeed as anything other than what they simply are?


 Here you ask a more difficult question. Nevertheless it admits a positive
 answer.




 I think that the
 ascription of self-interpretation to a bare ontology is superficial;
 it conceals an implicit supplementary appeal to epistemology, and
 indeed to a self.


 But we can define a notion of 3-self in arithmetic. Then, to get the 1-self, we
 go to the meta-level and combine it with the notion of arithmetical truth.
 That notion is NOT definable in arithmetic, but that is a good thing,
 because it explains why the notion of the first person, and of
 consciousness, will not be definable by the machine.





 Hence it appears that some perspectival union of
 epistemology and ontology is a prerequisite of interpretation.


 OK. But the whole force of comp comes from the fact that you can define a
 big part of that epistemology using only the elementary ontology.

 Let us agree on what we mean by defining something in arithmetic (or in the
 arithmetical language).

 The arithmetical language is first-order (predicate) logic with
 equality (=), so that it has the usual logical connectives &, V, ->, ~ (and,
 or, implies, not) and the quantifiers E and A ("it exists" and "for all"),
 together with the special arithmetical symbols 0, s, + and *.

 To illustrate an arithmetical definition, let me give you some definitions
 of simple concepts.

 We can define the arithmetical relation x ≤ y (x is less than or equal
 to y).

 Indeed, x ≤ y if and only if
 Ez(x + z = y)

 We can define x < y (x is strictly less than y) by
 Ez((x + z) + s(0) = y)

 We can define (x divides y) by
 Ez(x*z = y)

 Now we can define (x is a prime number) by

  Az[(x ≠ 1) and ((z divides x) -> ((z = 1) or (z = x)))]

 which should be seen as a macro abbreviation of

 Az(~(x = s(0)) & (Ey(z*y = x) -> ((z = s(0)) V (z = x))))

 Now I tell you that we can define, exactly in that manner, the notions of
 universal number, computation, proof, etc.

 In particular, any proposition of the form phi_i(j) = k can be translated
 into arithmetic. A famous predicate due to Kleene is used to that effect. A
 universal number u can be defined by the relation
 AxAy(phi_u(<x,y>) = phi_x(y)), with <x,y> being a computable bijection from
 NxN to N.

 Just as metamathematics can be arithmetized, theoretical computer science can
 be arithmetized.

 The interpretation is not done by 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 2:22 pm, Stephen P. King stephe...@charter.net wrote:

 Hi Craig,

      The 1p is something that can have differences in degree, not in kind;
 thus your argument is a bit off. Zombies simply do not exist.

The degree of 1p is always qualitative, though; that's how it's
different from 3p. This text is a zombie of my thoughts and
intentions. You see my meaning in it, but it has no meaning by itself.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Stephen P. King

Hi Craig,

On that we agree.

Onward!

Stephen


On 1/16/2012 3:33 PM, Craig Weinberg wrote:

On Jan 16, 2:22 pm, Stephen P. King stephe...@charter.net wrote:


Hi Craig,

  The 1p is something that can have differences in degree, not in kind;
thus your argument is a bit off. Zombies simply do not exist.

The degree of 1p is always qualitative, though; that's how it's
different from 3p. This text is a zombie of my thoughts and
intentions. You see my meaning in it, but it has no meaning by itself.

Craig






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Jason Resch
On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
  Craig,
 
  Do you have an opinion regarding the possibility of Strong AI, and the
  other questions I posed in my earlier post?
 

 Sorry Jason, I didn't see your comment earlier.

 On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
  On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
   wrote:
 
 Thought I'd throw this out there. If computationalism argues that
 zombies can't exist,
 
I think the two ideas zombies are impossible and computationalism
 are
independent.  Where you might say they are related is that a
 disbelief in
zombies yields a strong argument for computationalism.
 
   I don't think that it's possible to say that any two ideas 'are'
   independent from each other.
 
  Okay.  Perhaps 'independent' was not an ideal term, but computationalism
 is
  at least not dependent on an argument against zombies, as far as I am
 aware.

 What computationalism does depend on, though, is the same view of
 consciousness that zombies would disqualify.

 
   All ideas can be related through semantic
   association, however distant. As far as your point though, of course I
   see the opposite relation - while admitting even the possibility of
   zombies suggests computationalism is founded on illusion., but a
   disbelief in zombies gives no more support for computationalism than
   it does for materialism or panpsychism.
 
  If one accepts that zombies are impossible, then to reject
 computationalism
  requires also rejecting the possibility of Strong AI (
 https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).

 What I'm saying is that if one accepts that zombies are impossible,
 then to accept computationalism requires accepting that *all* AI is
 strong already.


Strong AI is an AI capable of any task that a human is capable of.  I am
not aware of any AI that fits this definition.



 
 
 
 
 
 
 
 
 
 
 
 therefore anything that we cannot distinguish
 from a conscious person must be conscious, that also means that it
 is
 impossible to create something that acts like a person which is
 not a
 person. Zombies are not Turing emulable.
 
I think there is a subtle difference in meaning between it is
 impossible
to create something that acts like a person which is not a person
 and
saying Zombies are not Turing emulable.  It is important to
 remember
   that
the non-possibility of zombies doesn't imply a particular person or
 thing
cannot be emulated, rather it means there is a particular
 consequence of
certain Turing emulations which is unavoidable, namely the
consciousness/mind/person.
 
   That's true, in the sense that emulable can only refer to a specific
   natural and real process being emulated rather than a fictional one.
   You have a valid point that the word emulable isn't the best term, but
   it's a red herring since the point I was making is that it would not
   be possible to avoid creating sentience in any sufficiently
   sophisticated cartoon, sculpture, or graphic representation of a
   person. Call it emulation, simulation, synthesis, whatever, the result
   is the same.
 
  I think you and I have different mental models for what is entailed by
  emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
  projections, and so on, don't necessarily compute anything (or at least,
  what they might depict as being computed can have little or no relation
 to
  what is actually computed by said cartoon, sculpture, recording,
  projection...  For actual computation you need counterfactuals
 conditions.
  A cartoon depicting an AND gate is not required to behave as a genuine
 AND
  gate would, and flashing a few frames depicting what such an AND gate
 might
  do is not equivalent to the logical decision of an AND gate.

 I understand what you think I mean, but you're strawmanning my point.
 An AND gate is a generalizable concept. We know that. It's logic can
 be enacted in many (but not every) different physical forms. If we
 built the Lego AND mechanism seen here:
 http://goldfish.ikaruga.co.uk/andnor.html#


This page did not load for me.


 and attached each side to a an effector which plays a cartoon of a
 semiconductor AND gate, then you would have a cartoon which is
 simulates an AND gate. The cartoon would be two separate cartoons in
 reality, and the logic between them would be entirely inferred by the
 audience, but this apparatus could be interpreted by the audience as a
 functional simulation. The audience can jump to the conclusion that
 the cartoon is a semiconductor AND gate. This is all that Strong AI
 will ever be.

 Computationalism assumes that consciousness is a generalizable
 concept, but we don't know that is true. My view is that it is not
 true, since we know that computation itself is not even generalizable
 to all 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 10:26 pm, Jason Resch jasonre...@gmail.com wrote:
 On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg whatsons...@gmail.comwrote:

  On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
   Craig,

   Do you have an opinion regarding the possibility of Strong AI, and the
   other questions I posed in my earlier post?

  Sorry Jason, I didn't see your comment earlier.

  On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
   On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.com
  wrote:
wrote:

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist,

 I think the two ideas zombies are impossible and computationalism
  are
 independent.  Where you might say they are related is that a
  disbelief in
 zombies yields a strong argument for computationalism.

I don't think that it's possible to say that any two ideas 'are'
independent from each other.

   Okay.  Perhaps 'independent' was not an ideal term, but computationalism
  is
   at least not dependent on an argument against zombies, as far as I am
  aware.

  What computationalism does depend on, though, is the same view of
  consciousness that zombies would disqualify.

All ideas can be related through semantic
association, however distant. As far as your point though, of course I
see the opposite relation - while admitting even the possibility of
zombies suggests computationalism is founded on illusion., but a
disbelief in zombies gives no more support for computationalism than
it does for materialism or panpsychism.

   If one accepts that zombies are impossible, then to reject
  computationalism
   requires also rejecting the possibility of Strong AI (
 https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).

  What I'm saying is that if one accepts that zombies are impossible,
  then to accept computationalism requires accepting that *all* AI is
  strong already.

 Strong AI is an AI capable of any task that a human is capable of.  I am
 not aware of any AI that fits this definition.

What I'm saying though is that computationalism implies that whatever
task an AI does that a human can also do is the same in both cases. If an
AI can print the letters 'y-e-s', then it must be no different from a
person answering yes. What I'm saying is that this makes all AI strong,
just incomplete.




  therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that it
  is
  impossible to create something that acts like a person which is
  not a
  person. Zombies are not Turing emulable.

 I think there is a subtle difference in meaning between it is
  impossible
 to create something that acts like a person which is not a person
  and
 saying Zombies are not Turing emulable.  It is important to
  remember
that
 the non-possibility of zombies doesn't imply a particular person or
  thing
 cannot be emulated, rather it means there is a particular
  consequence of
 certain Turing emulations which is unavoidable, namely the
 consciousness/mind/person.

That's true, in the sense that emulable can only refer to a specific
natural and real process being emulated rather than a fictional one.
You have a valid point that the word emulable isn't the best term, but
it's a red herring since the point I was making is that it would not
be possible to avoid creating sentience in any sufficiently
sophisticated cartoon, sculpture, or graphic representation of a
person. Call it emulation, simulation, synthesis, whatever, the result
is the same.

   I think you and I have different mental models for what is entailed by
   emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
   projections, and so on, don't necessarily compute anything (or at least,
   what they might depict as being computed can have little or no relation
  to
   what is actually computed by said cartoon, sculpture, recording,
   projection...  For actual computation you need counterfactuals
  conditions.
   A cartoon depicting an AND gate is not required to behave as a genuine
  AND
   gate would, and flashing a few frames depicting what such an AND gate
  might
   do is not equivalent to the logical decision of an AND gate.

  I understand what you think I mean, but you're strawmanning my point.
  An AND gate is a generalizable concept. We know that. It's logic can
  be enacted in many (but not every) different physical forms. If we
  built the Lego AND mechanism seen here:
 http://goldfish.ikaruga.co.uk/andnor.html#

 This page did not load for me..

Weird. Can you see a pic from it? 
http://goldfish.ikaruga.co.uk/legopics/newand11.jpg


  and attached each side to a an effector which plays a cartoon of a
  semiconductor AND gate, then you would have a cartoon which is
  simulates an AND gate. The cartoon would be two separate cartoons in
  reality, and the 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Jason Resch
On Mon, Jan 16, 2012 at 10:29 PM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 16, 10:26 pm, Jason Resch jasonre...@gmail.com wrote:
  On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
 
   On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
Craig,
 
Do you have an opinion regarding the possibility of Strong AI, and
 the
other questions I posed in my earlier post?
 
   Sorry Jason, I didn't see your comment earlier.
 
   On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg 
 whatsons...@gmail.com
   wrote:
 wrote:
 
   Thought I'd throw this out there. If computationalism argues
 that
   zombies can't exist,
 
  I think the two ideas zombies are impossible and
 computationalism
   are
  independent.  Where you might say they are related is that a
   disbelief in
  zombies yields a strong argument for computationalism.
 
 I don't think that it's possible to say that any two ideas 'are'
 independent from each other.
 
Okay.  Perhaps 'independent' was not an ideal term, but
 computationalism
   is
at least not dependent on an argument against zombies, as far as I am
   aware.
 
   What computationalism does depend on, though, is the same view of
   consciousness that zombies would disqualify.
 
 All ideas can be related through semantic
 association, however distant. As far as your point though, of
 course I
 see the opposite relation - while admitting even the possibility of
 zombies suggests computationalism is founded on illusion., but a
 disbelief in zombies gives no more support for computationalism
 than
 it does for materialism or panpsychism.
 
If one accepts that zombies are impossible, then to reject
   computationalism
requires also rejecting the possibility of Strong AI (
  https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).
 
   What I'm saying is that if one accepts that zombies are impossible,
   then to accept computationalism requires accepting that *all* AI is
   strong already.
 
  Strong AI is an AI capable of any task that a human is capable of.  I am
  not aware of any AI that fits this definition.

 What I'm saying though is that computationalism implies that whatever
 task an AI does that a human can also do is the same in both cases. If an
 AI can print the letters 'y-e-s', then it must be no different from a
 person answering yes. What I'm saying is that this makes all AI strong,
 just incomplete.

 
 
 
   therefore anything that we cannot distinguish
   from a conscious person must be conscious, that also means
 that it
   is
   impossible to create something that acts like a person which is
   not a
   person. Zombies are not Turing emulable.
 
  I think there is a subtle difference in meaning between it is
   impossible
  to create something that acts like a person which is not a
 person
   and
  saying Zombies are not Turing emulable.  It is important to
   remember
 that
  the non-possibility of zombies doesn't imply a particular person
 or
   thing
  cannot be emulated, rather it means there is a particular
   consequence of
  certain Turing emulations which is unavoidable, namely the
  consciousness/mind/person.
 
 That's true, in the sense that emulable can only refer to a
 specific
 natural and real process being emulated rather than a fictional
 one.
 You have a valid point that the word emulable isn't the best term,
 but
 it's a red herring since the point I was making is that it would
 not
 be possible to avoid creating sentience in any sufficiently
 sophisticated cartoon, sculpture, or graphic representation of a
 person. Call it emulation, simulation, synthesis, whatever, the
 result
 is the same.
 
I think you and I have different mental models for what is entailed
 by
emulation, simulation, synthesis.  Cartoons, sculptures,
 recordings,
projections, and so on, don't necessarily compute anything (or at
 least,
what they might depict as being computed can have little or no
 relation
   to
what is actually computed by said cartoon, sculpture, recording,
projection...  For actual computation you need counterfactuals
   conditions.
A cartoon depicting an AND gate is not required to behave as a
 genuine
   AND
gate would, and flashing a few frames depicting what such an AND gate
   might
do is not equivalent to the logical decision of an AND gate.
 
   I understand what you think I mean, but you're strawmanning my point.
   An AND gate is a generalizable concept. We know that. It's logic can
   be enacted in many (but not every) different physical forms. If we
   built the Lego AND mechanism seen here:
  http://goldfish.ikaruga.co.uk/andnor.html#
 
  This page did not load for me..

 Weird. Can you see a pic from it?
 http://goldfish.ikaruga.co.uk/legopics/newand11.jpg


Weird.  I was