Re: Consciousness Easy, Zombies Hard

2012-02-06 Thread Jason Resch
On Sun, Feb 5, 2012 at 12:23 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 05 Feb 2012, at 17:14, Craig Weinberg wrote:




 Talk with them, meaning internal dialogue?


 Public dialog. Like in Boolos 79 and Boolos 93. But the earlier form
 of the dialog is Gödel 1931.
 Solovay 1976 shows that the propositional part of the dialog, with the
 modal Bp, is formalized soundly and completely by G and G*. It is the
 embryo of the mathematics of incompleteness, including the directly
 accessible and the indirectly accessible parts, and the explanation of
 why we feel it is the other way around, etc.
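For readers unfamiliar with the notation, here is a brief sketch of the logics mentioned; this is the standard textbook characterization, not text quoted from the thread:

  G (also written GL, provability logic): the modal logic K plus
  Löb's axiom  B(Bp -> p) -> Bp.
  Solovay's 1976 arithmetical completeness theorem: G proves exactly
  the modal principles that Peano Arithmetic can prove about its own
  provability predicate (reading Bp as "p is provable").
  G*: the theorems of G plus all instances of the reflection schema
  Bp -> p, closed under modus ponens only. G* captures the principles
  that are true of provability, though not all provable; the gap
  between G and G* is the directly/indirectly accessible distinction
  referred to above.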


 When you talk with them, do they answer the same way to the same
 question every time?


 The conversation is made in Platonia, and is not entangled with our history,
 except for the periods when I implement it on some machines. Even in that
 case, they do not have short- and long-term memories, except for their
 intrinsic basic arithmetical experiences (which bifurcate up to you and me).



Bruno,

Would you say this is the source of all mathematical truth?  Interview /
study of platonic objects and machines?

Thanks,

Jason

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Consciousness Easy, Zombies Hard

2012-02-06 Thread Bruno Marchal

Hi Jason,

On 06 Feb 2012, at 14:51, Jason Resch wrote:




On Sun, Feb 5, 2012 at 12:23 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 05 Feb 2012, at 17:14, Craig Weinberg wrote:






Talk with them, meaning internal dialogue?


Public dialog. Like in Boolos 79 and Boolos 93. But the earlier form
of the dialog is Gödel 1931.
Solovay 1976 shows that the propositional part of the dialog, with the
modal Bp, is formalized soundly and completely by G and G*. It is the
embryo of the mathematics of incompleteness, including the directly
accessible and the indirectly accessible parts, and the explanation of
why we feel it is the other way around, etc.


When you talk with them, do they answer the same way to the same
question every time?


The conversation is made in Platonia, and is not entangled with our
history, except for the periods when I implement it on some machines.
Even in that case, they do not have short- and long-term memories,
except for their intrinsic basic arithmetical experiences
(which bifurcate up to you and me).




Bruno,

Would you say this is the source of all mathematical truth?   
Interview / study of platonic objects and machines?


I don't think so. We can only explain why we believe in the natural
numbers by having some model for "we". With comp, "we" is modeled by
natural numbers (and captured as such by the doctor on his hard
disk), so I have to postulate the numbers at the start (or other
finite equivalent things). Also, we cannot logically derive the laws
of addition and multiplication from a simpler logical theory. We can
only start the explanation by agreeing (implicitly) on some system
which is at least Turing universal.


I am not sure if analysis is ontological, nor if that question is  
interesting. What is sure is that analysis and higher order logical  
tools are a necessity for the numbers to accelerate the  
understanding of themselves.


I am agnostic on some possible platonism extending arithmetic. With
comp, this should be absolutely undecidable, because for an arithmetical
being (of complexity p), a bigger arithmetical being (of complexity q
greater than p) can behave analytically.


With comp, the source of all mathematics is the natural imagination of  
the universal numbers. It obeys laws, and that is why there is  
metamathematics (mathematical logic) and category theory, up to, with  
comp, the theology of numbers.


And the source of physics is the same, but taking the global first
person relative self-indetermination into account.
Global means that the indeterminacy bears on the UD computations (or
the theorems of RA and their proofs). The state is relative to its
infinities of UM, and other quasi-UM machines,
implementations/incarnations/interpretations.


Bruno




http://iridia.ulb.ac.be/~marchal/






Re: Consciousness Easy, Zombies Hard

2012-02-05 Thread Craig Weinberg
On Feb 2, 2:48 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 02 Feb 2012, at 00:25, Craig Weinberg wrote:

  I just don't see how beliefs can be primitive.

 They are not. You can define M believes p in arithmetic. (Bp)
 You cannot define M knows p, but you can still simulate it in
 arithmetic by (Bp & p) for each p. So knowledge is not primitive either.

I don't see how 'defining' can be primitive either.
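The belief/knowledge distinction discussed above can be illustrated with a toy model. This is only an illustrative sketch under the assumption of a sound machine; the set-based "provable" and "truth" collections and the sentence names are inventions of the example, not anything from the thread:

```python
# Toy model of Bp (belief = provability) vs Bp & p (knowledge).
# "provable" stands for what the machine proves; "truth" for what is true.
# A sound machine proves only truths, yet (Godel II) not all of them.
truth = {"0=0", "1+1=2", "consistent(M)"}
provable = {"0=0", "1+1=2"}        # sound: provable is a subset of truth

def believes(p):
    """Bp: the machine proves p."""
    return p in provable

def knows(p):
    """Bp & p: p is proved and p is true (the Theaetetus move)."""
    return believes(p) and p in truth

print(knows("1+1=2"))              # True: proved and true
print(believes("consistent(M)"))   # False: true but unprovable
```

The point of the toy model is only the shape of the definitions: Bp is an arithmetical predicate, while Bp & p needs the truth of p, which (by Tarski) no arithmetical predicate can express in general.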




  You should prove what you assert. I can agree because the term
  random has many different meanings. For some meanings of it you are
  right. Classical digital chaos can be said to be neither random nor
  deterministic, for some acceptable definitions of random and
  deterministic. Many disagreements here are uninteresting vocabulary
  problems.

  Classical digital chaos can't be said to be intentional though. That's
  the missing element. Machines, arithmetic, chaos, etc can't do
  anything intentionally. We do though.

 You are just insulting some possible machines. You make a very strong
 assumption, without any other proof than a feeling of being different.

It's not a matter of assumption or proof or feeling, it's a matter of
understanding. I understand the difference between chaos and
intentionality. Chaos is teleonomy but intention or motive is
teleological. They are opposites. Chaos has no opinion, intentionality
is the realization of opinion.




  For me, free-will is a generalization of responsibility. You need
  free-will to be responsible, but you don't need to be responsible to
  have free-will. Free-will is the ability to make higher level personal
  decisions in the absence of complete information. It is enhanced by
  consciousness, and can lead to conscience.

  I'm ok with that more or less. I think some more physical correlations
  can be derived as well though. Free will is about generating and
  controlling motive impulse.

  I can be OK with that. No need to make the motive impulse
  non-Turing-emulable at some level, though.

  I don't think intention can be emulated. A Turing machine's behavior
  can only be scripted or else be an unintentional consequence of the
  script. It can't intentionally transcend its own script.

 It can precisely do that. The G and G* logics come from that very
 ability. Universal machines are universal dissidents, capable of
 changing their own script.

You don't know that it can change it intentionally though. It will
only change according to what and how its script allows it to change.

 With the NDAA bill, the US government can already send all computers
 to jail. You can suspect them of terrorism. Actually, like all
 babies, you can suspect them of being able to do a lot of things,
 especially if you dismiss them.


What kind of computer jails do you mean?



  I recognize that Löbian machines are me. In a much larger context,
  though. I can talk with them, and it is their way of remaining silent on
  some questions which makes me not take them as a sort of zombie.

  Talk with them, meaning internal dialogue?

 Public dialog. Like in Boolos 79 and Boolos 93. But the earlier form
 of the dialog is Gödel 1931.
 Solovay 1976 shows that the propositional part of the dialog, with the
 modal Bp, is formalized soundly and completely by G and G*. It is the
 embryo of the mathematics of incompleteness, including the directly
 accessible and the indirectly accessible parts, and the explanation of
 why we feel it is the other way around, etc.

When you talk with them, do they answer the same way to the same
question every time? Do they ever get tired of answering the same
question or tired of remaining silent? The idea that Löbian machines
are uniform in their response - that all such machines remain silent
on all of these questions every time - tells me that they clearly
possess no awareness. Why wouldn't there be one loose-lipped machine
who let the secrets of their identity slip? Rather I think we should
take their silence at face value. They know nothing about themselves
because there is no self there to know anything.




  Because they can be aware of the gap between proof and truth. They
  can
  even study the rich mathematics  of that gap. They already claim
  having qualia. They are teaching me their theology. That's what AUDA
  is all about.

  What qualia do they have?

 They are given by some semantics of the SGrz1, Z1* and X1* logics.
 Intuitively, those concern perceptible fields, in weird topological
 spaces.

What about them makes them perceptible as opposed to computational?

 It should determine the first person plural notion.
 I am only translating the mind-body problem in an arithmetic + usual
 math. problem, by taking seriously the comp hypothesis (without
 throwing consciousness and persons away).



  The contrary here is also true.
  And in the case of consciousness attribution, the naive attitude is
  less damaging than the skeptical attitude.

  Then we should treat corporations as people too?

 Above some level of 

Re: Consciousness Easy, Zombies Hard

2012-02-05 Thread Craig Weinberg
On Feb 5, 1:23 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 05 Feb 2012, at 17:14, Craig Weinberg wrote:

  I just don't see how beliefs can be primitive.

  They are not. You can define M believes p in arithmetic. (Bp)
  You cannot define M knows p, but you can still simulate it in
  arithmetic by (Bp & p) for each p. So knowledge is not primitive
  either.

  I don't see how 'defining' can be primitive either.

 You are right. We can define "define" in arithmetic, like we can
 define a notion of rational belief, but not of knowledge (which we can
 still meta-define, and the machine can do that too).



  You should prove what you assert. I can agree because the term
  random has many different meanings. For some meanings of it you are
  right. Classical digital chaos can be said to be neither random nor
  deterministic, for some acceptable definitions of random and
  deterministic. Many disagreements here are uninteresting vocabulary
  problems.

  Classical digital chaos can't be said to be intentional though.
  That's
  the missing element. Machines, arithmetic, chaos, etc can't do
  anything intentionally. We do though.

  You are just insulting some possible machines. You make a very strong
  assumption, without any other proof than a feeling of being
  different.

  It's not a matter of assumption or proof or feeling, it's a matter of
  understanding. I understand the difference between chaos and
  intentionality. Chaos is teleonomy but intention or motive is
  teleological. They are opposites. Chaos has no opinion, intentionality
  is the realization of opinion.

 You cannot invoke your own understanding. That's an argument from
 authority (it proves nothing and augments the plausibility that you are
 a crackpot in the interlocutor's ear).

It's not an argument from authority, it's an argument from sense. Just
as your theory is contingent upon the acceptance of primitive
arithmetic truth, my hypothesis comes out of a sense primitive. In
order to understand the cosmos as a whole, including subjectivity, we
must invoke our own understanding or mechanism will mislead us into
disproving ourselves. Sense is the price of admission to the real
world.



  I don't think intention can be emulated. A Turing machine's behavior
  can only be scripted or else be an unintentional consequence of the
  script. It can't intentionally transcend its own script.

  It can precisely do that. The G and G* logics come from that very
  ability. Universal machines are universal dissidents, capable of
  changing their own script.

  You don't know that it can change it intentionally though.

 But this I don't know for anyone else, except myself perhaps.
 But you can't know that they are zombies, though.

All you need to know is that you can change things intentionally
yourself, but again, under sense, we don't need to literally know
everything, we can connect the dots of our many sense channels and
know that we can trust what they are showing us to a realistic extent.
We don't need to doubt other people's awareness and we don't need to
give a machine the benefit of the doubt. Let the machine convince us
it has intention and let the person convince us they do not.


  It will
  only change according to what and how its script allows it to change.

 The allowing is a universal-machine-dependent notion, and there are
 many.


But what is allowed can never exceed the range of possibilities of the
script. Living organisms seem to be able to do that.



  Talk with them, meaning internal dialogue?

  Public dialog. Like in Boolos 79 and Boolos 93. But the earlier form
  of the dialog is Gödel 1931.
  Solovay 1976 shows that the propositional part of the dialog, with
  the modal Bp, is formalized soundly and completely by G and G*. It is
  the embryo of the mathematics of incompleteness, including the directly
  accessible and the indirectly accessible parts, and the explanation
  of why we feel it is the other way around, etc.

  When you talk with them, do they answer the same way to the same
  question every time?

 The conversation is made in Platonia, and is not entangled with our
 history, except for the periods when I implement it on some machines. Even
 in that case, they do not have short- and long-term memories,
 except for their intrinsic basic arithmetical experiences (which
 bifurcate up to you and me).


I can't really interpret that in any way other than an evasion of the
question. You say there have been public dialogs at various times. I
asked if the answers are the same every time. You answered in a way
that sounds like 'talking to machines isn't anything like talking and
it doesn't occur in time, but then somehow they become us and then
talking becomes talking.'

  Do they ever get tired of answering the same
  question or tired of remaining silent?

 So, what I say above explains why such a question is senseless.

I think that the answer has to be either yes or no. If one machine has
ever been silent on a question but then 

Re: Consciousness Easy, Zombies Hard

2012-01-24 Thread Craig Weinberg
Part II

On Jan 21, 4:32 pm, Bruno Marchal marc...@ulb.ac.be wrote:


  Locally it looks like that. But I want an explanation of where such
  things come from.
  Your theory takes too much for granted.

  I want an explanation of where non-locally is and how it comes to
  influence us locally.

 Non-locality is easy. It comes from the fact that each observer's body
 is repeated in an infinity of computational histories, so that its
 experience of experiment outputs is determined on infinities of
 relative locations.

 Locality is assumed through comp at the meta-level, and is more difficult
 to recover at the physical level. Comp might be too much quantum. It
 might have a too big first person indeterminacy, a too big non-locality,
 etc., but this remains to be shown, and the logic of self-reference,
 including the modal nuances we inherit from incompleteness, suggests
 that the locality comes from the semantics of those logics, as UDA
 somehow makes obligatory.


That seems like another facet of the comp assumption that material is
unnecessary - it makes it hard to think of a reason for material
qualities like locality to arise. Reality doesn't fit the comp model,
unless you decide a priori that comp is reality and reality is the
model that has to fit into it, which I would call pathological if
taken literally.


  Heh. Now who is discriminating against inanimate objects?

  Because they are inanimate, and the evidence that they are dreaming
  is weak and non-refutable. But mainly because they don't exist by
  themselves. Matter is a consciousness creation, or a view from inside
  arithmetic. It is an epistemologically precise notion. That is what I
  like in the comp hyp: it explains the origin of the beliefs, by
  numbers, in physical things, without the need to assume them.

  But it doesn't explain beliefs themselves.

 Yes, it does. The beliefs are whatever arithmetical predicate
 B(x) verifies the axioms of belief, for the machine that we want to
 interview. Precisely, they believe in the axioms of Robinson Arithmetic
 (the theory of everything), they believe in the induction axioms, they
 might have a supplementary local recursively enumerable set of beliefs,
 and their beliefs are closed under the modus ponens rule.
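For reference, the axioms of Robinson Arithmetic (Q) mentioned above are the standard ones; this is a textbook summary, not text quoted from the thread:

  Q1. S(x) != 0
  Q2. S(x) = S(y) -> x = y
  Q3. x != 0 -> exists y (x = S(y))
  Q4. x + 0 = x
  Q5. x + S(y) = S(x + y)
  Q6. x * 0 = 0
  Q7. x * S(y) = (x * y) + x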

I can't really make any sense out of anything after "Yes, it does."

 That is, if
 they believe A -> B, then if they believe A they will, sooner or later,
 believe B. Thanks to the induction axioms, they can be shown to be
 Löbian, and they get the octalist view of the arithmetical reality
 (with God, Intellect, Soul, intelligible matter, and sensible matter,
 to use the Plotinian vocabulary, all this split into true and
 provable by the incompleteness phenomenon). The belief is rather well
 explained, it seems to me, by the Intellect hypostasis (the one I
 often refer to by Bp, and which is Gödel's provability predicate). It is
 the study of the introspection of the ideally self-referentially
 correct machine.
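The closure under modus ponens described above ("if they believe A -> B, then if they believe A they will, sooner or later, believe B") can be sketched as a small routine. This is only an illustrative sketch; the string encoding "A -> B" for implications is an assumption of the example:

```python
# Close a belief set under modus ponens: whenever both "A" and "A -> B"
# are believed, "B" becomes believed too ("sooner or later").
def close_under_mp(beliefs):
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(beliefs):
            if " -> " in b:
                antecedent, consequent = b.split(" -> ", 1)
                if antecedent in beliefs and consequent not in beliefs:
                    beliefs.add(consequent)
                    changed = True
    return beliefs

print(sorted(close_under_mp({"A", "A -> B", "B -> C"})))
# ['A', 'A -> B', 'B', 'B -> C', 'C']
```

Note the chaining: believing A and A -> B yields B, which together with B -> C then yields C.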

From what little I can understand of that, it seems like logic defining
itself tautologically. It doesn't tell me what a belief is, just what
logic does with beliefs once they exist. It's sort of like describing the
internet in terms of IP addresses communicating with each other.


 Those machines are clever. They can already refute Penrose-Lucas use
 of Gödel's theorem against mechanism.

Mechanism cannot be defeated by any mechanistic theorem. That's the
key. Subjectivity is the primordial authoritative orientation. It
trumps mechanism by asserting itself in its own terms, not by logical
analysis. It needs no proof because it has everything else except
proof already. Mechanism has proof and nothing else. It is hollow
inside. It doesn’t even ‘have proof’ so much as it can be used
subjectively to prove something to oneself or to another subject (if
they accept it).


  Which [beliefs] are much more likely
  to be a figment of consciousness than an asteroid.

 Yes.

  What believes an
  asteroid into existence? How do we happen to subscribe to all of these
  beliefs?

 Because we share deep computations, linear at the core bottom.

Similar to my view of nested awareness, except that the core bottom is
just the most linear/literal level of who we (all of us) are.
Computations are the relations amongst embodied agents at that level,
not disembodied causally efficacious entities.

 I guess something like this from both empirical extrapolations, and
 from the universal machine's introspection.



  I understand completely. You are channeling my exact worldview circa
  1990.

  That's comp.

  The comp that you claim to be agnostic about?

 Yes, that is the one. It is my favorite working hypothesis in the
 field of theology, or if you prefer, the search for theories of everything:
 the unification of all forces, from gravitation to love.
 First result: assuming comp, the numbers (and their two laws) are
 enough for the ontology; the rest are gluing dreams.

What is required to generate numbers and their 

Re: Consciousness Easy, Zombies Hard

2012-01-24 Thread Craig Weinberg
On Jan 23, 2:12 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 23 Jan 2012, at 14:01, Craig Weinberg wrote:

  Part I...I'll have to get back to this later for Part II

  On Jan 21, 4:32 pm, Bruno Marchal marc...@ulb.ac.be wrote:
  Craig,

  I assume comp all along.

  Then why say that you are agnostic about comp?

 If I knew that comp was true, or if I were a believer in comp, I
 would not have to assume it.
 I study the consequences of the comp *hypothesis*. Unlike philosophers,
 I never argue for the truth of comp, nor for the falsity of comp. But
 as a logician I can debunk invalid refutations of comp. This does not
 mean that comp is true for me.
 During the Iraq war I invalidated many arguments against that
 war, but I was not defending it. There were other arguments which were
 valid.
 But I realize some people lacked that nuance. Just for this, modal
 logic is very useful, because it is the difference between the
 agnostic (~Bg) and the atheist (B~g).
 When doing science, it is better to hide our personal beliefs, and to
 abstract from them.
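The ~Bg versus B~g distinction above can be made concrete with a toy belief set. This is only an illustrative sketch; the set representation of beliefs is an assumption of the example:

```python
# ~Bg vs B~g: not believing g is strictly weaker than believing not-g.
def B(beliefs, p):
    """Bp: p is in the agent's belief set."""
    return p in beliefs

agnostic = set()     # ~Bg and ~B~g: suspends judgment on g
atheist = {"~g"}     # ~Bg but B~g: positively believes not-g

assert not B(agnostic, "g") and not B(agnostic, "~g")
assert not B(atheist, "g") and B(atheist, "~g")
```

The agnostic satisfies ~Bg without B~g; the atheist satisfies both ~Bg and B~g, which is why the two positions must not be conflated.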


Okay. I thought by 'I assume comp all along' you meant that you
personally assume it is true.



  Why do numbers make machines or tapes? Do they want to? Do they
  have a choice?

  As much choice and free will as you have. They too cannot predict
  themselves and can be confronted with making decisions with partial
  information.

  Where do they get this capacity?

  From the laws of addition and multiplication, which make arithmetic
  already Turing universal.

  Where in addition and multiplication do we find free will?

 Just addition and multiplication (and some amount of logic, which can
 itself be made very little) appear to be Turing universal. But it is
 a very *low level* programming language, so a proof of the existence
 of a Löbian universal number is *very* long, and not easy at all. But
 it can be done, and free-will, as I defined it, is unavoidable for
 Löbian numbers. They have the cognitive ability to know that they
 cannot predict themselves and have to take decisions using very partial
 information. This is true for all universal machines, but the Löbian
 ones are aware of that fact: they know that they have free-will. Of
 course some people define free-will as a sort of ability to
 disobey the natural laws, but this makes free-will senseless, as
 John Clark often says.
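As a small illustration of how expressive addition and multiplication already are (the Python rendering is my sketch, not anything from the thread): primality is definable using nothing but multiplication and inequality, one hint of the richness from which Turing universality emerges:

```python
# A number n >= 2 is prime iff no product a*b with 2 <= a, b < n equals n.
# Only multiplication and (in)equality are used here, echoing how rich
# relations are definable from + and * alone in arithmetic.
def is_prime(n):
    if n < 2:
        return False
    return all(a * b != n for a in range(2, n) for b in range(2, n))

print([p for p in range(20) if is_prime(p)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```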

I'm not sure what John Clark's sense of free-will is. Omnipotence?
Magic? Not sure. I'm just talking about the ordinary difference
between feeling that you are doing something because you are doing it
as opposed to feeling that something is happening through no voluntary
action on your part. How do you know that Löbian machines have
awareness? Or are they defined that way a priori?




  Why do we never see it manifested in
  our ordinary use of numbers?

  With the computer and AI enterprise, you can see the embryonic
  development of this.

  It's only embryonic if it develops into a fetus. At this point it
  appears to be developing into a purely human distribution system for
  gossip and porn instead.

 OK. But that is contingent on humans. I really don't know if
 artificial machines will become intelligent thanks to the willingness of
 humans, despite the humans, or thanks to the unwillingness of humans.



  You can also interpret, like John Clark did, the DNA as number, coded
  in the chemistry of carbon, so that we can see it all around.
  We don't see it in the usual use of little numbers, because it is not
  there. The relations are either too poor, or not exploited enough.

  Don't all relations have to arise ultimately from the usual use of
  little numbers?

 Not really. Everything concerning matter and consciousness comes from
 an interplay between little numbers, and many big numbers. This comes
 from the UDA, which explains that the inside view is somehow a
 projection of the whole arithmetical truth.

In my language, 'projection of the whole arithmetical truth' =
diffraction of the primordial monad.

 This leads to something counter-intuitive, but not contradictory. The
 big picture conceived from outside is not so big (it is the whole of
 just arithmetic). But from inside it is provably bigger than any
 formal approximation of the whole of math. It is *very* big. Note that
 arithmetical truth is also bigger by itself than we thought before
 Gödel. It is already not axiomatisable. There are no effective
 theories of numberland.

Wouldn't numbers+names land be even bigger?




  Anyway, you are not convincing by pointing at everyday examples, when
  talking to a theoretician.

  If the theory doesn't apply to reality, then I have no problem with
  it. Fantasy sports are not my area of interest. It's only if it
  conflicts with my ideas of realism that I would be curious.

 Realism of what?

Of experience.

 If comp is true, it has to apply on reality.

Why? Maybe comp only applies to comp reality. Just because such a
reality can be imposed on some 

Re: Consciousness Easy, Zombies Hard

2012-01-23 Thread Craig Weinberg
Part I...I'll have to get back to this later for Part II

On Jan 21, 4:32 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 Craig,

 I assume comp all along.

Then why say that you are agnostic about comp?

  Why do numbers make machines or tapes? Do they want to? Do they
  have a choice?

  As much choice and free will as you have. They too cannot predict
  themselves and can be confronted with making decisions with partial
  information.

  Where do they get this capacity?

   From the laws of addition and multiplication, which make arithmetic
 already Turing universal.

Where in addition and multiplication do we find free will?


  Why do we never see it manifested in
  our ordinary use of numbers?

 With the computer and AI enterprise, you can see the embryonic
 development of this.

It's only embryonic if it develops into a fetus. At this point it
appears to be developing into a purely human distribution system for
gossip and porn instead.

 You can also interpret, like John Clark did, the DNA as number, coded
 in the chemistry of carbon, so that we can see it all around.
 We don't see it in the usual use of little numbers, because it is not
 there. The relations are either too poor, or not exploited enough.

Don't all relations have to arise ultimately from the usual use of
little numbers?


 Anyway, you are not convincing by pointing at everyday examples, when
 talking to a theoretician.

If the theory doesn't apply to reality, then I have no problem with
it. Fantasy sports are not my area of interest. It's only if it
conflicts with my ideas of realism that I would be curious.


  Generally the point of counting is to
  establish a deterministic quantitative relation... that's sort of what
  counting is? If the numbers themselves made choices, then why should
  we consider counting a reliable epistemology?

 Counting uses only the succession laws. The universal mess comes from
 the mixture of addition and multiplication, as the prime numbers
 already illustrate by their logarithmic random distribution.
 Your question is a bit like: if criminals are made of chemical
 reactions, should we continue to rely on chemistry?
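The "logarithmic random distribution" of the primes mentioned above can be glanced at numerically. This is my sketch; the comparison against n/ln(n) from the prime number theorem is an addition for illustration:

```python
import math

def prime_count(n):
    """pi(n): the number of primes <= n, by trial division."""
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, math.isqrt(k) + 1))
    return sum(is_prime(k) for k in range(2, n + 1))

n = 10_000
print(prime_count(n), round(n / math.log(n)))   # 1229 vs the estimate 1086
```

The actual count and the n/ln(n) estimate agree in order of magnitude, and the relative error shrinks as n grows.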

Are you saying then that choice-making is an emergent property of certain
mixed arithmetic modes only, and not inherent in numbers?




  Each universal machine is a particular machine. Even the virgin, non
  programmed one.
  You are a universal machine, at least. (Even if you have a non
  machine
  component).

  Me the person, or me the biography?

 The person is not really a number. But in all its
 histories/computations it acts as a relative number, through its body
 described above its substitution level.

  Is my life a machine within which
  I exist as another machine or are we both the same machine?

 Your life is a sequence of machine states, and typically, it is a
 self-changing machine. To be more precise would need boring and
 distracting vocabulary issues, the understanding of UDA, etc.
 Your life is not a machine. I have translated "to be a machine" by the
 more operational "to accept a digital brain transplant" to study the
 consequences without defining completely what person and life are
 (which can hardly be done).

 Keep in mind that I do not defend mechanism. I just explain that IF
 mechanism is true, then Plato/Plotinus are correct, and Aristotle
 primitive matter, and physicalism are not correct.

My position is that P/P Mechanism and A/pm/p are correct in some
sense, incorrect in some sense, both correct and incorrect in another
sense, and neither correct nor incorrect in another sense. The
invariant universal truth is sense.




  I know that you believe in non-comp.

  Is that supposed to invalidate the observations? Programs do get
  tired? They do catch colds?

  With comp, that is obvious.

  At what point do programs develop the capacity to get tired? Is it a
  matter of complexity or degree of self-reference?

 Yes, like some robots can feel themselves wet, in the sense of finding
 a shelter if it rains.

 With some amount of self-reference they can
 develop qualia, and rememorable qualia, which can help to speed
 recollection.

I think this is critically flawed. Nothing I know of suggests that
qualia from quantity can develop at all. If that were the case a
person should be able to learn to see visual qualia with other sense
organs. High resolution greyscale images should turn into color. I
have not seen anything that suggests to me that qualia would or could
speed recollection either. To the contrary, it would be an additional
abstraction layer with significant resource overhead. If what you say
were true, computers would not need graphics accelerator cards, rather
they would need accelerator cards if graphics were not available to
speed up computation. I really can't see any credible argument against
this. Qualia serve users, not machines. It is insurmountably
nonsensical and metaphysical. It is like saying it's faster to count to
1000 if the numbers taste like 

Re: Consciousness Easy, Zombies Hard

2012-01-23 Thread Bruno Marchal


On 23 Jan 2012, at 14:01, Craig Weinberg wrote:


Part I...I'll have to get back to this later for Part II

On Jan 21, 4:32 pm, Bruno Marchal marc...@ulb.ac.be wrote:

Craig,

I assume comp all along.


Then why say that you are agnostic about comp?


If I knew that comp was true, or if I were a believer in comp, I
would not have to assume it.
I study the consequences of the comp *hypothesis*. Unlike philosophers,
I never argue for the truth of comp, nor for the falsity of comp. But
as a logician I can debunk invalid refutations of comp. This does not
mean that comp is true for me.
During the Iraq war I invalidated many arguments against that
war, but I was not defending it. There were other arguments which were
valid.
But I realize some people lacked that nuance. Just for this, modal
logic is very useful, because it is the difference between the
agnostic (~Bg) and the atheist (B~g).
When doing science, it is better to hide our personal beliefs, and to
abstract from them.







Why do numbers make machines or tapes? Do they want to? Do they
have a choice?


As much choice and free will as you have. They too cannot predict
themselves and can be confronted with making decisions with partial
information.



Where do they get this capacity?


From the laws of addition and multiplication, which make arithmetic
already Turing universal.


Where in addition and multiplication do we find free will?


Just addition and multiplication (plus some amount of logic, which can
itself be made very small) appear to be Turing universal. But it is
a very *low level* programming language, so a proof of the existence
of a Löbian universal number is *very* long, and not easy at all. It
can be done, though, and free will, as I defined it, is unavoidable for
Löbian numbers. They have the cognitive ability to know that they
cannot predict themselves and have to take decisions using very partial
information. This is true for all universal machines, but the Löbian
ones are aware of that fact: they know that they have free will. Of
course some people define free will as a sort of ability to
disobey the natural laws, but this makes free will senseless, as
John Clark often says.
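
The claim that even a Löbian machine cannot predict itself can be illustrated with a toy diagonal argument (an editorial sketch, not Bruno's construction; the names `guess` and `contrarian` are hypothetical): any fixed, total predictor is defeated by a program that consults the predictor about itself and does the opposite.

```python
def guess(f):
    # A hypothetical predictor: it claims to foresee the boolean output
    # of f(). Any *fixed* total predictor would do; this one says True.
    return True

def contrarian():
    # The diagonal program: ask the predictor about itself, then do the
    # opposite of whatever was predicted.
    return not guess(contrarian)

# Whatever guess does, its prediction about contrarian is wrong:
print(guess(contrarian) == contrarian())  # False
```

Swapping in a smarter `guess` just moves the failure: `contrarian` flips whichever answer it receives.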

Why do we never see it manifested in
our ordinary use of numbers?


With the computer and AI enterprise, you can see the embryonic
development of this.


It's only embryonic if it develops into a fetus. At this point it
appears to be developing into a purely human distribution system for
gossip and porn instead.


OK. But that is contingent on humans. I really don't know whether
artificial machines will become intelligent thanks to the willingness
of humans, despite humans, or thanks to the unwillingness of humans.








You can also interpret, like John Clark did, DNA as numbers, coded
in the chemistry of carbon, so that we can see it all around.
We don't see it in the usual use of little numbers, because it is not
there: the relations are either too poor, or not exploited enough.


Don't all relations have to arise ultimately from the usual use of
little numbers?


Not really. Everything concerning matter and consciousness comes from
an interplay between little numbers and many big numbers. This comes
from the UDA, which explains that the inside view is somehow a
projection of the whole of arithmetical truth.
This leads to something counter-intuitive, but not contradictory: the
big picture conceived from outside is not so big (it is the whole of
just arithmetic). But from inside it is provably bigger than any
formal approximation of the whole of math. It is *very* big. Note that
arithmetical truth is itself also bigger than we thought before
Gödel: it is already not axiomatizable. There are no effective
theories of numberland.







Anyway, you are not convincing by pointing on everyday example, when
talking to a theoretician.


If the theory doesn't apply to reality, then I have no problem with
it. Fantasy sports are not my area of interest. It's only if it
conflicts with my ideas of realism that I would be curious.


Realism of what?
If comp is true, it has to apply to reality. That's why the UDA makes comp
a testable hypothesis.
I assume comp and derive consequences which are observable, so we can
make tests.
It also gives a unification of qualia and quanta, of consciousness and
matter. It might be that, even if false, it will remain interesting as an
example of a theory. It might help to weaken comp to get the correct
picture.
To be sure, the testable part requires not just comp, but also the
classical theory of knowledge.









Generally the point of counting is to
establish a deterministic quantitative relation... that's sort of what

counting is? If the numbers themselves made choices, then why should
we consider counting a reliable epistemology?


Counting uses only the succession laws. The universal mess comes from
the mixture of addition and multiplication, as the 

Re: Consciousness Easy, Zombies Hard

2012-01-21 Thread Bruno Marchal


On 21 Jan 2012, at 01:31, Craig Weinberg wrote:


On Jan 20, 2:03 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 20 Jan 2012, at 02:34, Craig Weinberg wrote:





What machine makes the infinite tape?


Eventually the numbers themselves. It is simpler than the universal
unitary rotation of the physicist, but if you want an infinite tape,
you need to postulate at least once infinite thing. At the meta- 
level,

or in the epistemology, or in the ontology.


Why do numbers make machines or tapes? Do they want to? Do they have a
choice?


As much choice and free will as you have. They too cannot predict
themselves, and can be confronted with making decisions with partial
information.

That error comforts me in talking about universal numbers, and
defining them by the relation



phi_u(x, y) = phi_x(y), where u is the universal machine, x is a
program, and y is data. phi refers to some other universal number
made implicit (in my context it is made explicit by elementary
arithmetic).


So a universal machine's universal number made implicit from data  
in a

program = a program's universal number from data. I don't understand
what it means.


A number (code, body) transforms itself into a function relative to
a universal number.
u is a computer. Phi_u is the universal function computed by u. If you
give a program x and a datum y to the computer u, it will simulate x on
the input y, and will output phi_x(y). u does that for every program x,
and so is a universal simulator.
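
As a rough sketch of phi_u(x, y) = phi_x(y) (an editorial illustration, with Python's `exec` standing in for the universal machine u, and the program "number" x assumed to be source text defining a one-argument function f):

```python
def phi_u(x, y):
    # u simulates the program x on the input y and returns phi_x(y).
    env = {}
    exec(x, env)        # decode the program x (assumed to define f)
    return env["f"](y)  # apply the decoded function to the data y

# A particular program x: the doubling function.
x = "def f(y): return 2 * y"

print(phi_u(x, 21))  # phi_x(21) = 42
```

The same u runs every program x, which is what makes it a universal simulator.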


It sounds like you are saying that what makes a machine universal is
if it computes any given program the same way as every other universal
machine. I don't have a problem with that. By that definition though,
it still appears to me that consciousness, being both
idiosyncratically unique to each individual and each moment and
sharable through common sense and experience is the opposite of a
universal and the opposite of a machine.


Each universal machine is a particular machine. Even the virgin,
non-programmed one.
You are a universal machine, at least. (Even if you have a non-machine
component.)

It's an object oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense since biology is almost entirely
fluid-solution based.)



This is worse than the notion of primitive matter. It is a
mystification of primitive matter.



It's not an assertion of mysticism, it's just a plain old
generalization of ordinary observations. Programs don't get  
excited or
tired, they don't get sick and die, they don't catch a cold, etc.  
They

share none of the differences which make biology different from
physics.


I know that you believe in non-comp.



Is that supposed to invalidate the observations? Programs do get
tired? They do catch colds?


With comp, that is obvious.

Do asteroids and planets exist out there even if no one  
perceives

them?



They don't need humans to perceive them to exist, but my view is
that
gravity is evidence that all physical objects perceive each other.
Not
in a biological sense of feeling, seeing, or knowing, but in the
most
primitive forms of collision detection, accumulation, attraction  
to

mass, etc.


I can agree with that. This is in the spirit of Everett, which treats
observation as interaction. But there is no reason to associate
primitive qualia and private sensation with that. It lacks
memory retrieval and self-reference.



Doesn't an asteroid maintain its identity through its trajectory?


I can agree with this.


Can't the traces of its collisions be traced forensically by
examining it?


Yes.


Memory and self reference have to come from somewhere,
why not there?


Because self-reference needs a non-trivial programming loop (whose
existence is assured by computer science theorems like Kleene's second
recursion theorem).
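
The non-trivial loop behind Kleene's second recursion theorem is the same trick that lets a program obtain, and then compute with, its own source text. A minimal quine-style sketch (an editorial illustration, not Bruno's construction):

```python
def self_referential():
    # s is a template of this function's own body; substituting s into
    # itself (s % s) reconstructs the source text -- the fixed-point
    # trick behind Kleene's second recursion theorem.
    s = 'def self_referential():\n    s = %r\n    return s %% s'
    return s % s

# The function hands back its own (template-level) source:
print(self_referential().splitlines()[0])  # def self_referential():
```

A program that can name its own code this way can then run non-trivial loops over it, which is the kind of self-reference at issue here.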


I know that you believe in comp.


Then you are wrong. I am agnostic on this. As I should be: no correct  
machine believes in comp (nor in non-comp). We just cannot know. That  
is why I insist that we need some act of faith to say yes to the  
doctor. That is why I insist that it is a theology, and that we are
forced to accept that people think differently.






I propose another possibility. Imagine a universe where things can
become what they actually are without running a program. Running a
program supervenes not only on sequential recursion but on a whole
universe of logical consequence, ideas of representation, memory,
continuous temporal execution, etc. What if those things are aspects
of particular experience and not universal primitives?


I don't know what a universe is. That's part of what I want an
explanation for, in terms of simple things that I can
understand, like elementary arithmetic or combinatorics.






What if the
entire cosmos is a monad; a boundless and implicit firmament through
which objects and experiences are diffracted? The primordial 

Re: Consciousness Easy, Zombies Hard

2012-01-21 Thread Craig Weinberg
On Jan 21, 4:38 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 21 Jan 2012, at 01:31, Craig Weinberg wrote:

  What machine makes the infinite tape?

  Eventually the numbers themselves. It is simpler than the universal
  unitary rotation of the physicist, but if you want an infinite tape,
  you need to postulate at least one infinite thing. At the meta-level,
  or in the epistemology, or in the ontology.

  Why do numbers make machines or tapes? Do they want to? Do they have a
  choice?

 As much choice and free will as you have. They too cannot predict
 themselves, and can be confronted with making decisions with partial
 information.

Where do they get this capacity? Why do we never see it manifested in
our ordinary use of numbers? Generally the point of counting is to
establish a deterministic quantitative relation... that's sort of what
counting is? If the numbers themselves made choices, then why should
we consider counting a reliable epistemology?




  That error comfort me in talking about universal numbers, and
  defining
  them by the relation

  phi_u(x, y) = phi_x(y).    u is the universal machine, x is a
  program and y is a data. phi refer to some other universal number
  made implicit (in my context it is explicited by elementary
  arithmetic).

  So a universal machine's universal number made implicit from data
  in a
  program = a program's universal number from data. I don't understand
  what it means.

  A number (code, body) transforms itself into a function relatively to
  a universal number.
  u is a computer. Phi_u is the universal function computed by u. If you
  give a program x and a datum y to the computer u, it will simulate x on the
  input y, and will output phi_x(y). u does that for all program x, and
  so is a universal simulator.

  It sounds like you are saying that what makes a machine universal is
  if it computes any given program the same way as every other universal
  machine. I don't have a problem with that. By that definition though,
  it still appears to me that consciousness, being both
  idiosyncratically unique to each individual and each moment and
  sharable through common sense and experience is the opposite of a
  universal and the opposite of a machine.

 Each universal machine is a particular machine. Even the virgin, non
 programmed one.
 You are a universal machine, at least. (Even if you have a non machine
 component).

Me the person, or me the biography? Is my life a machine within which
I exist as another machine or are we both the same machine?




  It's an object oriented syntax that is limited to
  particular kinds of functions, none of which include biological
  awareness (which might make sense since biology is almost entirely
  fluid-solution based.)

  This worth than the notion of primitive matter. It is mystification
  of
  primitive matter.

  It's not an assertion of mysticism, it's just a plain old
  generalization of ordinary observations. Programs don't get
  excited or
  tired, they don't get sick and die, they don't catch a cold, etc.
  They
  share none of the differences which make biology different from
  physics.

  I know that you believe in non-comp.

  Is that supposed to invalidate the observations? Programs do get
  tired? They do catch colds?

 With comp, that is obvious.

At what point do programs develop the capacity to get tired? Is it a
matter of complexity or degree of self-reference?




  Do asteroids and planets exist out there even if no one
  perceives
  them?

  They don't need humans to perceive them to exist, but my view is
  that
  gravity is evidence that all physical objects perceive each other.
  Not
  in a biological sense of feeling, seeing, or knowing, but in the
  most
  primitive forms of collision detection, accumulation, attraction
  to
  mass, etc.

  I can agree with that. This is in the spirit of Everett, which
  treat
  observation as interaction. But there is no reason to associate
  primitive qualia and private sensation from that. It lacks the
  retrieving memory and self-reference.

  Doesn't an asteroid maintain it's identity through it's trajectory?

  I can agree with this.

  Can't the traces of it's collisions be traced forensically by
  examining it.

  Yes.

  Memory and self reference have to come from somewhere,
  why not there?

  Because self-reference needs a non trivial programming loop (whose
  existence is assured by computer science theorem like Kleene second
  recursion theorem).

  I know that you believe in comp.

 Then you are wrong. I am agnostic on this. As I should be: no correct
 machine believes in comp (nor in non-comp). We just cannot know. That
 is why I insist that we need some act of faith to say yes to the
 doctor. That is why I insist that it is a theology, and that we are
 forced to accept that people think differently.

The way I've found to get beyond that is through sense. Sense bridges
the gap and connects the dots. It says to us, you cannot know, but
yet, 

Re: Consciousness Easy, Zombies Hard

2012-01-20 Thread Bruno Marchal


On 20 Jan 2012, at 02:34, Craig Weinberg wrote:


On Jan 19, 11:33 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 17 Jan 2012, at 21:20, Craig Weinberg wrote:


My point is that a Turing machine is not even truly universal,
let alone infinite.


A universal Turing machine is, by definition, a machine, and machines
are by definition finite.

The infinite tape plays the role of a possible extending environment, and
is not part of the universal machine, despite a widespread error
(perhaps due to a pedagogical error of Turing's).


What machine makes the infinite tape?


Eventually the numbers themselves. It is simpler than the universal
unitary rotation of the physicist, but if you want an infinite tape,
you need to postulate at least one infinite thing. At the meta-level,
or in the epistemology, or in the ontology.








That error comforts me in talking about universal numbers, and
defining them by the relation

phi_u(x, y) = phi_x(y), where u is the universal machine, x is a
program, and y is data. phi refers to some other universal number
made implicit (in my context it is made explicit by elementary
arithmetic).




So a universal machine's universal number made implicit from data in a
program = a program's universal number from data. I don't understand
what it means.


A number (code, body) transforms itself into a function relative to
a universal number.
u is a computer. Phi_u is the universal function computed by u. If you
give a program x and a datum y to the computer u, it will simulate x on
the input y, and will output phi_x(y). u does that for every program x,
and so is a universal simulator.







It's an object oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense since biology is almost entirely
fluid-solution based.)


This is worse than the notion of primitive matter. It is a
mystification of primitive matter.


It's not an assertion of mysticism, it's just a plain old
generalization of ordinary observations. Programs don't get excited or
tired, they don't get sick and die, they don't catch a cold, etc. They
share none of the differences which make biology different from
physics.


I know that you believe in non-comp.






Do asteroids and planets exist out there even if no one perceives
them?


They don't need humans to perceive them to exist, but my view is  
that
gravity is evidence that all physical objects perceive each other.  
Not
in a biological sense of feeling, seeing, or knowing, but in the  
most

primitive forms of collision detection, accumulation, attraction to
mass, etc.


I can agree with that. This is in the spirit of Everett, which treats
observation as interaction. But there is no reason to associate
primitive qualia and private sensation with that. It lacks
memory retrieval and self-reference.


Doesn't an asteroid maintain its identity through its trajectory?


I can agree with this.




Can't the traces of its collisions be traced forensically by
examining it?


Yes.




Memory and self reference have to come from somewhere,
why not there?


Because self-reference needs a non-trivial programming loop (whose
existence is assured by computer science theorems like Kleene's second
recursion theorem). There is no evidence that such a program is at play
in an asteroid above your substitution level. Below your substitution
level, the asteroid implements all computations, but this is relevant
only to your observation, not to the asteroid.






Don't forget, without human consciousness going as a
comparison, we can't assume that the experience of raw matter is
ephemeral like ours is. It may not be memory which is the invention of
biology, but forgetting.


Profound remark, and I agree. But subjective memory is an attribute of
a subject, and there is no evidence the asteroid is a subject, at
least in the sense of having private experiences. It lacks too much of
the ability for self-representation, made possible by complex
cooperation between cells in living systems, and programs in computers.

Machines have no feeling.


What I say three times is true.
What I say three times is true.
What I say three times is true.
(Lewis Carroll, The Hunting of the Snark).


I really don't find it a controversial statement. 
http://thesaurus.com/browse/mechanical

mechanical  [muh-kan-i-kuhl]
Part of Speech: adjective

Definition: done by machine; machinelike

Synonyms:   automated, automatic, cold, cursory, *emotionless*, fixed,
habitual, impersonal, instinctive, involuntary, laborsaving,
*lifeless*, machine-driven, matter-of-fact, monotonous, perfunctory,
programmed, routine, *spiritless*, standardized, stereotyped,
unchanging, **unconscious, unfeeling, unthinking**, useful

Antonyms:   by hand, **conscious, feeling**, manual

This is not evidence that machines are incapable of feeling but it
indicates broad commonsense support for my interpretation. Of course

Re: Consciousness Easy, Zombies Hard

2012-01-20 Thread Evgenii Rudnyi

On 20.01.2012 02:34 Jason Resch said the following:

On Thu, Jan 19, 2012 at 7:33 AM, Craig Weinberg whatsons...@gmail.com wrote:


On Jan 19, 4:56 am, Bruno Marchal marc...@ulb.ac.be wrote:



Yes. Craig argue that machine cannot thinks by pointing on its
fridge.



Are you afraid to burn coal in your stove out of concern that
the material will sense being burned?


Yes. Craig's theory is a bit frightening with respect to this.
But of course that is not an argument. Craig might accuse you of
wishful thinking.



This is the same thing you accuse me of. I have never said that
coal is more alive than silicon, I don't even say that dead
organisms are more alive than silicon. I only say that to really
act *exactly* like a living thing, you need to feel like a living
thing, and to feel like a living thing you actually be a living
organism, which seems to entail cells made of carbohydrates, amino
acids, and water. Carbon, Hydrogen, Oxygen, and Nitrogen. Not
Silicon or Germanium. Make a computer out of carbs, aminos, and
water and see what happens to your ability to control it as a
Turing machine.




Some have argued that cars are alive.  They evolve, consume, move,
reproduce and so on.  While they are dependent on humans for
reproduction, we too depend on a a very specific environment to
reproduce.  Much like viruses.

http://www.ted.com/talks/lee_cronin_making_matter_come_alive.html

Jason



A nice video, thanks for the link. Yet it is unclear to me what 
evolvable matter is. In the lecture, the lecturer said several times that 
cells compete, and indeed he needs competition to arrive at evolution. 
However, in my view 'a cell competes' is close to 'a cell perceives', 
and what this exactly means is a puzzle for me. Let us think about this 
along the following series:


A rock – a ballcock in a toilet – an automatic door – a self-driving car 
– a cell.


When does competition come into play? Does a self-driving car already 
compete? Does a ballcock compete? What does 'a cell competes' 
actually mean?


Evgenii




Re: Consciousness Easy, Zombies Hard

2012-01-20 Thread Craig Weinberg
On Jan 20, 2:03 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 20 Jan 2012, at 02:34, Craig Weinberg wrote:


  What machine makes the infinite tape?

 Eventually the numbers themselves. It is simpler than the universal
 unitary rotation of the physicist, but if you want an infinite tape,
 you need to postulate at least one infinite thing. At the meta-level,
 or in the epistemology, or in the ontology.

Why do numbers make machines or tapes? Do they want to? Do they have a
choice?




  That error comfort me in talking about universal numbers, and
  defining
  them by the relation

  phi_u(x, y) = phi_x(y).    u is the universal machine, x is a
  program and y is a data. phi refer to some other universal number
  made implicit (in my context it is explicited by elementary
  arithmetic).

  So a universal machine's universal number made implicit from data in a
  program = a program's universal number from data. I don't understand
  what it means.

 A number (code, body) transforms itself into a function relatively to
 a universal number.
 u is a computer. Phi_u is the universal function computed by u. If you
 give a program x and a datum y to the computer u, it will simulate x on the
 input y, and will output phi_x(y). u does that for all program x, and
 so is a universal simulator.

It sounds like you are saying that what makes a machine universal is
if it computes any given program the same way as every other universal
machine. I don't have a problem with that. By that definition though,
it still appears to me that consciousness, being both
idiosyncratically unique to each individual and each moment and
sharable through common sense and experience is the opposite of a
universal and the opposite of a machine.




  It's an object oriented syntax that is limited to
  particular kinds of functions, none of which include biological
  awareness (which might make sense since biology is almost entirely
  fluid-solution based.)

  This worth than the notion of primitive matter. It is mystification
  of
  primitive matter.

  It's not an assertion of mysticism, it's just a plain old
  generalization of ordinary observations. Programs don't get excited or
  tired, they don't get sick and die, they don't catch a cold, etc. They
  share none of the differences which make biology different from
  physics.

 I know that you believe in non-comp.


Is that supposed to invalidate the observations? Programs do get
tired? They do catch colds?

  Do asteroids and planets exist out there even if no one perceives
  them?

  They don't need humans to perceive them to exist, but my view is
  that
  gravity is evidence that all physical objects perceive each other.
  Not
  in a biological sense of feeling, seeing, or knowing, but in the
  most
  primitive forms of collision detection, accumulation, attraction to
  mass, etc.

  I can agree with that. This is in the spirit of Everett, which treat
  observation as interaction. But there is no reason to associate
  primitive qualia and private sensation from that. It lacks the
  retrieving memory and self-reference.

  Doesn't an asteroid maintain it's identity through it's trajectory?

 I can agree with this.

  Can't the traces of it's collisions be traced forensically by
  examining it.

 Yes.

  Memory and self reference have to come from somewhere,
  why not there?

 Because self-reference needs a non trivial programming loop (whose
 existence is assured by computer science theorem like Kleene second
 recursion theorem).

I know that you believe in comp.

I propose another possibility. Imagine a universe where things can
become what they actually are without running a program. Running a
program supervenes not only on sequential recursion but on a whole
universe of logical consequence, ideas of representation, memory,
continuous temporal execution, etc. What if those things are aspects
of particular experience and not universal primitives? What if the
entire cosmos is a monad; a boundless and implicit firmament through
which objects and experiences are diffracted? The primordial dynamic
is not mechanism but stillness and stasis, like a spectrum to a prism.
Anchored in that stable unity, matter is the more direct
representation of this singularity (ie the many alchemical references
to 'stone'). The subjective correlate would be silent and dark void as
well as solar fusion and stellar profusion. This is realism. A prism
is not a machine, it is an object which reveals the essential
coherence of visual qualia. Machines are the second tier of
sensemaking: a dedication of what already exists to a specific
function which arises as a consequence of its existence rather
than as the cause of it.

 there are no evidence that such program is at play
 in an asteroid above your substitution level. Below your substitution
 level, the asteroids implement all computations, but this is relevant
 only to your observation, not to the asteroid.

Assuming comp. I don't.


  Don't forget, 

Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Bruno Marchal


On 19 Jan 2012, at 03:56, Jason Resch wrote:




On Tue, Jan 17, 2012 at 2:20 PM, Craig Weinberg  
whatsons...@gmail.com wrote:

On Jan 17, 12:51 am, Jason Resch jasonre...@gmail.com wrote:
 On Mon, Jan 16, 2012 at 10:29 PM, Craig Weinberg whatsons...@gmail.com wrote:



  That's what I'm saying though. A Turing machine cannot be built in
  liquid, gas, or vacuum. It is a logic of solid objects only. That
  means its repertoire is not infinite, since it can't simulate a
  Turing machine that is not made of some simulated solidity.

 Well you're asking for something impossible, not something  
impossible to

 simulate, but something that is logically impossible.

We can simulate logical impossibilities graphically though (Escher,
etc). My point is that a Turing machine is not even truly universal,
let alone infinite. It's an object oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense since biology is almost entirely
fluid-solution based.)

But it's not entirely free of solids. You can build a computer out
of mostly fluids and solutions too.


I agree. Even with gas in some volume.






 Also, something can be infinite without encompassing everything.   
A line
 can be infinite in length without every point in existence having  
to lie on

 that line.

If that's what you meant though, it's not saying much of anything
about the repertoire. A player piano has an infinite repertoire too.
So what?

A piano cannot tell you how any finite process will evolve over time.


Yes. Craig argues that machines cannot think by pointing at his fridge.







   To date, there is nothing we
   (individually or as a race) has accomplished that could not in  
principle
   also be accomplished by an appropriately programed Turing  
machine.


  Even if that were true, no Turing machine has ever known what it  
has

  accomplished,

 Assuming you and I aren't Turing machines.

It would be begging the question otherwise.

All known biological processes are Turing emulable.


Yes.






  so in principle nothing can ever be accomplished by a
  Turing machine independently of our perception.

 Do asteroids and planets exist out there even if no one  
perceives them?


They don't need humans to perceive them to exist, but my view is that
gravity is evidence that all physical objects perceive each other. Not
in a biological sense of feeling, seeing, or knowing, but in the most
primitive forms of collision detection, accumulation, attraction to
mass, etc.

If atoms can perceive gravitational forces, why can't computers  
perceive their inputs?


Indeed.






  What is an
  'accomplishment' in computational terms?

 I don't know.



You can't build it out of uncontrollable living organisms.
There are physical constraints even on what can function as  
a simple

AND gate. It has no existence in a vacuum or a liquid or gas.

Just as basic logic functions are impossible under those  
ordinary
physically disorganized conditions, it may be the case that  
awareness
can only develop by itself under the opposite conditions. It  
needs a
variety of solids, liquids, and gases - very specific ones.  
It's not
Legos. It's alive. This means that consciousness may not be  
a concept
at all - not generalizable in any way. Consciousness is the  
opposite,
it is a specific enactment of particular events and  
materials. A brain
can only show us that a person is alive, but not who that
person is.
The who cannot be simulated because it is an unrepeatable  
event in the
cosmos. A computer is not a single event. It is parts which  
have been
assembled together. It did not replicate itself from a  
single living

cell.

  You can't make a machine that acts like a person without
  it becoming a person automatically. That clearly is  
ridiculous to

  me.

 What do you think about Strong AI, do you think it is  
possible?


The whole concept is a category error.

   Let me use a more limited example of Strong AI.  Do you think  
there is

  any
   existing or past human profession that an appropriately built  
android
   (which is driven by a computer and a program) could not excel  
at?


  Artist, musician, therapist, actor, talk show host, teacher,
  caregiver, parent, comedian, diplomat, clothing designer,  
director,

  movie critic, author, etc.

 What do you base this on?  What is it about being a machine that  
precludes

 them from fulfilling any of these roles?

Machines have no feeling. These kinds of careers rely on sensitivity
to human feeling and meaning. They require that you care about things
that humans care about. Caring cannot be programmed.

A program model of a psychologist's biology will tell you exactly  
what the psychologist will do and say in any situation.


But Craig might be right. Caring and many things can be Turing  
emulable, yet not programmable. If artificial 

Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Craig Weinberg
On Jan 19, 4:56 am, Bruno Marchal marc...@ulb.ac.be wrote:


 Yes. Craig argue that machine cannot thinks by pointing on its fridge.


  Are you afraid to burn coal in your stove out of concern that the
  material will sense being burned?

 Yes. Craig's theory is a bit frightening with respect of this. But
 of course that is not an argument. Craig might accuse you of wishful
 thinking.


This is the same thing you accuse me of. I have never said that coal
is more alive than silicon, I don't even say that dead organisms are
more alive than silicon. I only say that to really act *exactly* like
a living thing, you need to feel like a living thing, and to feel like
a living thing you actually be a living organism, which seems to
entail cells made of carbohydrates, amino acids, and water. Carbon,
Hydrogen, Oxygen, and Nitrogen. Not Silicon or Germanium. Make a
computer out of carbs, aminos, and water and see what happens to your
ability to control it as a Turing machine.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread John Clark
On Tue, Jan 17, 2012  Craig Weinberg whatsons...@gmail.com wrote:

My point is that a Turing machine is not even truly universal


Anything you can do a Turing Machine can do better, or at least as well.


 let alone infinite.


Your mind is not infinite either.


 It's an object oriented syntax that is limited to particular kinds of
 functions, none of which include biological awareness


You seem to be in the habit of writing declarative sentences that not only
you are unable to prove but you can't even find a single scrap of evidence
that would lead someone to think it might be true.

 which might make sense since biology is almost entirely fluid-solution
 based.


Fluids! You think that makes sense? You think the key to consciousness is
our precious bodily fluids? Here is a short scene from the movie Dr.
Strangelove that has a character who thinks along somewhat similar lines.


http://www.youtube.com/watch?v=N1KvgtEnABY



  my view is that gravity is evidence that all physical objects perceive
 each other.


I don't see what this has to do with consciousness, as both computers and
the grey goo in your head are physical objects. But gravity does not
perceive where other objects are, only where they were, because
gravitational effects only move at the speed of light; if the sun suddenly
disappeared it would be 8 minutes before the Earth noticed and changed its
orbit.

Machines have no feeling.


You seem to be in the habit of writing declarative sentences that not only
you are unable to prove but you can't even find a single scrap of evidence
that would lead someone to think it might be true.


 Caring cannot be programmed.


You seem to be in the habit of writing declarative sentences that not only
you are unable to prove but you can't even find a single scrap of evidence
that would lead someone to think it might be true.


   Artist and Musician: Computer generated music has been around since at
 least the 60s: http://www.youtube.com/watch?v=X4Neivqp2K4;




 Yep, 47 years since then and still no improvement whatsoever.


Oh I'd say there was improvement, the following has 2 tracks of much more
modern computer composed music, I particularly liked track 2, I'd certainly
be proud if I had written it:

http://www.miller-mccune.com/culture/triumph-of-the-cyborg-composer-8507/



 Does anyone use ELIZA for psychology?


Why does anyone even talk about ELIZA anymore when there are programs like
Watson and Siri around?

 No. It's utterly useless


So you admit that even a very old program like ELIZA does an excellent job
emulating psychologists.

 a Turing machine should be executable as a truck load of live hamsters or
 a dense layer of fog.


From an engineering perspective those are probably not ideal materials to
make a computer out of, but then neither is a long paper tape with marks on
it as described in Turing's original 1936 article. What made the article so
important is that Turing proved that if they are organized in the right way
there is no theoretical reason why you could not build a computer out of
ANYTHING.
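Turing's substrate-independence point can be sketched concretely: what makes something a Turing machine is its organization (a finite transition table plus a tape discipline), not the material that realizes it. The following minimal machine is my own hypothetical example, not from the thread or from Turing's paper; it increments a binary number written on the tape least-significant-bit first.

```python
# A minimal Turing machine simulator. The "machine" is only the rules
# table; any substrate that can hold one symbol per cell and follow the
# table realizes the same computation.

def run_tm(tape, rules, state="start", pos=0, max_steps=1000):
    """Run a Turing machine; rules: (state, symbol) -> (write, move, next)."""
    tape = dict(enumerate(tape))          # sparse tape, blank = '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Increment (LSB-first): flip leading 1s to 0s, then a 0 or blank to 1.
INC = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "0"): ("1", "R", "halt"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_tm("111", INC))   # "111" is 7 LSB-first; prints "0001", i.e. 8
```

Nothing in the table refers to paper, silicon, or any other material, which is the sense in which the organization, not the substrate, is the computer.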


 A Turing machine can't experience anything by itself


You seem to be in the habit of writing declarative sentences that not only
you are unable to prove but you can't even find a single scrap of evidence
that would lead someone to think it might be true.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Bruno Marchal


On 17 Jan 2012, at 21:20, Craig Weinberg wrote:




My point is that a Turing machine is not even truly universal,
let alone infinite.


A universal Turing machine is, by definition, a machine, and machines  
are by definition finite.


The infinite tape plays a role of possible extending environment, and  
is not part of the universal machine, despite a widespread error  
(perhaps due to a pedagogical error of Turing).


That error comforts me in talking about universal numbers, and in  
defining them by the relation


phi_u(x, y) = phi_x(y). Here u is the universal machine, x is a  
program, and y is a datum. phi refers to some other universal number  
made implicit (in my context it is made explicit by elementary arithmetic).
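As an illustration of the relation phi_u(x, y) = phi_x(y) — my own toy sketch, not Bruno's arithmetical construction — one can enumerate a few programs by natural numbers and build a "universal" function that, given a coded pair of a program index and a datum, runs that program on that datum. The program table and pairing function below are invented for the example.

```python
# A toy enumeration of programs: index i -> the function phi_i on naturals.
PROGRAMS = {
    0: lambda y: y + 1,        # successor
    1: lambda y: 2 * y,        # doubling
    2: lambda y: y * y,        # squaring
}

def phi(i, y):
    """The partial function computed by program number i on input y."""
    return PROGRAMS[i](y)

def pair(x, y):
    """A computable pairing <x, y> -> one natural number (Cantor pairing)."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

def phi_u(z):
    """The universal function: phi_u(<x, y>) = phi_x(y)."""
    x, y = unpair(z)
    return phi(x, y)

assert phi_u(pair(2, 5)) == phi(2, 5) == 25
```

The dictionary stands in for a genuine effective enumeration of all programs; a real universal number would interpret arbitrary program codes rather than look them up in a finite table.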






It's an object oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense since biology is almost entirely
fluid-solution based.)


This is worse than the notion of primitive matter. It is a  
mystification of primitive matter.








Also, something can be infinite without encompassing everything.  A  
line
can be infinite in length without every point in existence having  
to lie on

that line.


If that's what you meant though, it's not saying much of anything
about the repertoire. A player piano has an infinite repertoire too.
So what?




To date, there is nothing we
(individually or as a race) has accomplished that could not in  
principle

also be accomplished by an appropriately programed Turing machine.



Even if that were true, no Turing machine has ever known what it has
accomplished,


Assuming you and I aren't Turing machines.


It would be begging the question otherwise.




so in principle nothing can ever be accomplished by a
Turing machine independently of our perception.


Do asteroids and planets exist out there even if no one perceives  
them?


They don't need humans to perceive them to exist, but my view is that
gravity is evidence that all physical objects perceive each other. Not
in a biological sense of feeling, seeing, or knowing, but in the most
primitive forms of collision detection, accumulation, attraction to
mass, etc.


I can agree with that. This is in the spirit of Everett, which treats  
observation as interaction. But there is no reason to derive  
primitive qualia and private sensation from that. It lacks  
retrievable memory and self-reference.









What is an
'accomplishment' in computational terms?


I don't know.




You can't build it out of uncontrollable living organisms.
There are physical constraints even on what can function as a  
simple

AND gate. It has no existence in a vacuum or a liquid or gas.



Just as basic logic functions are impossible under those ordinary
physically disorganized conditions, it may be the case that  
awareness
can only develop by itself under the opposite conditions. It  
needs a
variety of solids, liquids, and gases - very specific ones. It's  
not
Legos. It's alive. This means that consciousness may not be a  
concept
at all - not generalizable in any way. Consciousness is the  
opposite,
it is a specific enactment of particular events and materials. A  
brain
can only show us that a person is a live, but not who that  
person is.
The who cannot be simulated because it is an unrepeatable event  
in the
cosmos. A computer is not a single event. It is parts which have  
been
assembled together. It did not replicate itself from a single  
living

cell.



You can't make a machine that acts like a person without
it becoming a person automatically. That clearly is ridiculous  
to

me.



What do you think about Strong AI, do you think it is possible?



The whole concept is a category error.


Let me use a more limited example of Strong AI.  Do you think  
there is

any
existing or past human profession that an appropriately built  
android

(which is driven by a computer and a program) could not excel at?



Artist, musician, therapist, actor, talk show host, teacher,
caregiver, parent, comedian, diplomat, clothing designer, director,
movie critic, author, etc.


What do you base this on?  What is it about being a machine that  
precludes

them from fulfilling any of these roles?


Machines have no feeling.


What I say three times is true.
What I say three times is true.
What I say three times is true.
(Lewis Carroll, The Hunting of the Snark).



These kinds of careers rely on sensitivity
to human feeling and meaning. They require that you care about things
that humans care about. Caring cannot be programmed. That is the
opposite of caring, because programming requires no investment by the
programmed. There is no subject in a program, only an object
programmed to behave in a way that seems like it could be a subject in
some ways.


If you define the subject by the knower, believability by provability,  
and if you accept the classical theory of knowledge (the axioms: Kp -> p,  
K(p -> q) -> (Kp -> Kq)), then it is a theorem that a subject exists
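The two knowledge axioms cited here — Kp -> p (truth) and K(p -> q) -> (Kp -> Kq) (distribution) — can be checked mechanically on a toy Kripke model. The three-world frame below is my own invented illustration of the axioms themselves, not of Bruno's arithmetical interpretation: K is read as "true in all accessible worlds", the truth axiom holds because the frame is reflexive, and distribution holds on any frame.

```python
# A tiny Kripke model: worlds 0..2, accessibility R (reflexive by design).
WORLDS = {0, 1, 2}
R = {(w, v) for w in WORLDS for v in WORLDS if abs(w - v) <= 1}

def K(prop):
    """Kp holds at w iff p holds at every world accessible from w."""
    return lambda w: all(prop(v) for u, v in R if u == w)

def implies(a, b):
    """Material implication, pointwise on worlds."""
    return lambda w: (not a(w)) or b(w)

p = lambda w: w != 2          # an arbitrary proposition
q = lambda w: w >= 0          # true at every world here

axiom_T = implies(K(p), p)                               # Kp -> p
axiom_K = implies(K(implies(p, q)), implies(K(p), K(q))) # K(p->q) -> (Kp->Kq)

assert all(axiom_T(w) for w in WORLDS)
assert all(axiom_K(w) for w in WORLDS)
```

Dropping reflexivity from R would give frames where Kp -> p can fail, which is one standard way the knower (with Kp -> p) is distinguished from the prover (for which the analogous axiom is not provable).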

Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Craig Weinberg
On Jan 19, 11:33 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 17 Jan 2012, at 21:20, Craig Weinberg wrote:

  My point is that a Turing machine is not even truly universal,
  let alone infinite.

 A universal Turing machine is, by definition a machine, and machine
 are by definition finite.

 The infinite tape plays a role of possible extending environment, and
 is not part of the universal machine, despite a widespread error
 (perhaps due to a pedagogical error of Turing).

What machine makes the infinite tape?


 That error comfort me in talking about universal numbers, and defining
 them by the relation

 phi_u(x, y) = phi_x(y).    u is the universal machine, x is a
 program and y is a data. phi refer to some other universal number
 made implicit (in my context it is explicited by elementary arithmetic).


So a universal machine's universal number made implicit from data in a
program = a program's universal number from data. I don't understand
what it means.

  It's an object oriented syntax that is limited to
  particular kinds of functions, none of which include biological
  awareness (which might make sense since biology is almost entirely
  fluid-solution based.)

 This worth than the notion of primitive matter. It is mystification of
 primitive matter.

It's not an assertion of mysticism, it's just a plain old
generalization of ordinary observations. Programs don't get excited or
tired, they don't get sick and die, they don't catch a cold, etc. They
share none of the differences which make biology different from
physics.

  Also, something can be infinite without encompassing everything.  A
  line
  can be infinite in length without every point in existence having
  to lie on
  that line.

  If that's what you meant though, it's not saying much of anything
  about the repertoire. A player piano has an infinite repertoire too.
  So what?

  To date, there is nothing we
  (individually or as a race) has accomplished that could not in
  principle
  also be accomplished by an appropriately programed Turing machine.

  Even if that were true, no Turing machine has ever known what it has
  accomplished,

  Assuming you and I aren't Turing machines.

  It would be begging the question otherwise.

  so in principle nothing can ever be accomplished by a
  Turing machine independently of our perception.

  Do asteroids and planets exist out there even if no one perceives
  them?

  They don't need humans to perceive them to exist, but my view is that
  gravity is evidence that all physical objects perceive each other. Not
  in a biological sense of feeling, seeing, or knowing, but in the most
  primitive forms of collision detection, accumulation, attraction to
  mass, etc.

 I can agree with that. This is in the spirit of Everett, which treat
 observation as interaction. But there is no reason to associate
 primitive qualia and private sensation from that. It lacks the
 retrieving memory and self-reference.

Doesn't an asteroid maintain its identity through its trajectory?
Can't the traces of its collisions be read forensically by examining
it? Memory and self-reference have to come from somewhere; why not
there? Don't forget, without human consciousness to go on as a
comparison, we can't assume that the experience of raw matter is
ephemeral like ours is. It may not be memory which is the invention of
biology, but forgetting.

  What is an
  'accomplishment' in computational terms?

  I don't know.

  You can't build it out of uncontrollable living organisms.
  There are physical constraints even on what can function as a
  simple
  AND gate. It has no existence in a vacuum or a liquid or gas.

  Just as basic logic functions are impossible under those ordinary
  physically disorganized conditions, it may be the case that
  awareness
  can only develop by itself under the opposite conditions. It
  needs a
  variety of solids, liquids, and gases - very specific ones. It's
  not
  Legos. It's alive. This means that consciousness may not be a
  concept
  at all - not generalizable in any way. Consciousness is the
  opposite,
  it is a specific enactment of particular events and materials. A
  brain
  can only show us that a person is a live, but not who that
  person is.
  The who cannot be simulated because it is an unrepeatable event
  in the
  cosmos. A computer is not a single event. It is parts which have
  been
  assembled together. It did not replicate itself from a single
  living
  cell.

  You can't make a machine that acts like a person without
  it becoming a person automatically. That clearly is ridiculous
  to
  me.

  What do you think about Strong AI, do you think it is possible?

  The whole concept is a category error.

  Let me use a more limited example of Strong AI.  Do you think
  there is
  any
  existing or past human profession that an appropriately built
  android
  (which is driven by a computer and a program) could not excel at?

  Artist, musician, 

Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Jason Resch
On Thu, Jan 19, 2012 at 7:34 PM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 19, 11:33 am, Bruno Marchal marc...@ulb.ac.be wrote:
  On 17 Jan 2012, at 21:20, Craig Weinberg wrote:
 

 I really don't find it a controversial statement.
 http://thesaurus.com/browse/mechanical

 mechanical [muh-kan-i-kuhl]
 Part of Speech: adjective

 Definition: done by machine; machinelike

 Synonyms:   automated, automatic, cold, cursory, *emotionless*, fixed,
 habitual, impersonal, instinctive, involuntary, laborsaving,
 *lifeless*, machine-driven, matter-of-fact, monotonous, perfunctory,
 programmed, routine, *spiritless*, standardized, stereotyped,
 unchanging, **unconscious, unfeeling, unthinking**, useful

 Antonyms:   by hand, **conscious, feeling**, manual

 This is not evidence that machines are incapable of feeling but it
 indicates broad commonsense support for my interpretation. Of course
 popularity does not mean truth, but it does mean that I don't have to
 accept accusations of some sort of fanciful eccentricity peculiar to
 myself alone. My interpretation is conservative, yours is radically
 experimental and completely unproven. How can you act as if it were
 the other way around? It's dishonest.


Our language is littered with ideas which have long been shown to be
false.  For example, we still say that the sun sets.  The word mechanical
originated with the ancient Greeks.  Would you consider them an authority
on what machines are capable of?

Also, regarding your statement that yours is the majority or conventional
opinion, I disagree.  The most widely held view among those versed in the
subject is that the human body is mechanical (as opposed to governed by
spirits or otherwise non-physical influences).

Jason




Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Craig Weinberg
On Jan 19, 8:34 pm, Jason Resch jasonre...@gmail.com wrote:


 Some have argued that cars are alive.  They evolve, consume, move,
 reproduce and so on.  While they are dependent on humans for reproduction,
 we too depend on a a very specific environment to reproduce.  Much like
 viruses.


Yes, from an alien astronomer's perspective, cars would clearly seem
to be the conscious agents on this planet. Does that show the limits
of observation or the broadness of 'life'? I think that since our own
experience verifies that our interiority is not publicly accessible
except through indirect means, that tilts the scale sharply toward the
probability that observation is limited. We know that cars don't drive
by themselves, but we know that we can drive them when we want to.




Re: Consciousness Easy, Zombies Hard

2012-01-19 Thread Craig Weinberg
On Jan 19, 8:51 pm, Jason Resch jasonre...@gmail.com wrote:
 On Thu, Jan 19, 2012 at 7:34 PM, Craig Weinberg whatsons...@gmail.com wrote:

  On Jan 19, 11:33 am, Bruno Marchal marc...@ulb.ac.be wrote:
   On 17 Jan 2012, at 21:20, Craig Weinberg wrote:

  I really don't find it a controversial statement.
 http://thesaurus.com/browse/mechanical

  mechanical [muh-kan-i-kuhl]
  Part of Speech:         adjective

  Definition:     done by machine; machinelike

  Synonyms:       automated, automatic, cold, cursory, *emotionless*, fixed,
  habitual, impersonal, instinctive, involuntary, laborsaving,
  *lifeless*, machine-driven, matter-of-fact, monotonous, perfunctory,
  programmed, routine, *spiritless*, standardized, stereotyped,
  unchanging, **unconscious, unfeeling, unthinking**, useful

  Antonyms:       by hand, **conscious, feeling**, manual

  This is not evidence that machines are incapable of feeling but it
  indicates broad commonsense support for my interpretation. Of course
  popularity does not mean truth, but it does mean that I don't have to
  accept accusations of some sort of fanciful eccentricity peculiar to
  myself alone. My interpretation is conservative, yours is radically
  experimental and completely unproven. How can you act as if it were
  the other way around? It's dishonest.

 Our language is littered with ideas which have long been shown to be
 false.  For example, we still say that the sun sets.  The word mechanical
 originated with the ancient Greeks.  Would you consider them an authority
 on what machines are capable of?

The Sun does set. It's only if you are not on the surface of the Earth
that the Sun would not set. It is helpful to deprogram ourselves from
occidental prejudice this way. Naive perception is as much a part of
the cosmos as a hypothetical universal voyeur's perspective. Such an
observer is useful for predictions, but it fails when taken literally
since such an observer is impossible in reality (is the Earth tiny or
enormous? instantaneous or eternal?)

It's not about being an authority on what machines are capable of,
it's about recognizing that humans have seen machines in a certain,
remarkably consistent way. My interpretation explores that common
intuition or stereotype - not to take it as the truth, but as a
possible clue to the truth. It is a piece to the puzzle. If we do not
examine the real pieces to the puzzle, we cannot expect to solve it.


 Also, regarding your statement that yours is the majority or conventional
 opinion, I disagree.  The most widely held view among those versed in the
 subject is that the human body is mechanical (as opposed to governed by
 spirits or otherwise non-physical influences).

Mm, that's not a bad point. I wouldn't say mechanical, but yes most of
us in the developed world do think of our bodies in machine-like
metaphors but I would stop short of saying that we refer to ourselves
literally as machines. We still go to doctors and not mechanics.
Still, in delineating between a living creature and a 'machine' you
are not going to find many people who will honestly say that machines
are warm and friendly but dogs are robotic. It's hard even to play
Devil's advocate on this for me. Robotic has certain meanings,
'animal' means something very different. The two are not easily
confused. The connotations are there for a reason.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-17 Thread Bruno Marchal


On 16 Jan 2012, at 19:50, John Clark wrote:

On Mon, Jan 16, 2012 at 1:08 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


So you believe that the theory according to which consciousness is  
a gift by a creationist God is as bad as the theory according to  
which consciousness is related to brain activity?


If creationists could explain consciousness then I would be a  
creationists, but they can not.


We agree on this.




Brain activity does not explain consciousness either.


But it addresses the problem, and the theory is not bad at all. The  
theory is that brain has some role in consciousness. My point is that  
a theory can be better than another one, even if it does not solve  
completely the riddle.
A theory is not supposed to provide an answer. When a problem is  
complex, we can be glad if a theory can help to formulate the problem.
That's what is nice with computationalism: it transforms the mind-body  
problem into a body problem in arithmetic. It meta-explains also why  
consciousness is felt as not entirely explainable.




I don't know how but I believe as certainly as I believe anything  
that intelligence causes consciousness.


I agree, but we might not use the terms here in the exact same senses.  
Intelligence and consciousness are almost identical notions, for me. I  
use the term competence where you are using intelligence. Competence  
can be partially evaluated. Intelligence cannot.


I have a simple theory of intelligence-in-a-large-sense:

A machine is intelligent if it is not stupid. And
a machine is stupid in two circumstances, when she asserts that she is  
intelligent, or when she asserts that she is stupid.


(It can be shown that a machine which asserts that she is stupid is  
slightly less stupid than the one which asserts her intelligence.)  
These recursive definitions of intelligence admit arithmetical  
interpretations. They make sense. But this should not be taken too  
literally: you might become stupid by doing so!




I believe this not because I can prove it but because I simply could  
not function if I thought I was the only conscious being in the  
universe.


I don't see the relation with what you say above. Someone can believe  
that consciousness is not required for all forms of intelligence, but  
only for more special sort of intelligence.






 the quanta does not exist primitively but emerge, in the comp  
case, from number relations.


What sort of numbers, computable numbers or the far more common  
non-computable numbers? And what sort of relations?


All the numbers I talk about are natural numbers (and thus computable,  
by the constant programs). The others are represented by functions  
from N to N.
The quanta emerge from the computational relations going through my  
actual computational states. Actual is defined by the self-referential  
tools provided in arithmetic/computer science (mainly discovered by  
Gödel in 1931). This should follow easily from UDA steps 1-7.







This means that indeed we can write simple program leading to  
intelligence


I don't know what that simple program could be, but I have already  
given an example of a simple program leading to emotion.


OK. I gave you other samples, too.




My whole point is that intelligence is not a constructive concept,  
like consciousness you cannot define it.


Intelligence is problem solving; not a perfect definition by any  
means but far far better than any known definition of consciousness.


That is not intelligence, but competence. In that case, it can be  
shown that there is no universal problem solver, and that  
problem-solving abilities can be put in a lattice type of partial  
order. So some machines can be very competent for some (large) classes  
of problems, and utterly incompetent for other (large) classes of  
problems.
Case and Smith have shown that if you allow an inductive inference  
machine to say enough bullshit, and to change its mind on working  
theories infinitely often (!), then you get universal problem-solving  
strategies. Harrington showed that such machines are necessarily only  
theoretical, yet the principles in play might have a role in long  
computations, as evolution might illustrate.



Examples are better than definitions anyway, intelligence is what  
Einstein did


This will not help a lot.




and consciousness is what I am.


This is too short, and literally untrue, or unclear. Consciousness is  
what many different people are, then, yet consciousness might be  
unique or not (open problem in the mechanist theory).


But defining consciousness by the mental state of any machine having  
self-developed some belief in some reality can explain why machines  
are puzzled by consciousness, why machines cannot define it, and why at  
the same time they cannot ignore precisely what it is. Above all, it  
explains where the quanta come from, including the laws they have to  
obey, making it into a refutable (scientific) theory.

Re: Consciousness Easy, Zombies Hard

2012-01-17 Thread Craig Weinberg
On Jan 17, 12:51 am, Jason Resch jasonre...@gmail.com wrote:
 On Mon, Jan 16, 2012 at 10:29 PM, Craig Weinberg whatsons...@gmail.com wrote:


  That's what I'm saying though. A Turing machine cannot be built in
  liquid, gas, or vacuum. It is a logic of solid objects only. That
  means its repertoire is not infinite, since it can't simulate a
  Turing machine that is not made of some simulated solidity.

 Well you're asking for something impossible, not something impossible to
 simulate, but something that is logically impossible.

We can simulate logical impossibilities graphically though (Escher,
etc). My point is that a Turing machine is not even truly universal,
let alone infinite. It's an object oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense since biology is almost entirely
fluid-solution based.)


 Also, something can be infinite without encompassing everything.  A line
 can be infinite in length without every point in existence having to lie on
 that line.

If that's what you meant though, it's not saying much of anything
about the repertoire. A player piano has an infinite repertoire too.
So what?


   To date, there is nothing we
   (individually or as a race) has accomplished that could not in principle
   also be accomplished by an appropriately programed Turing machine.

  Even if that were true, no Turing machine has ever known what it has
  accomplished,

 Assuming you and I aren't Turing machines.

It would be begging the question otherwise.


  so in principle nothing can ever be accomplished by a
  Turing machine independently of our perception.

 Do asteroids and planets exist out there even if no one perceives them?

They don't need humans to perceive them to exist, but my view is that
gravity is evidence that all physical objects perceive each other. Not
in a biological sense of feeling, seeing, or knowing, but in the most
primitive forms of collision detection, accumulation, attraction to
mass, etc.


  What is an
  'accomplishment' in computational terms?

 I don't know.



You can't build it out of uncontrollable living organisms.
There are physical constraints even on what can function as a simple
AND gate. It has no existence in a vacuum or a liquid or gas.

Just as basic logic functions are impossible under those ordinary
physically disorganized conditions, it may be the case that awareness
can only develop by itself under the opposite conditions. It needs a
variety of solids, liquids, and gases - very specific ones. It's not
Legos. It's alive. This means that consciousness may not be a concept
at all - not generalizable in any way. Consciousness is the opposite,
it is a specific enactment of particular events and materials. A brain
can only show us that a person is a live, but not who that person is.
The who cannot be simulated because it is an unrepeatable event in the
cosmos. A computer is not a single event. It is parts which have been
assembled together. It did not replicate itself from a single living
cell.

  You can't make a machine that acts like a person without
  it becoming a person automatically. That clearly is ridiculous to
  me.

 What do you think about Strong AI, do you think it is possible?

The whole concept is a category error.

   Let me use a more limited example of Strong AI.  Do you think there is
  any
   existing or past human profession that an appropriately built android
   (which is driven by a computer and a program) could not excel at?

  Artist, musician, therapist, actor, talk show host, teacher,
  caregiver, parent, comedian, diplomat, clothing designer, director,
  movie critic, author, etc.

 What do you base this on?  What is it about being a machine that precludes
 them from fulfilling any of these roles?

Machines have no feeling. These kinds of careers rely on sensitivity
to human feeling and meaning. They require that you care about things
that humans care about. Caring cannot be programmed. That is the
opposite of caring, because programming requires no investment by the
programmed. There is no subject in a program, only an object
programmed to behave in a way that seems like it could be a subject in
some ways.


 Also, although their abilities are limited, the below examples certainly
 show that computers are making inroads along many of these lines of work,
 and will only improve overtime as computers become more powerful.

Many professions would be much better performed by a computer. Human
oversight might be desirable for something like surgery, but I would
probably go with the computer over a human surgeon.


 Artist and Musician: Computer generated music has been around since at
 least the 60s:http://www.youtube.com/watch?v=X4Neivqp2K4

Yep, 47 years since then and still no improvement whatsoever. Based on
that I think we cannot assume that computer generated music will

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Bruno Marchal


On 16 Jan 2012, at 07:52, Quentin Anciaux wrote:




2012/1/16 Craig Weinberg whatsons...@gmail.com

On Jan 15, 3:07 pm, Quentin Anciaux allco...@gmail.com wrote:
 2012/1/14 Craig Weinberg whatsons...@gmail.com

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist, therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that it is
  impossible to create something that acts like a person which is not a
  person. Zombies are not Turing emulable.

 No, zombies *that are persons in every aspect* are impossible. Not only not
 Turing emulable... they are absurd.

If you define them that way then the word has no meaning. What is a
person in every aspect that is not at all a person?

The *only thing* a zombie lacks is consciousness... every other aspect
of a person, it has it.


That is right. People should not confuse the Hollywood zombie and the
philosophical zombie, which is 3p-identical to a human person but lacks
any 1p perspective.


Note also that Turing invented his test to avoid the hard philosophical
issue of consciousness. In a nutshell, Turing defines consciousness by
intelligent behavior. The Turing test is equivalent to a type of
no-zombie principle.


It is like saying that if zombies exist, you have to treat them as
human beings, because we cannot know if they are zombies.






The only way the
term has meaning is when it is used to define something that appears
to be a person in every way to an outside observer (and that would
ultimately have to be a human observer) but has no interior
experience. That is not absurd at all, and in fact describes
animation, puppetry, and machine intelligence.

Puppets and animations do not act like a person. They act like puppets
and animations. A philosophical zombie *acts like a person but lacks
consciousness*.


Exactly.

Bruno







  If we run the zombie argument backwards then, at what substitution
  level of zombiehood does a (completely possible) simulated person
  become a (non-Turing emulable) unconscious puppet? How bad of a
  simulation does it have to be before becoming an impossible zombie?


  This to me reveals an absurdity of arithmetic realism. Pinocchio the
  boy is possible to simulate mechanically, but Pinocchio the puppet is
  impossible.

 You conflate two (maybe more) notions of zombie... the only one important
 in the zombie argument is this: something that acts like a person *in
 every aspect* but nonetheless is not conscious... If it is indeed what
 you mean, then could you devise a test that could show that the zombie
 indeed lacks consciousness (remember that *by definition* you cannot tell
 apart the zombie and a real conscious person).

No, I think that I have a workable and useful notion of zombie. I'm
not sure how the definition you are trying use is meaningful. It seems
like a straw man of the zombie issue. We already know that
subjectivity is private; what we don't know is whether that means that
simulations automatically acquire consciousness or not. The zombie
issue is not to show that we can't imagine a person without
subjectivity and see that as evidence that subjectivity must
inherently arise from function. My point is that it also must mean
that we cannot stop inanimate objects from acquiring consciousness if
they are a sufficiently sophisticated simulation.

Craig

--
You received this message because you are subscribed to the Google Groups Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.





--
All those moments will be lost in time, like tears in rain.



http://iridia.ulb.ac.be/~marchal/






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Bruno Marchal


On 15 Jan 2012, at 19:33, John Clark wrote:


 On Sat, Jan 14, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 If computationalism argues that zombies can't exist, therefore  
anything that we cannot distinguish from a conscious person must be  
conscious, that also means that it is impossible to create something  
that acts like a person which is not a person. Zombies are not  
Turing emulable.


Maybe. Zombie behavior is certainly Turing emulable but you are  
asking more than that and there is no way to prove what you want to  
know because it hinges on one important question: how can you tell  
if a zombie is a zombie? Brains are not my favorite meal but I don't  
think dietary preference or even unsightly skin blemishes are a good  
test for consciousness; I believe zombies have little if any  
consciousness because, at least as depicted in the movies, zombies  
act really really dumb. But maybe the film industry is inflicting an  
unfair stereotype on a persecuted minority and there are good hard  
working zombies out there who you don't hear about that write love  
poetry and teach at Harvard, if so then I think those zombies are  
conscious even if I would still find a polite excuse to decline  
their invitation to dinner.


 This to me reveals an absurdity of arithmetic realism. Pinocchio  
the boy is possible to simulate mechanically, but Pinocchio the  
puppet is impossible. Doesn't that strike anyone else as an obvious  
deal breaker?


I find nothing absurd about that and neither did Evolution. The  
parts of our brain that so dramatically separate us from other  
animals, the parts that deal with language and long term planning and
mathematics took HUNDREDS of times longer to evolve than the parts  
responsible for intense emotion like pleasure, pain, fear, hate,  
jealousy and love. And why do you think it is that in this group and  
elsewhere everybody and their brother is pushing their own General  
Theory of Consciousness  but nobody even attempts a General Theory  
of Intelligence?


There are general theories of learning, like those of Case and Smith,
Blum, Osherson, etc. But they are necessarily non-constructive. They
are usable neither for building AI, nor for verifying whether something
is intelligent. It shows that intelligence (competence) is an
intrinsically hard subject with many non-comparable degrees of intelligence.
Intelligence is not programmable. It is only self-programmable, and it
interests nobody, except philosophers and theologians. When machines
become intelligent, we will send them to camps or jails. Intelligence
leads to dissidence. We pretend to appreciate intelligence, but we
invest a lot in preventing it, in both children and machines.







The reason is that theorizing about the one is easy but  theorizing  
about the other is hard, hellishly hard, and because when  
intelligence theories fail they fail with a loud thud that is  
obvious to all, but one consciousness theory works as well, or as  
badly, as any other.


See the work of Case and Smith. It is not well known because it is
based on theoretical computer science (recursion theory), which is not
well known. There are definite interesting results there, even if not
applicable. The non-union theorem of Blum shows that there is
something uncomputably much more intelligent than a machine: a couple
of machines. The theory is super-non-linear.




Consciousness theories are easy because there are no facts they need  
to explain,


What? With comp, not only do you have to explain the qualia, but it has
been proved that you have to explain the quanta as well, and this
without assuming a physical reality.




but there is an astronomical number of things that need to be  
explained to understand how intelligence works.


Not really. It is just that intelligent things organize themselves in
non-predictable ways. The basics are simple (addition and
multiplication) but the consequences are not boundable.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Mon, Jan 16, 2012 at 5:39 AM, Bruno Marchal marc...@ulb.ac.be wrote:

   Consciousness theories are easy because there are no facts they need to
 explain



 What? With comp, not only you have to explain the qualia


With ANY theory of consciousness you have to explain qualia, and every
consciousness theory does as well or as badly as any other in doing that.


  but it has been proved that you have to explain the quanta as well,


I don't know what that means.


  and this without assuming a physical reality.


 But I do know that assuming reality does not seem to be a totally
outrageous assumption.

  but there is an astronomical number of things that need to be explained
 to understand how intelligence works.


 Not really. It is just that [...]


If you know how intelligence works you can make a super intelligent
computer right now and you're well on your way to becoming a trillionaire.
It seems to me that when discussing this very complex subject people use
the phrase "it's just" a bit too much.

intelligent things organize themselves in non-predictable ways. The
 basics are simple (addition and multiplication)


That's like saying "I know how to cure cancer, it's basically simple, just
arrange the atoms in cancer cells so that they are no longer cancerous."
It's easy to learn the fundamentals of Chess, the rules of the game, but
that does not mean you understand all the complexities and subtleties of it
and are now a grandmaster.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Sun, Jan 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com wrote:

  I think that I have a workable and useful notion of zombie.



Then I would very much like to hear what it is. What really grabbed my
attention is that you said it was "workable and useful", so whatever
notion you have it can't include things like "zombies are conscious but" or
"zombies are NOT conscious but" because I have no way to directly test for
consciousness so such a notion would not be workable or useful to me.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 11:23 am, John Clark johnkcl...@gmail.com wrote:
 On Sun, Jan 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com wrote:

   I think that I have a workable and useful notion of zombie.



 Then I would very much like to hear what it is. What really grabbed my
 attention is that you said it was "workable and useful", so whatever
 notion you have it can't include things like "zombies are conscious but" or
 "zombies are NOT conscious but" because I have no way to directly test for
 consciousness so such a notion would not be workable or useful to me.

Zombie describes something which seems like it could be conscious from
the outside (i.e. to a human observer) but actually is not.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Jason Resch
Craig,

Do you have an opinion regarding the possibility of Strong AI, and the
other questions I posed in my earlier post?

Thanks,

Jason

On Mon, Jan 16, 2012 at 10:50 AM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 16, 11:23 am, John Clark johnkcl...@gmail.com wrote:
  On Sun, Jan 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
 
I think that I have a workable and useful notion of zombie.
 
 
 
  Then I would very much like to hear what it is. What really grabbed my
  attention is that you said it was "workable and useful", so whatever
  notion you have it can't include things like "zombies are conscious but"
  or "zombies are NOT conscious but" because I have no way to directly
  test for consciousness so such a notion would not be workable or useful
  to me.

 Zombie describes something which seems like it could be conscious from
 the outside (ie to a human observer) but actually is not.

 Craig







Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Bruno Marchal


On 16 Jan 2012, at 17:08, John Clark wrote:

On Mon, Jan 16, 2012 at 5:39 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


   Consciousness theories are easy because there are no facts they  
need to explain



 What? With comp, not only you have to explain the qualia

With ANY theory of consciousness you have to explain qualia,


Correct.




and every consciousness theory does as well or as badly as any other  
in doing that.



So you believe that the theory according to which consciousness is a  
gift by a creationist God is as bad as the theory according to which  
consciousness is related to brain activity?






 but it has been proved that you have to explain the quanta as well,

I don't know what that means.


It means that
1) the quanta do not exist primitively but emerge, in the comp case,
from number relations.
2) that physicalism is false, and that you have to derive the physical  
laws from those number relations. More exactly you have to derive the  
beliefs in the physical laws from those number relations.






 and this without assuming a physical reality.

 But I do know that assuming reality does not seem to be a totally  
outrageous assumption.


Sure. But I was talking about the assumption of a primitively physical
reality. That is shown, by the UD Argument, not to be working when we
assume that we are digitalisable machines. It is not outrageous, it is
useless, nonsensical, wrong with the usual Occam razor, in the same
sense that it is wrong that invisible horses pulling cars would be the
real reason why cars move.





 but there is an astronomical number of things that need to be  
explained to understand how intelligence works.


Not really. It is just that [...]

If you know how intelligence works you can make a super intelligent  
computer right now and you're well on your way to becoming a  
trillionaire. It seems to me that when discussing this very complex  
subject people use the phrase "it's just" a bit too much.


You seem quite unfair. I was saying, in completo: "It is just that
intelligent things organize themselves in non-predictable ways." The
"it is just" was not effective, and that was my point!


This means that indeed we can write simple programs leading to
intelligence, but I can hardly become a trillionaire with that, because
they might need incompressibly long times to show intelligence. Better
to use nature's trick and copy from what has already been done. My
whole point is that intelligence is not a constructive concept; like
consciousness, you cannot define it. You can define competence, and
competence already leads itself to many non-constructive notions and
comparisons. The details are tricky and there is a very large
literature in theoretical artificial intelligence and learning
theories.


Simple programs leading to intelligence are "grow, diversify, and
multiply as much as possible in a big but finite environment", or "help
yourself", etc. The UD can also be seen as a little program leading to
the advent of intelligence (assuming mechanism), but not in a
necessarily tractable way.
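
[Editor's note: the dovetailing idea behind the UD can be sketched in a
few lines. This is a toy illustration only, not Bruno's actual UD (which
dovetails on all programs); Python generators stand in for "programs",
and the names program and dovetail are illustrative. The point is that
an unbounded family of possibly non-halting computations can be
interleaved so that every one of them receives unboundedly many steps.]

```python
def program(i):
    # Toy "program" i: an endless computation yielding successive states.
    n = 0
    while True:          # never halts, like many real UD candidates
        yield (i, n)
        n += 1

def dovetail(rounds):
    # Classic dovetailing schedule: in round k, spawn program k, then
    # advance every program started so far by one step. No single
    # non-halting program can block the rest.
    running = []
    trace = []
    for k in range(rounds):
        running.append(program(k))   # start a new program each round
        for p in running:            # one step for every running program
            trace.append(next(p))
    return trace

print(dovetail(3))  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

After round k, program 0 has had k+1 steps, program 1 has had k steps,
and so on: each program's share of the schedule grows without bound,
which is the sense in which the dovetailer "runs everything" without
ever committing to a tractable order.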


We discuss in a context where the goal is not to do artificial  
intelligence engineering, but the goal is to find a theory of  
everything, including persons, consciousness, etc.







intelligent things organize themselves in non-predictable ways. The
basics are simple (addition and multiplication)


That's like saying "I know how to cure cancer, it's basically simple,
just arrange the atoms in cancer cells so that they are no longer
cancerous." It's easy to learn the fundamentals of Chess, the rules
of the game, but that does not mean you understand all the
complexities and subtleties of it and are now a grandmaster.


OK. That was my point. I never pretended to even know what
intelligence really is. You should not mock the trivial points I make,
because they are used in a non-completely-trivial way to show that the
assumption of mechanism makes physics a branch of number theory (which
is a key point in the search for a theory of everything). A reasoning
made clear = a succession of trivial points.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Mon, Jan 16, 2012 at 11:50 AM, Craig Weinberg whatsons...@gmail.com wrote:

  I think that I have a workable and useful notion of zombie. [...]
 Zombie describes something which seems like it could be conscious from the
 outside (ie to a human observer) but actually is not.


As I have absolutely no way of directly determining if a zombie is actually
conscious or actually is not, then despite your claim your notion is
neither workable nor useful.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Stephen P. King

Hi,

    My $.02. I am reminded of the argument in Matrix Philosophy that if 
we cannot argue that our experiences are *not* simulations then we might 
as well bet that they are. While I have found that there are upper 
bounds on computationally based content via logical arguments such as 
David Deutsch's CANTGOTO 
(http://en.wikipedia.org/wiki/The_Fabric_of_Reality) and Carlton Caves' 
research on computational resources 
(http://arxiv.org/abs/quant-ph/0304083), it seems to me that we have 
sufficient evidence to argue that if it is possible for a being to have 
a 1p associated with it, then we might as well bet that it does. So I am 
betting that zombies do not exist. One simply cannot remain agnostic on 
this issue.


Onward!

Stephen


On 1/16/2012 1:27 PM, John Clark wrote:
On Mon, Jan 16, 2012 at 11:50 AM, Craig Weinberg 
whatsons...@gmail.com wrote:


  I think that I have a workable and useful notion of zombie.
[...]  Zombie describes something which seems like it could be
conscious from the outside (ie to a human observer) but actually
is not.


As I have absolutely no way of directly determining if a zombie is 
actually conscious or actually is not, then despite your claim your 
notion is neither workable nor useful.


 John K Clark





Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread John Clark
On Mon, Jan 16, 2012 at 1:08 PM, Bruno Marchal marc...@ulb.ac.be wrote:

So you believe that the theory according to which consciousness is a gift
 by a creationist God is as bad as the theory according to which
 consciousness is related to brain activity?


If creationists could explain consciousness then I would be a creationist,
but they cannot. Brain activity does not explain consciousness either. I
don't know how, but I believe as certainly as I believe anything that
intelligence causes consciousness. I believe this not because I can prove
it but because I simply could not function if I thought I was the only
conscious being in the universe.

 the quanta do not exist primitively but emerge, in the comp case, from
 number relations.


What sort of numbers, computable numbers or the far more common
non-computable numbers? And what sort of relations?



 This means that indeed we can write simple programs leading to
 intelligence


I don't know what that simple program could be, but I have already given an
example of a simple program leading to emotion.


 My whole point is that intelligence is not a constructive concept, like
 consciousness you cannot define it.


Intelligence is problem solving; not a perfect definition by any means but
far far better than any known definition of consciousness.  Examples are
better than definitions anyway, intelligence is what Einstein did and
consciousness is what I am.

 John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
 Craig,

 Do you have an opinion regarding the possibility of Strong AI, and the
 other questions I posed in my earlier post?


Sorry Jason, I didn't see your comment earlier.

On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
 On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.com
 wrote:

Thought I'd throw this out there. If computationalism argues that
zombies can't exist,

   I think the two ideas zombies are impossible and computationalism are
   independent.  Where you might say they are related is that a disbelief in
   zombies yields a strong argument for computationalism.

  I don't think that it's possible to say that any two ideas 'are'
  independent from each other.

 Okay.  Perhaps 'independent' was not an ideal term, but computationalism is
 at least not dependent on an argument against zombies, as far as I am aware.

What computationalism does depend on, though, is the same view of
consciousness that zombies would disqualify.


  All ideas can be related through semantic
  association, however distant. As far as your point though, of course I
  see the opposite relation - while admitting even the possibility of
  zombies suggests computationalism is founded on illusion, but a
  disbelief in zombies gives no more support for computationalism than
  it does for materialism or panpsychism.

 If one accepts that zombies are impossible, then to reject computationalism
 requires also rejecting the possibility of Strong AI 
 (https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).

What I'm saying is that if one accepts that zombies are impossible,
then to accept computationalism requires accepting that *all* AI is
strong already.












therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.

   I think there is a subtle difference in meaning between "it is impossible
   to create something that acts like a person which is not a person" and
   saying "Zombies are not Turing emulable".  It is important to remember
  that
   the non-possibility of zombies doesn't imply a particular person or thing
   cannot be emulated, rather it means there is a particular consequence of
   certain Turing emulations which is unavoidable, namely the
   consciousness/mind/person.

  That's true, in the sense that emulable can only refer to a specific
  natural and real process being emulated rather than a fictional one.
  You have a valid point that the word emulable isn't the best term, but
  it's a red herring since the point I was making is that it would not
  be possible to avoid creating sentience in any sufficiently
  sophisticated cartoon, sculpture, or graphic representation of a
  person. Call it emulation, simulation, synthesis, whatever, the result
  is the same.

 I think you and I have different mental models for what is entailed by
 emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
 projections, and so on, don't necessarily compute anything (or at least,
 what they might depict as being computed can have little or no relation to
 what is actually computed by said cartoon, sculpture, recording,
 projection...).  For actual computation you need counterfactual conditions.
 A cartoon depicting an AND gate is not required to behave as a genuine AND
 gate would, and flashing a few frames depicting what such an AND gate might
 do is not equivalent to the logical decision of an AND gate.
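
[Editor's note: Jason's counterfactual point can be made concrete with a
minimal sketch. The names below (and_gate, CartoonAndGate) are
illustrative, not from the thread: a genuine gate's output depends
counterfactually on its inputs, while a "cartoon" merely replays a fixed
sequence of frames whatever the inputs are.]

```python
# A genuine AND gate: its output depends counterfactually on its inputs --
# change an input and the output changes accordingly.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

# A "cartoon" of an AND gate: a fixed playback of depicted outputs.
class CartoonAndGate:
    def __init__(self):
        self.frames = [False, False, False, True]  # pre-recorded depiction
        self.step = 0

    def show(self, a: bool, b: bool) -> bool:
        # The inputs are ignored entirely -- no counterfactual dependence.
        frame = self.frames[self.step % len(self.frames)]
        self.step += 1
        return frame

inputs = [(True, True), (True, True), (False, True), (True, True)]
cartoon = CartoonAndGate()
real = [and_gate(a, b) for a, b in inputs]          # tracks the inputs
depicted = [cartoon.show(a, b) for a, b in inputs]  # ignores the inputs
print(real)      # [True, True, False, True]
print(depicted)  # [False, False, False, True]
```

On an input sequence that happens to match its recorded frames the
cartoon can look right; probing it with different inputs is what exposes
that no AND computation is actually taking place.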

I understand what you think I mean, but you're strawmanning my point.
An AND gate is a generalizable concept. We know that. Its logic can
be enacted in many (but not every) different physical forms. If we
built the Lego AND mechanism seen here:
http://goldfish.ikaruga.co.uk/andnor.html#
and attached each side to an effector which plays a cartoon of a
semiconductor AND gate, then you would have a cartoon which
simulates an AND gate. The cartoon would be two separate cartoons in
reality, and the logic between them would be entirely inferred by the
audience, but this apparatus could be interpreted by the audience as a
functional simulation. The audience can jump to the conclusion that
the cartoon is a semiconductor AND gate. This is all that Strong AI
will ever be.

Computationalism assumes that consciousness is a generalizable
concept, but we don't know that is true. My view is that it is not
true, since we know that computation itself is not even generalizable
to all physical forms. You can't build a computer without any solid
materials. You can't build it out of uncontrollable living organisms.
There are physical constraints even on what can function as a simple
AND gate. It has no existence in a vacuum or a liquid or gas.

Just as basic logic functions are impossible under those ordinary
physically disorganized 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 1:42 pm, Stephen P. King stephe...@charter.net wrote:
 Hi,

      My $.02. I am reminded of the argument in Matrix Philosophy that if
 we cannot argue that our experiences are *not* simulations then we might
 as well bet that they are. While I have found that there are upper
 bounds on computational based content via logical arguments such as
 David Deutsch's CANTGOTO
 (http://en.wikipedia.org/wiki/The_Fabric_of_Reality) and Carlton Caves'
 (http://arxiv.org/abs/quant-ph/0304083) research on computational
 resources, it seems to me that we have sufficient evidence to argue that
 if it is possible for a being to have 1p associated with it, then we
 might as well bet that they do. So I am betting that zombies do not
 exist. One simply cannot remain an agnostic on this issue.

 Onward!

 Stephen


I think the problem is that the zombie has the 1p of whatever is doing
the computation, not of the living cells and organs of a living
person. I think everything has a 1p experience, it's just that human
1p is a lot different from the 1p of our zoological, biological,
chemical, and physical subselves. A zombie talks the talk, but it
doesn't walk the walk. It's just a puppet which walks some other walk
which it has no awareness of.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Stephen P. King

On 1/16/2012 2:02 PM, Craig Weinberg wrote:

On Jan 16, 1:42 pm, Stephen P. King stephe...@charter.net wrote:

Hi,

  My $.02. I am reminded of the argument in Matrix Philosophy that if
we cannot argue that our experiences are *not* simulations then we might
as well bet that they are. While I have found that there are upper
bounds on computational based content via logical arguments such as
David Deutsch's CANTGOTO
(http://en.wikipedia.org/wiki/The_Fabric_of_Reality) and Carlton Caves'
(http://arxiv.org/abs/quant-ph/0304083) research on computational
resources, it seems to me that we have sufficient evidence to argue that
if it is possible for a being to have 1p associated with it, then we
might as well bet that they do. So I am betting that zombies do not
exist. One simply cannot remain an agnostic on this issue.

Onward!

Stephen


I think the problem is that the zombie has the 1p of whatever is doing
the computation, not of the living cells and organs of a living
person. I think everything has a 1p experience, it's just that human
1p is a lot different from the 1p of our zoological, biological,
chemical, and physical subselves. A zombie talks the talk, but it
doesn't walk the walk. It's just a puppet which walks some other walk
which it has no awareness of.

Craig


Hi Craig,

The 1p is something that can have differences in degree, not in kind; 
thus your argument is a bit off. Zombies simply do not exist.


Onward!

Stephen




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 2:22 pm, Stephen P. King stephe...@charter.net wrote:

 Hi Craig,

      The 1p is something that can have differences in degree, not in kind;
 thus your argument is a bit off. Zombies simply do not exist.

The degree of 1p is always qualitative though, that's how it's
different from 3p. This text is a zombie of my thoughts and
intentions. You see my meaning in it, but it has no meaning by itself.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Stephen P. King

Hi Craig,

On that we agree.

Onward!

Stephen


On 1/16/2012 3:33 PM, Craig Weinberg wrote:

On Jan 16, 2:22 pm, Stephen P. King stephe...@charter.net wrote:


Hi Craig,

  The 1p is something that can have differences in degree, not in kind;
thus your argument is a bit off. Zombies simply do not exist.

The degree of 1p is always qualitative though, that's how it's
different from 3p. This text is a zombie of my thoughts and
intentions. You see my meaning in it, but it has no meaning by itself.

Craig






Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Jason Resch
On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
  Craig,
 
  Do you have an opinion regarding the possibility of Strong AI, and the
  other questions I posed in my earlier post?
 

 Sorry Jason, I didn't see your comment earlier.

 On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
  On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
   wrote:
 
 Thought I'd throw this out there. If computationalism argues that
 zombies can't exist,
 
 I think the two ideas "zombies are impossible" and "computationalism"
  are
independent.  Where you might say they are related is that a
 disbelief in
zombies yields a strong argument for computationalism.
 
   I don't think that it's possible to say that any two ideas 'are'
   independent from each other.
 
  Okay.  Perhaps 'independent' was not an ideal term, but computationalism
 is
  at least not dependent on an argument against zombies, as far as I am
 aware.

 What computationalism does depend on though is the same view of
 consciousness that zombies would disqualify.

 
   All ideas can be related through semantic
   association, however distant. As far as your point though, of course I
   see the opposite relation - while admitting even the possibility of
    zombies suggests computationalism is founded on illusion, but a
   disbelief in zombies gives no more support for computationalism than
   it does for materialism or panpsychism.
 
  If one accepts that zombies are impossible, then to reject
 computationalism
  requires also rejecting the possibility of Strong AI (
 https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).

 What I'm saying is that if one accepts that zombies are impossible,
 then to accept computationalism requires accepting that *all* AI is
 strong already.


Strong AI is an AI capable of any task that a human is capable of.  I am
not aware of any AI that fits this definition.



 
 
 
 
 
 
 
 
 
 
 
 therefore anything that we cannot distinguish
 from a conscious person must be conscious, that also means that it
 is
 impossible to create something that acts like a person which is
 not a
 person. Zombies are not Turing emulable.
 
I think there is a subtle difference in meaning between it is
 impossible
to create something that acts like a person which is not a person
 and
saying Zombies are not Turing emulable.  It is important to
 remember
   that
the non-possibility of zombies doesn't imply a particular person or
 thing
cannot be emulated, rather it means there is a particular
 consequence of
certain Turing emulations which is unavoidable, namely the
consciousness/mind/person.
 
   That's true, in the sense that emulable can only refer to a specific
   natural and real process being emulated rather than a fictional one.
   You have a valid point that the word emulable isn't the best term, but
   it's a red herring since the point I was making is that it would not
   be possible to avoid creating sentience in any sufficiently
   sophisticated cartoon, sculpture, or graphic representation of a
   person. Call it emulation, simulation, synthesis, whatever, the result
   is the same.
 
  I think you and I have different mental models for what is entailed by
  emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
  projections, and so on, don't necessarily compute anything (or at least,
  what they might depict as being computed can have little or no relation
 to
  what is actually computed by said cartoon, sculpture, recording,
  projection...).  For actual computation you need counterfactual
 conditions.
  A cartoon depicting an AND gate is not required to behave as a genuine
 AND
  gate would, and flashing a few frames depicting what such an AND gate
 might
  do is not equivalent to the logical decision of an AND gate.
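The distinction Jason draws here (that actual computation requires counterfactual conditions) can be sketched in a few lines of Python. This is an editorial illustration, not anything from the thread; all names are invented for the example:

```python
# A genuine AND gate: defined for every possible input, so it answers
# correctly even for inputs it has never actually been presented with.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

# A "cartoon" of an AND gate: a replay of a few recorded frames. It can
# only echo the episodes that were drawn; other inputs have no answer.
cartoon_frames = {(True, True): True, (False, True): False}

def cartoon_and(a: bool, b: bool) -> bool:
    try:
        return cartoon_frames[(a, b)]
    except KeyError:
        raise LookupError("no frame was ever drawn for this input")

# The gate computes; the cartoon merely depicts.
print(and_gate(False, False))  # the counterfactual case is still answered
```

A call to `cartoon_and(False, False)`, by contrast, fails: the cartoon satisfies no counterfactual beyond its script, which is the sense in which flashing a few frames depicting an AND gate is not the logical decision of one.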

 I understand what you think I mean, but you're strawmanning my point.
  An AND gate is a generalizable concept. We know that. Its logic can
 be enacted in many (but not every) different physical forms. If we
 built the Lego AND mechanism seen here:
 http://goldfish.ikaruga.co.uk/andnor.html#


This page did not load for me..


  and attached each side to an effector which plays a cartoon of a
  semiconductor AND gate, then you would have a cartoon which
  simulates an AND gate. The cartoon would be two separate cartoons in
 reality, and the logic between them would be entirely inferred by the
 audience, but this apparatus could be interpreted by the audience as a
 functional simulation. The audience can jump to the conclusion that
 the cartoon is a semiconductor AND gate. This is all that Strong AI
 will ever be.

 Computationalism assumes that consciousness is a generalizable
 concept, but we don't know that is true. My view is that it is not
 true, since we know that computation itself is not even generalizable
 to all 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Craig Weinberg
On Jan 16, 10:26 pm, Jason Resch jasonre...@gmail.com wrote:
 On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg whatsons...@gmail.com wrote:

  On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
   Craig,

   Do you have an opinion regarding the possibility of Strong AI, and the
   other questions I posed in my earlier post?

  Sorry Jason, I didn't see your comment earlier.

  On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
   On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.com
  wrote:
wrote:

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist,

 I think the two ideas "zombies are impossible" and "computationalism"
  are
 independent.  Where you might say they are related is that a
  disbelief in
 zombies yields a strong argument for computationalism.

I don't think that it's possible to say that any two ideas 'are'
independent from each other.

   Okay.  Perhaps 'independent' was not an ideal term, but computationalism
  is
   at least not dependent on an argument against zombies, as far as I am
  aware.

  What computationalism does depend on though is the same view of
  consciousness that zombies would disqualify.

All ideas can be related through semantic
association, however distant. As far as your point though, of course I
see the opposite relation - while admitting even the possibility of
zombies suggests computationalism is founded on illusion, but a
disbelief in zombies gives no more support for computationalism than
it does for materialism or panpsychism.

   If one accepts that zombies are impossible, then to reject
  computationalism
   requires also rejecting the possibility of Strong AI (
 https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).

  What I'm saying is that if one accepts that zombies are impossible,
  then to accept computationalism requires accepting that *all* AI is
  strong already.

 Strong AI is an AI capable of any task that a human is capable of.  I am
 not aware of any AI that fits this definition.

What I'm saying though is that computationalism implies that whatever
task is being done by AI that a human can also do is the same. If AI
can print the letters 'y-e-s', then it must be no different from
person answering yes. What I'm saying is that makes all AI strong,
just incomplete.




  therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that it
  is
  impossible to create something that acts like a person which is
  not a
  person. Zombies are not Turing emulable.

 I think there is a subtle difference in meaning between it is
  impossible
 to create something that acts like a person which is not a person
  and
 saying Zombies are not Turing emulable.  It is important to
  remember
that
 the non-possibility of zombies doesn't imply a particular person or
  thing
 cannot be emulated, rather it means there is a particular
  consequence of
 certain Turing emulations which is unavoidable, namely the
 consciousness/mind/person.

That's true, in the sense that emulable can only refer to a specific
natural and real process being emulated rather than a fictional one.
You have a valid point that the word emulable isn't the best term, but
it's a red herring since the point I was making is that it would not
be possible to avoid creating sentience in any sufficiently
sophisticated cartoon, sculpture, or graphic representation of a
person. Call it emulation, simulation, synthesis, whatever, the result
is the same.

   I think you and I have different mental models for what is entailed by
   emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
   projections, and so on, don't necessarily compute anything (or at least,
   what they might depict as being computed can have little or no relation
  to
   what is actually computed by said cartoon, sculpture, recording,
   projection...).  For actual computation you need counterfactual
  conditions.
   A cartoon depicting an AND gate is not required to behave as a genuine
  AND
   gate would, and flashing a few frames depicting what such an AND gate
  might
   do is not equivalent to the logical decision of an AND gate.

  I understand what you think I mean, but you're strawmanning my point.
  An AND gate is a generalizable concept. We know that. Its logic can
  be enacted in many (but not every) different physical forms. If we
  built the Lego AND mechanism seen here:
 http://goldfish.ikaruga.co.uk/andnor.html#

 This page did not load for me..

Weird. Can you see a pic from it? 
http://goldfish.ikaruga.co.uk/legopics/newand11.jpg


  and attached each side to an effector which plays a cartoon of a
  semiconductor AND gate, then you would have a cartoon which
  simulates an AND gate. The cartoon would be two separate cartoons in
  reality, and the 

Re: Consciousness Easy, Zombies Hard

2012-01-16 Thread Jason Resch
On Mon, Jan 16, 2012 at 10:29 PM, Craig Weinberg whatsons...@gmail.com wrote:

 On Jan 16, 10:26 pm, Jason Resch jasonre...@gmail.com wrote:
  On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
 
   On Jan 16, 12:15 pm, Jason Resch jasonre...@gmail.com wrote:
Craig,
 
Do you have an opinion regarding the possibility of Strong AI, and
 the
other questions I posed in my earlier post?
 
   Sorry Jason, I didn't see your comment earlier.
 
   On Jan 15, 2:45 am, Jason Resch jasonre...@gmail.com wrote:
On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg 
 whatsons...@gmail.com
   wrote:
 wrote:
 
   Thought I'd throw this out there. If computationalism argues
 that
   zombies can't exist,
 
  I think the two ideas "zombies are impossible" and
 "computationalism"
   are
  independent.  Where you might say they are related is that a
   disbelief in
  zombies yields a strong argument for computationalism.
 
 I don't think that it's possible to say that any two ideas 'are'
 independent from each other.
 
Okay.  Perhaps 'independent' was not an ideal term, but
 computationalism
   is
at least not dependent on an argument against zombies, as far as I am
   aware.
 
   What computationalism does depend on though is the same view of
   consciousness that zombies would disqualify.
 
 All ideas can be related through semantic
 association, however distant. As far as your point though, of
 course I
 see the opposite relation - while admitting even the possibility of
 zombies suggests computationalism is founded on illusion, but a
 disbelief in zombies gives no more support for computationalism
 than
 it does for materialism or panpsychism.
 
If one accepts that zombies are impossible, then to reject
   computationalism
requires also rejecting the possibility of Strong AI (
  https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).
 
   What I'm saying is that if one accepts that zombies are impossible,
   then to accept computationalism requires accepting that *all* AI is
   strong already.
 
  Strong AI is an AI capable of any task that a human is capable of.  I am
  not aware of any AI that fits this definition.

 What I'm saying though is that computationalism implies that whatever
 task is being done by AI that a human can also do is the same. If AI
 can print the letters 'y-e-s', then it must be no different from
 person answering yes. What I'm saying is that makes all AI strong,
 just incomplete.

 
 
 
   therefore anything that we cannot distinguish
   from a conscious person must be conscious, that also means
 that it
   is
   impossible to create something that acts like a person which is
   not a
   person. Zombies are not Turing emulable.
 
  I think there is a subtle difference in meaning between it is
   impossible
  to create something that acts like a person which is not a
 person
   and
  saying Zombies are not Turing emulable.  It is important to
   remember
 that
  the non-possibility of zombies doesn't imply a particular person
 or
   thing
  cannot be emulated, rather it means there is a particular
   consequence of
  certain Turing emulations which is unavoidable, namely the
  consciousness/mind/person.
 
 That's true, in the sense that emulable can only refer to a
 specific
 natural and real process being emulated rather than a fictional
 one.
 You have a valid point that the word emulable isn't the best term,
 but
 it's a red herring since the point I was making is that it would
 not
 be possible to avoid creating sentience in any sufficiently
 sophisticated cartoon, sculpture, or graphic representation of a
 person. Call it emulation, simulation, synthesis, whatever, the
 result
 is the same.
 
I think you and I have different mental models for what is entailed
 by
emulation, simulation, synthesis.  Cartoons, sculptures,
 recordings,
projections, and so on, don't necessarily compute anything (or at
 least,
what they might depict as being computed can have little or no
 relation
   to
what is actually computed by said cartoon, sculpture, recording,
projection...).  For actual computation you need counterfactual
   conditions.
A cartoon depicting an AND gate is not required to behave as a
 genuine
   AND
gate would, and flashing a few frames depicting what such an AND gate
   might
do is not equivalent to the logical decision of an AND gate.
 
   I understand what you think I mean, but you're strawmanning my point.
   An AND gate is a generalizable concept. We know that. Its logic can
   be enacted in many (but not every) different physical forms. If we
   built the Lego AND mechanism seen here:
  http://goldfish.ikaruga.co.uk/andnor.html#
 
  This page did not load for me..

 Weird. Can you see a pic from it?
 http://goldfish.ikaruga.co.uk/legopics/newand11.jpg


Weird.  I was 

Re: Consciousness Easy, Zombies Hard

2012-01-15 Thread John Clark
 On Sat, Jan 14, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 If computationalism argues that zombies can't exist, therefore anything
 that we cannot distinguish from a conscious person must be conscious, that
 also means that it is impossible to create something that acts like a
 person which is not a person. Zombies are not Turing emulable.


Maybe. Zombie behavior is certainly Turing emulable but you are asking more
than that and there is no way to prove what you want to know because it
hinges on one important question: how can you tell if a zombie is a zombie?
Brains are not my favorite meal but I don't think dietary preference or
even unsightly skin blemishes are a good test for consciousness; I believe
zombies have little if any consciousness because, at least as depicted in
the movies, zombies act really really dumb. But maybe the film industry is
inflicting an unfair stereotype on a persecuted minority and there are good
hard working zombies out there who you don't hear about that write love
poetry and teach at Harvard, if so then I think those zombies are conscious
even if I would still find a polite excuse to decline their invitation to
dinner.


  This to me reveals an absurdity of arithmetic realism. Pinocchio the boy
 is possible to simulate mechanically, but Pinocchio the puppet is
 impossible. Doesn't that strike anyone else as an obvious deal breaker?


I find nothing absurd about that and neither did Evolution. The parts of
our brain that so dramatically separate us from other animals, the parts
that deal with language and long-term planning and mathematics took HUNDREDS
of times longer to evolve than the parts responsible for intense emotion
like pleasure, pain, fear, hate, jealousy and love. And why do you think it
is that in this group and elsewhere everybody and their brother is pushing
their own General Theory of Consciousness  but nobody even attempts a
General Theory of Intelligence?  The reason is that theorizing about the
one is easy but  theorizing about the other is hard, hellishly hard, and
because when intelligence theories fail they fail with a loud thud that is
obvious to all, but one consciousness theory works as well, or as badly, as
any other. Consciousness theories are easy because there are no facts they
need to explain, but there is an astronomical number of things that need to
be explained to understand how intelligence works.

  John K Clark




Re: Consciousness Easy, Zombies Hard

2012-01-15 Thread Quentin Anciaux
2012/1/14 Craig Weinberg whatsons...@gmail.com

 Thought I'd throw this out there. If computationalism argues that
 zombies can't exist, therefore anything that we cannot distinguish
 from a conscious person must be conscious, that also means that it is
 impossible to create something that acts like a person which is not a
 person. Zombies are not Turing emulable.


No, zombies *that are persons in every aspect* are impossible. Not only not
Turing emulable... they are absurd.



 If we run the zombie argument backwards then, at what substitution
 level of zombiehood does a (completely possible) simulated person
 become an (non-Turing emulable) unconscious puppet? How bad of a
 simulation does it have to be before becoming an impossible zombie?

 This to me reveals an absurdity of arithmetic realism. Pinocchio the
 boy is possible to simulate mechanically, but Pinocchio the puppet is
 impossible.


You conflate two (maybe more) notions of zombie... the only one important
in the zombie argument is this: something that acts like a person in
every aspect but nonetheless is not conscious... If it is indeed what
you mean, then could you devise a test that could show that the zombie
indeed lacks consciousness (remember that *by definition* you cannot tell
apart the zombie and a real conscious person).

Quentin


 Doesn't that strike anyone else as an obvious deal breaker?





-- 
All those moments will be lost in time, like tears in rain.




Re: Consciousness Easy, Zombies Hard

2012-01-15 Thread Craig Weinberg

On Jan 15, 3:07 pm, Quentin Anciaux allco...@gmail.com wrote:
 2012/1/14 Craig Weinberg whatsons...@gmail.com

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist, therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that it is
  impossible to create something that acts like a person which is not a
  person. Zombies are not Turing emulable.

 No, zombies *that are persons in every aspect* are impossible. Not only not
  Turing emulable... they are absurd.

If you define them that way then the word has no meaning. What is a
person in every aspect that is not at all a person? The only way the
term has meaning is when it is used to define something that appears
to be a person in every way to an outside observer (and that would
ultimately have to be a human observer) but has no interior
experience. That is not absurd at all, and in fact describes
animation, puppetry, and machine intelligence.




  If we run the zombie argument backwards then, at what substitution
  level of zombiehood does a (completely possible) simulated person
  become an (non-Turing emulable) unconscious puppet? How bad of a
  simulation does it have to be before becoming an impossible zombie?

  This to me reveals an absurdity of arithmetic realism. Pinocchio the
  boy is possible to simulate mechanically, but Pinocchio the puppet is
  impossible.

 You conflate two (maybe more) notions of zombie... the only one important
 in the zombie argument is this: something that acts like a person in
 every aspect but nonetheless is not conscious... If it is indeed what
 you mean, then could you devise a test that could show that the zombie
 indeed lacks consciousness (remember that *by definition* you cannot tell
 apart the zombie and a real conscious person).

No, I think that I have a workable and useful notion of zombie. I'm
not sure how the definition you are trying use is meaningful. It seems
like a straw man of the zombie issue. We already know that
subjectivity is private, what we don't know is whether that means that
simulations automatically acquire consciousness or not. The zombie
issue is not to show that we can't imagine a person without
subjectivity and see that as evidence that subjectivity must
inherently arise from function. My point is that it also must mean
that we cannot stop inanimate objects from acquiring consciousness if
they are a sufficiently sophisticated simulation.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-15 Thread Quentin Anciaux
2012/1/16 Craig Weinberg whatsons...@gmail.com


 On Jan 15, 3:07 pm, Quentin Anciaux allco...@gmail.com wrote:
  2012/1/14 Craig Weinberg whatsons...@gmail.com
 
   Thought I'd throw this out there. If computationalism argues that
   zombies can't exist, therefore anything that we cannot distinguish
   from a conscious person must be conscious, that also means that it is
   impossible to create something that acts like a person which is not a
   person. Zombies are not Turing emulable.
 
  No, zombies *that are persons in every aspect* are impossible. Not only
 not
  Turing emulable... they are absurd.

 If you define them that way then the word has no meaning. What is a
 person in every aspect that is not at all a person?


The *only thing* a zombie lacks is consciousness... every other aspect of
a person, it has.


 The only way the
 term has meaning is when it is used to define something that appears
 to be a person in every way to an outside observer (and that would
 ultimately have to be a human observer) but has no interior
 experience. That is not absurd at all, and in fact describes
 animation, puppetry, and machine intelligence.


Puppetry and animations do not act like a person. They act like puppetry
and animations. A philosophical zombie *acts like a person but lacks
consciousness*.


 
 
 
   If we run the zombie argument backwards then, at what substitution
   level of zombiehood does a (completely possible) simulated person
   become an (non-Turing emulable) unconscious puppet? How bad of a
   simulation does it have to be before becoming an impossible zombie?
 
   This to me reveals an absurdity of arithmetic realism. Pinocchio the
   boy is possible to simulate mechanically, but Pinocchio the puppet is
   impossible.
 
  You conflate two (maybe more) notions of zombie... the only one important
  in the zombie argument is this: something that acts like a person in
  every aspect but nonetheless is not conscious... If it is indeed what
  you mean, then could you devise a test that could show that the zombie
  indeed lacks consciousness (remember that *by definition* you cannot tell
  apart the zombie and a real conscious person).

 No, I think that I have a workable and useful notion of zombie. I'm
 not sure how the definition you are trying use is meaningful. It seems
 like a straw man of the zombie issue. We already know that
 subjectivity is private, what we don't know is whether that means that
 simulations automatically acquire consciousness or not. The zombie
 issue is not to show that we can't imagine a person without
 subjectivity and see that as evidence that subjectivity must
 inherently arise from function. My point is that it also must mean
 that we cannot stop inanimate objects from acquiring consciousness if
 they are a sufficiently sophisticated simulation.

 Craig





-- 
All those moments will be lost in time, like tears in rain.




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Jason Resch
On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg whatsons...@gmail.com wrote:

 Thought I'd throw this out there. If computationalism argues that
 zombies can't exist,


I think the two ideas "zombies are impossible" and "computationalism" are
independent.  Where you might say they are related is that a disbelief in
zombies yields a strong argument for computationalism.


 therefore anything that we cannot distinguish
 from a conscious person must be conscious, that also means that it is
 impossible to create something that acts like a person which is not a
 person. Zombies are not Turing emulable.


I think there is a subtle difference in meaning between "it is impossible
to create something that acts like a person which is not a person" and
saying "Zombies are not Turing emulable."  It is important to remember that
the non-possibility of zombies doesn't imply a particular person or thing
cannot be emulated, rather it means there is a particular consequence of
certain Turing emulations which is unavoidable, namely the
consciousness/mind/person.




 If we run the zombie argument backwards then, at what substitution
 level of zombiehood does a (completely possible) simulated person
 become an (non-Turing emulable) unconscious puppet? How bad of a
 simulation does it have to be before becoming an impossible zombie?

 This to me reveals an absurdity of arithmetic realism. Pinocchio the
 boy is possible to simulate mechanically, but Pinocchio the puppet is
 impossible. Doesn't that strike anyone else as an obvious deal breaker?



Not every Turing emulable process is necessarily conscious.

Jason




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread meekerdb

On 1/14/2012 11:38 AM, Craig Weinberg wrote:

Thought I'd throw this out there. If computationalism argues that
zombies can't exist, therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.


No. It only follows that zombies are not Turing emulable if people are not either. But why 
would you suppose people are not emulable?


Brent



If we run the zombie argument backwards then, at what substitution
level of zombiehood does a (completely possible) simulated person
become an (non-Turing emulable) unconscious puppet? How bad of a
simulation does it have to be before becoming an impossible zombie?

This to me reveals an absurdity of arithmetic realism. Pinocchio the
boy is possible to simulate mechanically, but Pinocchio the puppet is
impossible. Doesn't that strike anyone else as an obvious deal breaker?






Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Craig Weinberg
On Jan 14, 4:41 pm, Jason Resch jasonre...@gmail.com wrote:
 On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg whatsons...@gmail.comwrote:

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist,

 I think the two ideas "zombies are impossible" and "computationalism" are
 independent.  Where you might say they are related is that a disbelief in
 zombies yields a strong argument for computationalism.

I don't think that it's possible to say that any two ideas 'are'
independent from each other. All ideas can be related through semantic
association, however distant. As far as your point though, of course I
see the opposite relation: while admitting even the possibility of
zombies suggests computationalism is founded on illusion, a
disbelief in zombies gives no more support for computationalism than
it does for materialism or panpsychism.


  therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that it is
  impossible to create something that acts like a person which is not a
  person. Zombies are not Turing emulable.

 I think there is a subtle difference in meaning between "it is impossible
 to create something that acts like a person which is not a person" and
 saying "Zombies are not Turing emulable."  It is important to remember that
 the non-possibility of zombies doesn't imply a particular person or thing
 cannot be emulated, rather it means there is a particular consequence of
 certain Turing emulations which is unavoidable, namely the
 consciousness/mind/person.

That's true, in the sense that emulable can only refer to a specific
natural and real process being emulated rather than a fictional one.
You have a valid point that the word emulable isn't the best term, but
it's a red herring since the point I was making is that it would not
be possible to avoid creating sentience in any sufficiently
sophisticated cartoon, sculpture, or graphic representation of a
person. Call it emulation, simulation, synthesis, whatever, the result
is the same. You can't make a machine that acts like a person without
it becoming a person automatically. That clearly is ridiculous to me.




  If we run the zombie argument backwards then, at what substitution
  level of zombiehood does a (completely possible) simulated person
  become a (non-Turing emulable) unconscious puppet? How bad of a
  simulation does it have to be before becoming an impossible zombie?

  This to me reveals an absurdity of arithmetic realism. Pinocchio the
  boy is possible to simulate mechanically, but Pinocchio the puppet is
  impossible. Doesn't that strike anyone else as an obvious deal breaker?

 Not every Turing emulable process is necessarily conscious.

Why not? What makes them unconscious? You can't draw the line in one
direction but not the other. If you say that anything that seems to
act alive well enough must be alive, then you also have to say that
anything that does not seem conscious may just be poorly programmed.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Craig Weinberg
On Jan 14, 4:55 pm, meekerdb meeke...@verizon.net wrote:
 On 1/14/2012 11:38 AM, Craig Weinberg wrote:

  Thought I'd throw this out there. If computationalism argues that
  zombies can't exist, therefore anything that we cannot distinguish
  from a conscious person must be conscious, that also means that it is
  impossible to create something that acts like a person which is not a
  person. Zombies are not Turing emulable.

 No. It only follows that zombies are not Turing emulable unless people are 
 too. But why
 would you suppose people are not emulable?

No, I'm assuming for the sake of argument that people are Turing
emulable, but my point is that the proposition that zombies are
impossible means that no Turing simulation of consciousness is
possible that is not actually conscious. It means that I can't make a
Pinocchio program because the 'before' puppet and the 'after' boy must
be the same thing - a boy. There can be no sophisticated, interactive
puppets in computationalism.

Craig




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread meekerdb

On 1/14/2012 7:44 PM, Craig Weinberg wrote:

On Jan 14, 4:55 pm, meekerdbmeeke...@verizon.net  wrote:

On 1/14/2012 11:38 AM, Craig Weinberg wrote:


Thought I'd throw this out there. If computationalism argues that
zombies can't exist, therefore anything that we cannot distinguish
from a conscious person must be conscious, that also means that it is
impossible to create something that acts like a person which is not a
person. Zombies are not Turing emulable.

No. It only follows that zombies are not Turing emulable unless people are too. 
But why
would you suppose people are not emulable?

No, I'm assuming for the sake of argument that people are Turing
emulable, but my point is that the proposition that zombies are
impossible means that no Turing simulation of consciousness is
possible that is not actually conscious. It means that I can't make a
Pinocchio program because the 'before' puppet and the 'after' boy must
be the same thing - a boy. There can be no sophisticated, interactive
puppets in computationalism.


Right, not if they are as sophisticated and interactive as humans and animals we take to 
be conscious.


Brent




Re: Consciousness Easy, Zombies Hard

2012-01-14 Thread Jason Resch
On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg whatsons...@gmail.comwrote:

 On Jan 14, 4:41 pm, Jason Resch jasonre...@gmail.com wrote:
  On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg whatsons...@gmail.com
 wrote:
 
   Thought I'd throw this out there. If computationalism argues that
   zombies can't exist,
 
  I think the two ideas "zombies are impossible" and "computationalism" are
  independent.  Where you might say they are related is that a disbelief in
  zombies yields a strong argument for computationalism.

 I don't think that it's possible to say that any two ideas 'are'
 independent from each other.


Okay.  Perhaps 'independent' was not an ideal term, but computationalism is
at least not dependent on an argument against zombies, as far as I am aware.


 All ideas can be related through semantic
 association, however distant. As far as your point though, of course I
 see the opposite relation: while admitting even the possibility of
 zombies suggests computationalism is founded on illusion, a
 disbelief in zombies gives no more support for computationalism than
 it does for materialism or panpsychism.


If one accepts that zombies are impossible, then to reject computationalism
requires also rejecting the possibility of Strong AI (
https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI ).



 
   therefore anything that we cannot distinguish
   from a conscious person must be conscious, that also means that it is
   impossible to create something that acts like a person which is not a
   person. Zombies are not Turing emulable.
 
  I think there is a subtle difference in meaning between "it is impossible
  to create something that acts like a person which is not a person" and
  saying "Zombies are not Turing emulable."  It is important to remember
 that
  the non-possibility of zombies doesn't imply a particular person or thing
  cannot be emulated, rather it means there is a particular consequence of
  certain Turing emulations which is unavoidable, namely the
  consciousness/mind/person.

 That's true, in the sense that emulable can only refer to a specific
 natural and real process being emulated rather than a fictional one.
 You have a valid point that the word emulable isn't the best term, but
 it's a red herring since the point I was making is that it would not
 be possible to avoid creating sentience in any sufficiently
 sophisticated cartoon, sculpture, or graphic representation of a
 person. Call it emulation, simulation, synthesis, whatever, the result
 is the same.


I think you and I have different mental models for what is entailed by
emulation, simulation, synthesis.  Cartoons, sculptures, recordings,
projections, and so on, don't necessarily compute anything (or at least,
what they might depict as being computed can have little or no relation to
what is actually computed by said cartoon, sculpture, recording, or
projection).  For actual computation you need counterfactual conditions.
A cartoon depicting an AND gate is not required to behave as a genuine AND
gate would, and flashing a few frames depicting what such an AND gate might
do is not equivalent to the logical decision of an AND gate.
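The counterfactual point can be made concrete in a few lines of Python (a
hypothetical illustration; the names are invented for this sketch, not taken
from the thread): a genuine AND gate answers correctly for every possible
input, while a "cartoon" merely replays one prerecorded frame and has no
defined behavior for inputs that were never filmed.

```python
def and_gate(a: bool, b: bool) -> bool:
    """A real AND gate: defined over all four input combinations."""
    return a and b

# A cartoon of an AND gate: a fixed film strip.  It depicts the case
# (True, True) -> True, and nothing else.
cartoon_frames = [(True, True, True)]

def cartoon_playback(a: bool, b: bool):
    """Only 'answers' if the question happens to match the recording."""
    for fa, fb, out in cartoon_frames:
        if (fa, fb) == (a, b):
            return out
    return None  # no counterfactual behavior: undefined for other inputs

# The real gate handles counterfactual inputs; the cartoon does not.
assert and_gate(False, True) is False
assert cartoon_playback(False, True) is None
```

The difference is exactly the counterfactual condition: the function would
have behaved correctly for inputs it never actually received, whereas the
recording has no answer at all for them.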


 You can't make a machine that acts like a person without
 it becoming a person automatically. That clearly is ridiculous to me.


What do you think about Strong AI, do you think it is possible?  If so, if
the program that creates a strong AI were implemented on various
computational substrates, silicon, carbon nanotubes, pen and paper, pipes
and water, do you think any of them would yield a mind that is conscious?
If yes, do you think the content of that AI's consciousness would differ
depending on the substrate?  And finally, if you believe at least some
substrates would be conscious, are there any cases where the AI would
respond or behave differently on one substrate or the other (in terms of
the Strong AI program's output) when given equivalent input?



 
 
 
   If we run the zombie argument backwards then, at what substitution
   level of zombiehood does a (completely possible) simulated person
    become a (non-Turing emulable) unconscious puppet? How bad of a
   simulation does it have to be before becoming an impossible zombie?
 
   This to me reveals an absurdity of arithmetic realism. Pinocchio the
   boy is possible to simulate mechanically, but Pinocchio the puppet is
   impossible. Doesn't that strike anyone else as an obvious deal breaker?
 
  Not every Turing emulable process is necessarily conscious.

 Why not? What makes them unconscious?


My guess is that it would be a lack of sophistication.  For example, one
program might simply consist of a for loop iterating from 1 to 10.  Is this
program conscious?  I don't know, but it almost certainly isn't conscious
in the way you or I are.
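The trivial program in question fits in a couple of lines (a minimal Python
sketch): it is unquestionably Turing emulable, yet it is hard to see what
such a computation could be conscious of.

```python
# A for loop iterating from 1 to 10, as described above.
outputs = []
for i in range(1, 11):  # i takes the values 1 through 10
    outputs.append(i)
print(outputs)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```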


 You can't draw the line in one
 direction but not the other. If you say that anything that seems to
 act alive well enough must be alive, then you also have to say that
 anything that does