On Aug 8, 8:50 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Mon, Aug 8, 2011 at 11:13 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> > No. You have it backwards from the start. There is no such thing as
> >> > 'behaving like a person'. There is only a person interpreting
> >> > something's behavior as being like a person. There is no power
> >> > emanating from a thing that makes it person-like. If you understand
> >> > this you will know because you will see that the whole question is a
> >> > red herring. If you don't see that, you do not understand what I'm
> >> > saying.
> >> "Interpreting something's behaviour as being like a [person's]" is
> >> what I mean by "behaving like a person".
> > I know that's what you mean, but I'm trying to explain why those two
> > phrases are polar opposites in this context, because the whole thread
> > is about the difference between subjectivity and objectivity. If a
> > chip could behave like a person, then we wouldn't be having this
> > conversation right now. We'd be hanging out with our digital friends
> > instead. Every chip we make would have its own perspective and do
> > what it wanted to do, like an infant or a pollywog would. If we want
> > to make a chip that impersonates something that does have its own
> > perspective and does what it wants to, then we can try to do that with
> > varying levels of success depending upon who you are trying to fool,
> > how you are trying to fool them, and for how long. The fact that any
> > particular person interprets the thing as being alive or conscious for
> > some period of time is not the same thing as the thing being actually
> > alive or conscious.
> The chip is not alive because it doesn't meet a definition for life.
> It may or may not be conscious - that isn't obvious and it is what we
> are arguing about. However, it may objectively behave like a living or
> conscious entity. For example, if it seeks food and reproduces it is
> behaving like a living thing even though it isn't, and if it has a
> conversation with you about its feelings and desires it is behaving
> like a conscious thing even though it isn't.

At the top you are saying that there is a definition of life to be
met, but then you are saying that there are behaviors which are
'objectively' living or conscious. The two assertions are mutually
exclusive and both in opposition to my view. If life can be observed
as objective behaviors, then it doesn't need a definition, it just is
observably either alive or it isn't. If it needs a definition then you
admit that life cannot be determined objectively and must be defined
subjectively - guessed at.

What I'm saying is completely different. I am taking the latter view
and going much further: not only is life defined subjectively, but
that definition is based upon perceived isomorphism, as a general
principle applying to all phenomena in the universe. As living
creatures, we recognize other phenomena as living creatures to the
extent that they remind us of ourselves and our own
behaviors. This would normally serve us well, except when hijacked by
intentional technological impersonations designed to remind us of our
own behaviors.

> I don't think the phrase "does what it wants to do" adds anything to
> the discussion if you say that only a conscious thing can do what it
> wants to do - it is back to arguing whether something is conscious.

We can't say whether a chip does what it wants to do, but the fact that
it must be programmed by an outside source if it is to do anything
would suggest that it either cannot do what it wants or that it cannot
want to do much. A chip without firmware or software won't ever learn,
grow, or change itself.

> >> >> Then it would be
> >> >> possible to replace parts of your brain with non-conscious components
> >> >> that function otherwise normally, which would lead to you lacking some
> >> >> important aspect of consciousness but being unaware of it. This
> >> >> is absurd, but it is a corollary of the claim that it is possible to
> >> >> separate consciousness from function. Therefore, the claim that it is
> >> >> possible to separate consciousness from function is shown to be false.
> >> >> If you don't accept this then you allow what you have already admitted
> >> >> is an absurdity.
> >> > It's a strawman of consciousness that is employed in circular
> >> > thinking. You assume that consciousness is a behavior from the
> >> > beginning and then use that fallacy to prove that behavior can't be
> >> > separated from consciousness. Consciousness drives behavior and vice
> >> > versa, but each extends beyond the limits of the other.
> >> No, I do NOT assume that consciousness follows from behaviour (and
> >> certainly not that it IS behaviour) from the beginning!! I've lost
> >> count of the number of times I have said "assume that it has the
> >> behaviour, but not the consciousness, of a brain component". How can I
> >> make it clearer? What other language can I use to convey that the
> >> thing is unconscious but to an external observer, who can't know its
> >> subjective states, it does the same sorts of mechanical things as its
> >> conscious counterpart?
> > Isn't the whole point of the gradual neuron substitution example to
> > prove that consciousness must be behavior? That if the behavior of
> > the neurons is the same, and accepted as the same, then the conscious
> > experience of the brain as a whole must be the same? Sorry if I'm not
> > getting your position right, and it is a subtle thing to try to
> > dissect. I think the word 'behavior' implies a certain level of
> > normative repetition which is not sufficient to describe the ability
> > of neurological awareness to choose whether to respond in the same way
> > or a new and unpredictable way. When you look at what neurons are
> > actually like, I think the idea of them having a finite set of
> > behaviors is not realistic. It's like saying that because speech can
> > be translated into words and letters, that words and letters should be
> > able to automatically produce the voice of their speakers.
> I *assume* that behaviour and consciousness can be separated and show
> that it leads to absurdity. This means that the initial assumption was
> wrong. If you disagree you can try to show that the assumption does
> not in fact lead to absurdity, but you haven't attempted to do that.
> Instead, you restate your own assumption.

The initial assumption is a priori absurd, so it follows that its
consequences would be as well. Consciousness can drive neurological
behavior (voluntary movement) and behavior can drive consciousness
(psychoactive drugs) but consciousness also experiences things that
are not behavior (qualia) and neurons have behaviors that our
consciousness does not experience (we can't count our own neurons from
the inside).

You're trying to frame the question so that it can only be answered
the way that you have set it up to be answered. It's a semantic
argument that has no real connection to the reality of the phenomena
we're talking about. The reality of subjectivity does not fit into
conventional logic. Consciousness is the source of logic, not the
other way around.

> The form of argument is similar to assuming that sqrt(2) is rational
> and showing that this assumption leads to contradiction, therefore
> sqrt(2) cannot be rational. The only way to respond to this argument
> if you disagree is to show that there is some error in the logic,
> otherwise you *have* to accept it, even if you don't like it and you
> have conceptual difficulties with irrational numbers.
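(For reference, a minimal sketch of the reductio form being invoked,
stated in standard notation:

\[
\sqrt{2} = \frac{p}{q},\ \gcd(p,q) = 1
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; 2 \mid p
\;\Rightarrow\; p = 2k
\;\Rightarrow\; q^2 = 2k^2
\;\Rightarrow\; 2 \mid q,
\]

which contradicts \(\gcd(p,q) = 1\), so the assumption that \(\sqrt{2}\)
is rational must be false.)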

No, I don't have to accept it. Consciousness is not accessible through
mathematical logic alone. When you insist beforehand that it must be,
you poison the result and are forced into absurdity. You cannot prove to
me that you exist. If you accept that that means you don't exist, then
you have accepted that your own ability to accept or reject any
proposition is itself invalid.

> As for neurons having a finite set of behaviours, of course they do.
> It is a theorem in physics that a certain volume of space has an upper
> limit of information it can contain:
> http://en.wikipedia.org/wiki/Bekenstein_bound
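(For concreteness, a minimal statement of that bound, with illustrative
numbers that are my own assumptions rather than anything claimed above:
in bits,

\[
I \le \frac{2 \pi R E}{\hbar c \ln 2},
\]

where \(R\) is the radius of a sphere enclosing the system and \(E\) its
total energy. For a brain-sized system, taking \(R \approx 0.1\) m and
\(E = mc^2\) with \(m \approx 1.4\) kg, this works out to roughly
\(10^{42}\) bits.)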

There is no limit to the combinations of behaviors they can have over
time, though. There is a finite alphabet, but there is no limit to the
possibilities of what can be written. Even the alphabet can be changed
and expanded within the written text. New, unforeseeable behaviors are
always possible.
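(A minimal sketch of that point in formal terms: for a finite alphabet
\(\Sigma\) with \(|\Sigma| = k\), the set of all finite strings

\[
\Sigma^* = \bigcup_{n=0}^{\infty} \Sigma^n
\]

is countably infinite, so a finite repertoire of symbols still admits
an unbounded space of possible sequences over time.)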

> The number of mental states it is possible to have is way, way lower
> than the limit placed by the Bekenstein bound, since most possible
> configurations of the matter in the brain do not result in thought,
> and since tiny changes in the configuration of neurons do not result
> in changes in thought or else the brain would be too unstable.

You're assuming that there is a finite definition of a 'mental state'.
There isn't. It's not a computer, it's a living organism discovering
and inventing what has never before been experienced in a particular
moment.

> >> >> > The human race has already been supplanted by a superhuman AI. It's
> >> >> > called law and finance.
> >> >> They are not entities and not intelligent, let alone intelligent in
> >> >> the way humans are.
> >> > What makes you think that law and finance are any less intelligent than
> >> > a contemporary AI program?
> >> Law and finance are abstractions. A computer may be programmed to
> >> solve financial problems, and then it has a limited intelligence, but
> >> it's incorrect to say that "finance" is therefore intelligent.
> > Computer programming languages are abstractions too. Law and finance
> > are machine logics that program the computer of civilization, and as
> > such, no more or less intelligent than any other machine.
> Law and finance are not "machine logics" or programming languages.

Why not?

> >> > When you say that intelligence can 'fake' non-intelligence, you imply
> >> > an internal experience (faking is not an external phenomenon).
> >> > Intelligence is a broad, informal term. It can mean subjectivity,
> >> > intersubjectivity, or objective behavior, although I would say not
> >> > truly objective but intersubjectively imagined as objective. I agree
> >> > that consciousness or awareness is different from any of those
> >> > definitions of intelligence which would actually be categories of
> >> > awareness. I would not say that a zombie is intelligent. Intelligence
> >> > implies understanding, which is internal. What a computer or a zombie
> >> > has is intelliform mechanism.
> >> If a computer or zombie can solve the same wide range of problems as a
> >> human then it is ipso facto as intelligent as a human. If you discover
> >> that your friend whom you have known for twenty years is actually a
> >> robot you may doubt in the light of this knowledge that he is
> >> conscious, but you can't doubt that he is intelligent, since that is
> >> based purely on your observations of his behaviour and not on internal
> >> state.
> > Yes, that's one usage of the word intelligent, definitely. It's not
> > that simple though if we are getting down to issues of subjectivity
> > and consciousness. A language translator can compare canned
> > definitions of words and spit out correlations which are useful to us
> > as users of the translator, but they are of no use to the translator
> > itself. The machine doesn't care if it's right or wrong, but we do. To
> > me, intelligence has to care whether it's right or wrong. It's not
> > accurate to say that a program which amounts to an interactive
> > dictionary is 'intelligent' but you could casually say that it's
> > intelligent to mean that its design reflects human intelligence.
> So you would say of your friend: "I have known him for twenty years,
> have had many conversations with him and always considered him very
> smart, but now that I know he is a robot I realise that all along he
> was as dumb as a rock".

Of course. It's not unusual for people to deceive themselves in
long-term relationships. If you had such a friend, would you not be
fazed at all to discover that he is a robot? What if you found out that
he reports your every conversation to GoogleBook, and that he is
programmed to replace you and dispose of your body in the river? Would
you still have faith in his intelligence and your friendship enough to
try to win him over and talk him out of it?

