On Aug 9, 9:52 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Tue, Aug 9, 2011 at 12:09 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> The chip is not alive because it doesn't meet a definition for life.
> >> It may or may not be conscious - that isn't obvious and it is what we
> >> are arguing about. However, it may objectively behave like a living or
> >> conscious entity. For example, if it seeks food and reproduces it is
> >> behaving like a living thing even though it isn't, and if it has a
> >> conversation with you about its feelings and desires it is behaving
> >> like a conscious thing even though it isn't.
> > At the top you are saying that there is a definition of life to be
> > met, but then you are saying that there are behaviors which are
> > 'objectively' living or conscious. The two assertions are mutually
> > exclusive and both in opposition to my view. If life can be observed
> > as objective behaviors, then it doesn't need a definition, it just is
> > observably either alive or it isn't. If it needs a definition then you
> > admit that life cannot be determined objectively and must be defined
> > subjectively - guessed at.
> Life can be defined as requiring, for example, certain organic
> reactions. A human can be defined as being born from another human, or
> having human DNA, or whatever. However, that does not mean that a
> machine can't *behave* just like a living thing or just like a human,
> even though by definition it isn't one. Definitions are simply a
> matter of convention. Whether a machine can or can't copy any aspect
> of a living or conscious thing's behaviour, on the other hand, is not
> a matter of convention but a fact to be determined by observation.

It does mean that a machine can't behave just like a living thing,
because everything that a machine is, and everything that a living
thing is, are behaviors and experiences. You can't assume that two
completely different things have the same behaviors and experiences
just because the behaviors that you think you observe seem like what
you expect.

> > What I'm saying is completely different. I am taking the latter view
> > and going much further to say that not only is life defined
> > subjectively, but that definition is based upon perceived isomorphism
> > and as a general principle of all phenomena in the universe. As a
> > living creature, we recognize other phenomena as other living
> > creatures to the extent that they remind us of ourselves and our own
> > behaviors. This would normally serve us well, except when hijacked by
> > intentional technological impersonations designed to remind us of our
> > own behaviors.
> >> I don't think the phrase "does what it wants to do" adds anything to
> >> the discussion if you say that only a conscious thing can do what it
> >> wants to do - it is back to arguing whether something is conscious.
> > We can't say whether a chip does what it wants to do but the fact that
> > it must be programmed by an outside source if it is to do anything
> > would suggest that it either cannot do what it wants or that it cannot
> > want to do much. A chip without firmware or software won't ever learn,
> > grow, or change itself.
> Everything a human does is determined by genetics and environment.
> Without the genetic or environmental programming, a human won't ever
> learn, grow or change himself.

That's an unfounded assumption. Conjoined twins have the same genetics
and environment, yet they are different people with different
personalities. A dead body has the same genetics and environment as a
living person, yet it doesn't learn or grow.

> >> I *assume* that behaviour and consciousness can be separated and show
> >> that it leads to absurdity. This means that the initial assumption was
> >> wrong. If you disagree you can try to show that the assumption does
> >> not in fact lead to absurdity, but you haven't attempted to do that.
> >> Instead, you restate your own assumption.
> > The initial assumption is a priori absurd, so it follows that its
> > consequences would be as well.  Consciousness can drive neurological
> > behavior (voluntary movement) and behavior can drive consciousness
> > (psychoactive drugs) but consciousness also experiences things that
> > are not behavior (qualia) and neurons have behaviors that our
> > consciousness does not experience (we can't count our own neurons from
> > the inside).
> > You're trying to frame the question so that it can only be answered
> > the way that you have set it up to be answered. It's a semantic
> > argument that has no real connection to the reality of the phenomena
> > we're talking about. The reality of subjectivity does not fit into
> > conventional logic. Consciousness is the source of logic, not the
> > other way around.
> Are you now saying that your assumption that consciousness does not
> necessarily follow from conscious-like behaviour is a priori absurd??
> So if a machine can behave like a human then it must have the same
> consciousness as a human, and to you this is now obvious a priori??

Ugh. There is no such thing as conscious-like behavior. Again, that is
my point. If I am a cockroach, then cockroaches seem to behave like
they are conscious to me, and human beings are forces of nature. I can
only think that this insight is not accessible to everyone, since some
people seem capable of getting it while others overlook it over and
over again. It is critically important to understand this point, or
everything that follows will be a strawman distortion of my position.

So, does cockroach-like behavior mean that a machine is a cockroach?
Is a wooden duck decoy the same thing as a duck?

> >> The form of argument is similar to assuming that sqrt(2) is rational
> >> and showing that this assumption leads to contradiction, therefore
> >> sqrt(2) cannot be rational. The only way to respond to this argument
> >> if you disagree is to show that there is some error in the logic,
> >> otherwise you *have* to accept it, even if you don't like it and you
> >> have conceptual difficulties with irrational numbers.
> > No, I don't have to accept it. Consciousness is not accessible with
> > mathematical logic alone. When you insist that it must beforehand, you
> > poison the result and are forced into absurdity. You cannot prove to
> > me that you exist. If you accept that that means you don't exist, then
> > you have accepted that your own ability to accept or reject any
> > proposition is itself invalid.
> No, I can't prove to you that I exist, or that I am conscious, or that
> I will pay you back if you lend me money. But I can prove to you that
> sqrt(2) is irrational and I can prove to you that if something has
> behaviour similar to a conscious thing then it will also have the
> consciousness of the conscious thing.

You can't prove that you have consciousness, but you are going to prove
that something else has your consciousness because it acts like you?

> You need to be able to follow
> the proof in order to point out the error if you don't agree. There
> may be an error but simply saying you don't agree is not an argument.

The error is that consciousness cannot be proved. It doesn't exist: it
insists. Completely different (opposite) epistemology.

> >> As for neurons having a finite set of behaviours, of course they do.
> >> It is a theorem in physics that a certain volume of space has an upper
> >> limit of information it can contain:
> >> http://en.wikipedia.org/wiki/Bekenstein_bound
> > There is no limit to the combinations of behaviors they can have over
> > time though. There is a finite alphabet, but there is no limit to the
> > possibilities of what can be written. Even the alphabet can be changed
> > and expanded within the written text. New, unforeseeable behaviors are
> > invented.
> No, there is an absolute limit to the behaviours that can be displayed
> over time by a brain of finite size.

Over how much time? Infinite time = infinite behaviors.

> There is only a finite number of
> particles in the brain

No. The brain is constantly adding, removing, and changing particles.
All of our cells are.

> and hence only a finite number of ways in which
> they can be arranged -

No more than language is finite in the number of ways in which it can
express meaning.

> and the vast majority of those arrangements do
> not correlate with a working brain.

> >> The number of mental states it is possible to have is way, way lower
> >> than the limit placed by the Bekenstein bound, since most possible
> >> configurations of the matter in the brain do not result in thought,
> >> and since tiny changes in the configuration of neurons do not result
> >> in changes in thought or else the brain would be too unstable.
> > You're assuming that there is a finite definition of a 'mental state'.
> > There isn't. It's not a computer, it's a living organism discovering
> > and inventing what has never before been experienced in a particular
> > way.
> If mental states supervene on physical states then there can't be more
> possible mental states than brain states.

Mental states make sense of phenomena outside of the brain, through
the brain, just as language communicates through words, inventing new
ones as it goes.

> If the number of possible
> brain states is finite then the number of possible mental states is an
> equal or smaller finite number (probably much smaller).

Neither brain states nor mental states are finite, nor are they all
bound to each other explicitly. Some are bound explicitly, some are
not. Think of a Venn diagram with the self as the intersection of
neurology and experience.
> >> So you would say of your friend: "I have known him for twenty years,
> >> have had many conversations with him and always considered him very
> >> smart, but now that I know he is a robot I realise that all along he
> >> was as dumb as a rock".
> > Of course. It's not unusual for people to deceive themselves in
> > long-term relationships. If you had the friend, would you not be
> > fazed at all to discover that he is a robot? What if you found out
> > that he reports your every conversation to GoogleBook, and that he
> > is programmed to replace you and dispose of your body in the river?
> > Would you still have faith in his intelligence and your friendship
> > enough to try to win him over and talk him out of it?
> I'd be surprised if my friend was a robot, but if he was intelligent
> before I knew, he would still be intelligent after I knew. If he tried
> to kill me then I would be upset, but I would also be upset if my flesh
> and blood friend tried to kill me.

So you would find it no different whether it is a lifelong friend who
has been betraying you for 20 years or a robot that was programmed to
extract business intelligence from you from the start? Would you hold
the robot personally responsible, and not GoogleBook?

