On Aug 9, 11:26 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Wed, Aug 10, 2011 at 12:50 AM, Craig Weinberg <whatsons...@gmail.com> 
> wrote:
> > It does mean that a machine can't behave just like a living thing,
> > because everything that a machine is, and everything that a living
> > thing is, are behaviors and experiences. You can't assume that two
> > completely different things have the same behaviors and experiences
> > just because the behaviors that you think you observe seem like what
> > you expect.
> I'm asking about observable behaviours, not experiences. Why do you
> always conflate the two while arguing that they should not be
> conflated?

Observable to whom? All observations are experiences. My experiences
drive my behaviors. Your perception of my behaviors does not reveal my
experiences, except to the extent that they are isomorphic to your own
range of possible experiences. Behaviors and experiences should be
relatively conflated from the 1p view, relatively separate from 3p.
Neither perception/experience nor behavior is absolutely objective or
subjective, which is why you could likely tell the difference between
a person and a machine eventually.

> >> Everything a human does is determined by genetics and environment.
> >> Without the genetic or environmental programming, a human won't ever
> >> learn, grow or change himself.
> > That's an unfounded assumption. Conjoined twins have the same genetics
> > and environment yet they are different people with different
> > personalities. A dead body has the same genetics and environment as a
> > living person, yet it doesn't learn or grow.
> Conjoined twins don't have the same environment since they are
> spatially separated, made of different matter. A dead body also has a
> different environment to a living body, since the chemical reactions
> inside it are very different. Genetics in conjunction with environment
> determines what sort of body and brain a being will have. What else
> could there possibly be?

Genetics and environment are both determined themselves by other
factors. The fact that a molecule produces a living cell instead of
rust on a nail is determined by the conditions of the cosmos, some of
which are set and passively draw the universe into entropy through
space, and some of which are dynamic and actively push the universe
into significance through time. That's how there can be novelty, life,
feeling, consciousness. There is no deterministic purpose for those
phenomena and therefore they do not arise out of 'organization' or
mechanical logic alone. They exist because the universe is partly
alive and aware and wants to learn, change, and grow.

> >> Are you now saying that your assumption that consciousness does not
> >> necessarily follow from conscious-like behaviour is a priori absurd??
> >> So if a machine can behave like a human then it must have the same
> >> consciousness as a human, and to you this is now obvious a priori??
> > Ugh. There is no such thing as conscious-like behavior. Again. That is
> > my point. If I am a cockroach, then cockroaches seem to behave like
> > they are conscious to me and human beings are forces of nature. I can
> > only think that this insight is not accessible to everyone because
> > only some people seem to be capable of getting it and just overlook it
> > over and over again. It is critically important to understand this
> > point or everything that follows will be a strawman distortion of my
> > position.
> Can you explain again what you think is a priori absurd?

The assumption that 'behavior' can be (completely) separated from
'consciousness' - or that they can be completely conflated. I'm saying
they are like an intersecting Venn diagram, and that 'consciousness'
is an elaboration of awareness, sense, and detection, not a binary
quality that could arise from an abstract logical organization.

> > So, does cockroach-like behavior mean that a machine is a cockroach?
> > Is a wooden duck decoy the same thing as a duck?
> Cockroach-like behaviour means the thing behaves like a cockroach. If
> cockroaches are conscious (they may be) cockroachlike behaviour means
> the thing behaves like a conscious creature, namely a cockroach.

Does that include being able to digest food like a cockroach, or emit
chemical trails like a cockroach? In what ways can a machine deviate
from a cockroach and still be said to 'behave like a cockroach'? It's
a rhetorical question to point out (for the hundredth time) the
subjective nature of observation. There is no level at which a machine
could differ from a cockroach and still 'behave' identically to a
cockroach. We may not be able to tell the difference, but mama
cockroach may very well be able to.

> It isn't actually a cockroach if it is a machine, but just as it can have
> cockroachlike behaviour without being a cockroach, it may have
> cockroachlike consciousness without being a cockroach.

I'm saying it's not possible to have cockroachlike behavior either.
Not objectively. It can be close enough for our satisfaction, but
there is no reason to imagine that the awareness of the thing would be
insect-level awareness if it were just a silicon machine.

> >> >> The form of argument is similar to assuming that sqrt(2) is rational
> >> >> and showing that this assumption leads to contradiction, therefore
> >> >> sqrt(2) cannot be rational. The only way to respond to this argument
> >> >> if you disagree is to show that there is some error in the logic,
> >> >> otherwise you *have* to accept it, even if you don't like it and you
> >> >> have conceptual difficulties with irrational numbers.
> >> > No, I don't have to accept it. Consciousness is not accessible with
> >> > mathematical logic alone. When you insist that it must beforehand, you
> >> > poison the result and are forced into absurdity. You cannot prove to
> >> > me that you exist. If you accept that that means you don't exist, then
> >> > you have accepted that your own ability to accept or reject any
> >> > proposition is itself invalid.
> >> No, I can't prove to you that I exist, or that I am conscious, or that
> >> I will pay you back if you lend me money. But I can prove to you that
> >> sqrt(2) is irrational and I can prove to you that if something has
> >> behaviour similar to a conscious thing then it will also have the
> >> consciousness of the conscious thing.
> > You can't prove that you have consciousness but you are going to prove
> > that something else has your consciousness because it acts like you
> > do?
> No, I can prove that something that behaves as I do has a similar
> consciousness to mine. That doesn't mean that I am conscious or that I
> can prove that I am conscious.

How can you prove similarity to something that you can't prove, or
even claim, exists in the first place? I'm losing confidence in the
internal consistency of your argument.

> >>You need to be able to follow
> >> the proof in order to point out the error if you don't agree. There
> >> may be an error but simply saying you don't agree is not an argument.
> > The error is that consciousness cannot be proved. It doesn't exist: it
> > insists. Completely different (opposite) epistemology.
> I'm not trying to prove consciousness, only that such consciousness as
> an entity may or may not have will be preserved if the function of its
> brain is preserved.

My view is that the answer to that question depends not only on the
functions of the brain that we assume are important, but on the
function of the physical and emotional substrates that those
high-level functions arise from: experiences of cellular respiration,
birth, death, healing, etc.

> >> >> As for neurons having a finite set of behaviours, of course they do.
> >> >> It is a theorem in physics that a certain volume of space has an upper
> >> >> limit of information it can 
> >> contain: http://en.wikipedia.org/wiki/Bekenstein_bound
> >> > There is no limit to the combinations of behaviors they can have over
> >> > time though. There is a finite alphabet, but there is no limit to the
> >> > possibilities of what can be written. Even the alphabet can be changed
> >> > and expanded within the written text. New, unforeseeable behaviors are
> >> > invented.
> >> No, there is an absolute limit to the behaviours that can be displayed
> >> over time by a brain of finite size.
> > Over how much time? Infinite time = infinite behaviors.
> No, if the matter is finite the number of configurations is finite, so
> after a finite period of time all the possible configurations will be
> exhausted and you will start to repeat.

Are you saying that because there are a fixed number of pixels on your
TV screen, there is a finite number of TV programs that can be watched
in the universe, and that in all possible universes American Idol #38
will inevitably air in its entirety?

> >>There is only a finite number of
> >> particles in the brain
> > No. The brain is constantly adding, removing, and changing particles.
> > All of our cells are.
> But the brain is finite in size and the number of types of particle is
> finite.

It's not at all. The brain changes size. It grows, shrinks, and ages
just like any other organ. It varies from moment to moment.

> If you have a finite sentence length (the size of the brain)

The brain has a finite bandwidth at any one time, but given an
unbounded time, there is no limit on the length of text. You have a
finite width of a browser page, but it can be as long as it needs to
be if you keep scrolling down. You could have the entire internet on
one single web page if you had the time to scroll through it.

> and a finite number of letters (the particles making up the brain)
> there is only a finite number of sentences that can be produced. In
> order to have infinite brain states you would have to allow the brain
> to expand infinitely in size.

You're thinking of the brain like a jar or something. It's not. It's
like a self-tinting window.
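To make the combinatorics being argued over concrete (an illustrative
sketch only; the alphabet sizes and length caps are arbitrary): with a
fixed alphabet and a fixed maximum length, the number of possible
'sentences' is indeed finite, but the count grows without bound the
moment the length cap is allowed to grow — which is the crux of the
unbounded-time reply above.

```python
def string_count(k, n):
    """Number of nonempty strings of length <= n over a k-symbol alphabet."""
    return sum(k ** i for i in range(1, n + 1))

# With the length capped, the total is finite:
assert string_count(2, 3) == 2 + 4 + 8  # "0", "1", "00", ... : 14 strings

# But every extension of the cap strictly increases the total, so with
# no bound on length there is no bound on the number of sentences:
assert all(string_count(2, n + 1) > string_count(2, n) for n in range(1, 100))
```

The browser-page analogy is the uncapped case: the width (alphabet,
bandwidth) stays fixed while the length is free to keep growing.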

> >> If mental states supervene on physical states then there can't be more
> >> possible mental states than brain states.
> > Mental states make sense of phenomena outside of the brain, through
> > the brain, just as language communicates through words, inventing new
> > ones as it goes.
> Whatever that means, there can't be more mental states than brain
> states, and there is only a finite number of possible brain states if
> the brain remains finite in size.

That is like arguing that there are only a finite number of possible
books because most books are within a certain size range. Remember
that the brain of homo sapiens grew from something probably the size
of a Rhesus monkey's. It can and does grow. That's not to say that it
had to grow to make room for a larger total quantity of possible
mental states; it might be just more bandwidth or storage space.

> >> If the number of possible
> >> brain states is finite then the number of possible mental states is an
> >> equal or smaller finite number (probably much smaller).
> > Neither brain states nor mental states are finite or bound to each
> > other explicitly. Some are bound explicitly, some are not. Think of a
> > venn diagram with the self as the intersection of neurology and
> > experience.
> So can you have a change in mental state without a change in brain
> state? Brain activity would then seem to be superfluous - you do your
> thinking with a disembodied soul.

It's not that there is no change in brain state, it's that the change
does not contain the experience of that change. Likewise, the
experience does not contain the mechanical details of the brain
activity. The brain activity corresponds to mental state, as heads and
tails must always be opposite sides of the coin, but heads is not made
of a combination of tails. It's not a disembodied soul, it's an
embodied person that uses his or her brain to think.

> >> >> So you would say of your friend: "I have known him for twenty years,
> >> >> have had many conversations with him and always considered him very
> >> >> smart, but now that I know he is a robot I realise that all along he
> >> >> was as dumb as a rock".
> >> > Of course. It's not unusual for people to deceive themselves in long
> >> > term relationships. If you had the friend, would you not be fazed at
> > all to discover that he is a robot? What if you found out that he
> >> > reports your every conversation to GoogleBook, and that is programmed
> >> > to replace you and dispose of your body in the river, would you still
> > have faith in his intelligence and your friendship enough to try
> >> > to win him over and talk him out of it?
> >> I'd be surprised if my friend was a robot but if he was intelligent
> >> before I knew he would still be intelligent after I knew. If he tried
> to kill me then I would be upset, but I would also be upset if my flesh
> >> and blood friend tried to kill me.
> > So you would find it no different whether it is a lifelong friend who
> > has been betraying you for 20 years versus a robot who was programmed
> > to extract business intelligence from you from the start? You would
> > hold the robot personally responsible and not GoogleBook?
> I don't know why you chose Google Books as an example but if it could
> somehow be intelligent enough to drive a humanlike robot then Google
> Books would be responsible for its actions.

I was making a portmanteau of Google and Facebook - a hypothetical
future mega corp that might be interested in mining personal data with
meatspace cyber-espionage.

But if you hold them responsible, and not the humanlike robot, then
aren't you admitting that the robot isn't personally intelligent?

