On Jul 28, 12:02 am, Jason Resch <jasonre...@gmail.com> wrote:

> Say we made a youtube program that could be interviewed for years by your
> friends and family, people on this list, and so forth, and it could generate
> responses similar enough to what you would do and say in the same
> circumstance that no one could tell the difference.  Would the intelligence
> of this process not have to (at a minimum) understand your intelligence at
> some level, in order to replicate it?

No, not at all. It's like saying that a copy machine would have to
understand Chinese to be able to copy all of the pages in all of the
books of a library in China. Intelligence doesn't rub off on an a-
signifying machine, even if that machine is extremely robust. A glove
doesn't grow a hand inside of it just because it's shaped like a hand.
This is what I'm talking about with ACME-OMMM. The exterior of the
cosmos works in the opposite way of the interior. It's a completely
different kind of sense, but the two topologies overlap - as
sensation<>electromagnetism and through fusion><transcendence.

> I think a smart process can emulate a stupid one, but I don't see any way
> for a stupid process to emulate a smart one.  Unless you doubt the
> possibility of strong ai (http://en.wikipedia.org/wiki/Strong_AI), then
> you should accept the possibility of intelligent machines made of
> non-organic material, which can reproduce the externally visible behavior of
> any intelligent being.

Think of the burning log example. I think that feeling is fairly
analogous to fire in this example. Should I accept the possibility of
intelligent machines made of non-organic material being able to
someday reproduce the heat and flame of fire so well that we can toast
marshmallows over it? What a computer can 'reproduce' is 100%
dependent upon its human interface. If you build a monitor that has
hot pixels, then you can get some heat out of a picture of fire
recorded in heatvision. If not, there's just the image. With no
monitor, there's nothing.

If, as I'm suggesting, human feeling is a function of biochemical
interiority and not arithmetic, you would have to, at some point, use
organic-like materials to get organic feeling to drive organic
behavior. Think of it like a fractal: no matter how deep you go into
the design, it's still the same thing. A picture of water looks more
and more like a picture and less and less like water the more closely
you examine it. It's just a matter of time before any inorganic
material reveals its lack of feeling and its reliance on canned
subroutines rather
than sensorimotive participation at the biochemical level.

> If you are in agreement so far, that intelligence, even human intelligence
> can be replicated by a mechanical process,

No. The aspects of intelligence which can be replicated by a
mechanical process are only superficial services by, for, and of
human organic intelligence. On its own, such a mechanism replicates
nothing. It just cycles through meaningless patterns of semiconductor
circuitry. Can we see our own intelligence reflected in an inanimate
system? Sure, if we choose to. I can imagine that Watson or a talking
teddy bear is sentient if I want. Neither of them will ever be able to
imagine anything though.

> This is closely related to the other thought experiment, which presume
> zombies that act like they can see, think that they can see, and believe
> that they can see (but in truth they are blind).  If a zombie believes it
> can see, what makes its belief false but your belief that you can see true?

If a zombie can believe something then it's not a zombie. You're
answering your own question. Beliefs do not have to correspond to
anything external to be true. Therefore nothing external necessarily
corresponds to an internal belief, however internal beliefs can and do
drive external behaviors. If you worship Ganesha, you might eventually
wear something with Ganesha on it, but putting a Ganesha T-Shirt on a
mannequin doesn't give it a belief in Hindu gods, not even a really
sporty audio-animatronic mannequin with eyes that seem to follow you.
That I can see is not a belief. It doesn't need any external 'truth' to
validate it. It is a self-evident presentation. I may not be able to
see what others see, but that's something else. Maybe they're deluded
and only think they can see.

> It is wrong for reasons no one can ever prove or demonstrate.  The process
> of lying in a brain is different from telling the truth, yet a mechanical
> brain with the same neural network would have identical patterns of thought
> that would not be consistent with lying.

It can't have identical patterns of thought unless it is a physically
identical brain. If I walk down the Champs-Élysées I'm just walking in
a straight line. I can walk the identical straight line pattern
through a junkyard, but I have not replicated the Champs-Élysées.
Neurology is like the four-dimensional shadow of perception, which
occupies a completely different four dimensions, organized not through
physical patterns but through semantic patterns which can manifest
throughout the nervous system and even beyond it.

>To believe in zombies is to
> believe in the rationality of these consequences.

You're not seeing what I'm pointing out. I understand where you're
coming from, and I once held the same view that you have. If we look
at what is really going on in our own experience, though, rather than
trying to make sense of it using only linear logic, we can see
that there is more to being a person than can be represented
symbolically. There is no substitute for the ontology of experience
(repeat 100,000 times).


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.