On Wed, Jul 27, 2011 at 6:44 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

> On Jul 27, 3:43 pm, Jason Resch <jasonre...@gmail.com> wrote:
> > Ignore inner experiences for the purposes of answering this question.
> > (for now at least)
> >
> > Assume there is a program which chooses the exactly correct YouTube
> > videos to display in response to any given question, and it chooses
> > the videos so well that not even your best friend could distinguish
> > between it and a live skype call with you.
> >
> > My question to you is: given that the YouTube video responses give the
> > appearance of an intelligent person, does that imply that an
> > intelligent process must be involved?  Or can something behave
> > intelligently without any intelligent process being involved?
> That's what I'm saying: the fact that it would be theoretically
> possible to make a YouTube bot like that (it's only a matter of how
> long it would have to run well to fool someone; if the YouTuring test
> only lasts 20 seconds, it wouldn't be that hard to compete against a
> live YouTube) means that appearances of intelligence don't mean
> anything other than that's how it appears to a particular observer
> using a particular method of observation.

Say we made a YouTube program that could be interviewed for years by your
friends and family, people on this list, and so forth, and it could generate
responses similar enough to what you would do and say in the same
circumstances that no one could tell the difference.  Would such a process
not have to (at a minimum) understand your intelligence at some level, in
order to replicate it?

I think a smart process can emulate a stupid one, but I don't see any way
for a stupid process to emulate a smart one.  Unless you doubt the
possibility of strong AI ( http://en.wikipedia.org/wiki/Strong_AI ), you
should accept the possibility of intelligent machines made of non-organic
material, which can reproduce the externally visible behavior of any
intelligent being.

If you are in agreement so far, that intelligence, even human intelligence,
can be replicated by a mechanical process, you must then question whether
such an intelligence could exist without having knowledge, beliefs,
understanding, etc.

This is closely related to the other thought experiment, which presumes
zombies that act like they can see, think that they can see, and believe
that they can see (but in truth are blind).  If a zombie believes it can
see, what makes its belief false but your belief that you can see true?
Its belief is wrong for reasons no one can ever prove or demonstrate.  The
process of lying in a brain is different from that of telling the truth,
yet a mechanical brain with the same neural network would have identical
patterns of thought, patterns that would not be consistent with lying.  To
believe in zombies is to believe in the rationality of these consequences.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.