--- Quasar Strider <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> I see several possible avenues for implementing a self-aware machine
> which can pass the Turing test (i.e. human-level AI): mechanical and
> electronic. However, I see little purpose in doing this. The fact is,
> we already have self-aware machines which can pass the Turing test:
> human beings.

This was not Turing's goal, nor is it the direction in which AI is headed. 
Turing's goal was to define artificial intelligence.  The question of whether
consciousness can exist in a machine has been debated since the earliest
computers.  Either machines can be conscious or consciousness does not exist. 
The human brain is programmed through DNA to believe in the existence of its
own consciousness and free will, and to fear death.  It is simply a property
of good learning algorithms to behave as if they had free will: they balance
exploitation for immediate reward against exploration for the chance of
gaining knowledge that yields greater future reward.  Animals without these
characteristics did not pass on their DNA.  Therefore you have them.
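
To make that balance concrete, here is a minimal epsilon-greedy sketch for a
multi-armed bandit in Python.  This is just a standard textbook strategy, not
anything from Turing or the post above, and the payout probabilities are made
up for the example.

  import random

  def epsilon_greedy(arms, pulls=1000, epsilon=0.1):
      # arms: list of zero-argument functions returning a stochastic reward
      counts = [0] * len(arms)    # times each arm has been pulled
      values = [0.0] * len(arms)  # running mean reward per arm
      for _ in range(pulls):
          if random.random() < epsilon:
              i = random.randrange(len(arms))   # explore: pick a random arm
          else:
              i = max(range(len(arms)), key=lambda j: values[j])  # exploit
          reward = arms[i]()
          counts[i] += 1
          values[i] += (reward - values[i]) / counts[i]  # incremental mean
      return values

  # Two hypothetical slot machines with different payout probabilities.
  arms = [lambda: 1.0 if random.random() < 0.3 else 0.0,
          lambda: 1.0 if random.random() < 0.6 else 0.0]
  print(epsilon_greedy(arms))  # estimates converge toward [0.3, 0.6]

With epsilon = 0 the agent locks onto whichever arm happened to look best
first and may never discover the better one; a little exploration for the
chance of greater future reward fixes that.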

Turing avoided the controversial question of consciousness by equating
intelligence to the appearance of intelligence.  It is not the best test of
intelligence, but it seems to be the only one that people can agree on.

The goal of commercial AI is not to create humans, but to solve the remaining
problems that humans can still do better than computers, such as language and
vision.  You see Google making progress in these areas, but I don't think you
would ever confuse Google with a human.

> We do not need direct neural links to our brain to download and upload
> childhood memories.

I agree that uploading is a great risk.  The motivation to upload is driven by
fear of
death and our incorrect but biologically programmed belief in consciousness. 
The result will be the extinction of human life and its replacement with
godlike intelligence, possibly this century.  The best we can do is view this
as a good thing, because the alternative -- a rational approach to our own
intelligence -- would result in extinction with no replacement.


-- Matt Mahoney, [EMAIL PROTECTED]
