On Thu, Aug 23, 2012 at 7:50 PM, Aaron Hosford <[email protected]> wrote:
>
> Who said anything about not having control over them? It wasn't me!

Then what do you mean by autonomous thinking and decision making?

> Google doesn't understand me the way you or anyone else on this list does.
> It's a shallow version of understanding. Even when you misunderstand me,
> it'll still be a better understanding of me than Google can achieve right
> now. But you're right: it really is getting better at understanding natural
> language, due to the efforts of people like myself.

Do you work at Google? What is your area of research?

> Right now, people are better modelers of each other's minds than any
> software out there

That's right, but computers are doubling in power every 1.5 years,
equivalent to a million years of evolution of the human brain. Maybe
you already spend several hours a day interacting with computers,
possibly more time than you spend with any single person. If those
computers have enough knowledge and computing power (I hesitate to use
the word "intelligence"), then they could plausibly come to know you
better than anyone besides yourself.
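
To put a rough number on that doubling rate, here is a back-of-envelope
Python sketch (the 30-year horizon is my own illustrative assumption):
twenty doublings works out to about a factor of a million.

# Back-of-envelope: sustained doubling of computing power every 1.5 years.
# The 30-year horizon is an illustrative assumption, not a prediction.
years = 30
doubling_period = 1.5                 # years per doubling
doublings = years / doubling_period   # 20 doublings
factor = 2 ** doublings               # about 1,048,576
print(f"After {years} years: {doublings:.0f} doublings, "
      f"a factor of roughly {factor:,.0f}")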

> but that doesn't mean they have uploads of each other
> living in their heads. You're thinking of a replica, which is far more
> complex than a mere model.

There are two reasons for uploading. We don't want to die, and we
grieve the death of others. To convince you that an upload is really
the same person that it imitates, the model only has to be close
enough that you can't tell the difference. You see your dead relatives
resurrected, or you see your friends undergo a procedure where they
come out younger, stronger, smarter, and happier.

Some people are concerned about the details of the procedure. If I
described it the way Hayworth does in
http://brainpreservation.org/content/killed-bad-philosophy then you
might agree to it. If the procedure were instead to present you with a
robot that looks and acts like you, and then hand you a gun so you
could shoot yourself to complete the upload, you would probably
refuse. It doesn't matter that the final result is the same. What
matters is what you believe. If I had a model of your mind, ready to
implement as an upload, then I could run simulations first to find a
scenario that you would accept.

Anyway, I don't want to divert this thread into a philosophical
argument about uploading and consciousness. That subject has already
been beaten to death. The original thread was about AI safety. I think
giving human rights to robots is a very bad idea, at least from the
perspective of carbon-based life.

> Naturally, big budgets mean a leg up, as with any difficult endeavor. That
> really says nothing about whether they're taking the right direction, but
> rather says a lot about the speed they can travel in the direction they've
> selected.

The two examples that come to mind are Watson (language processing)
and Google's recent use of unsupervised learning with neural networks
to visually recognize cat faces. Both required several thousand
processors and terabytes of memory. Why can't we do this with less
computing power? For that matter, if human intelligence could be
implemented in a computer with the power of an insect brain, why did
we evolve such large, inefficient brains?

> As for the lack of success so far in finding efficient
> implementations, if everyone quit just because past attempts failed, no one
> would ever succeed. I don't intend to count on luck. I'm using my knowledge,
> reasoning, intuition, and hard work to move forward. I know that I'm making
> progress, whether or not naysayers with no personal ambitions of their own
> can see it. I like *accomplishing* things, not sitting back and telling
> everyone else they're going to fail.

Others (including some on this list) are using a similar argument to
justify spending years trying to prove that P = NP. Just saying...


-- Matt Mahoney, [email protected]

