There is a big difference between autonomy and lack of control. People are
autonomous, but they control each other all the time, despite no one
actually wanting to be controlled. With an artificial system, we would have
the added advantage of all kinds of control handles intentionally built into
both the software and the hardware.

I fully agree with your statement that it would be a bad idea to give
artificial intelligences human rights. They will not be human. They
will be constructs built by humans for a purpose, and strictly in service
to humans. The purpose and meaning of a human is intensional, in that the
source of a person's purpose and meaning arises from within. Any machine
built by humans has an extensional purpose and meaning which is derived
from its maker(s). The only intensional purpose I care about is that of a
human being. The intensional purpose of an AI, if it develops one, is of no
consequence to me except in how it affects that AI's behavior. What it
comes down to is this: I judge it unethical to harm a human being not
because of mere sentience, but because of the combination of sentience and
humanity. I'm a human chauvinist and I make no bones about it. By all
means, treat animals and even artificial sentiences with respect and cause
no unnecessary harm, but their needs and wishes must *always* come second
to those of humans.

As for your argument for processing power, you may be right that more is
needed to accomplish true AGI. But that won't (and shouldn't!) stop us from
working on the problem now in preparation for when that processing power is
available.

I've played with the P = NP problem off and on, when I get bored. I've had
some ideas that felt right but led nowhere. But there's a difference
between that and my AI research: I never take a step forward with P = NP,
but I am constantly improving my designs with AI. That's why I won't be
dissuaded by someone who isn't involved in the research saying it can't be
done. My answer is, it's only a matter of time, because the ratchet of
progress keeps clicking.



On Thu, Aug 23, 2012 at 8:37 PM, Matt Mahoney <[email protected]> wrote:

> On Thu, Aug 23, 2012 at 7:50 PM, Aaron Hosford <[email protected]>
> wrote:
> >
> > Who said anything about not having control over them? It wasn't me!
>
> Then what do you mean by autonomous thinking and decision making?
>
> > Google doesn't understand me the way you or anyone else on this list
> does.
> > It's a shallow version of understanding. Even when you misunderstand me,
> > it'll still be a better understanding of me than Google can achieve right
> > now. But you're right: it really is getting better at understanding
> natural
> > language, due to the efforts of people like myself.
>
> Do you work at Google? What is your area of research?
>
> > Right now, people are better modelers of each other's minds than any
> > software out there
>
> That's right, but computers are doubling in power every 1.5 years,
> equivalent to a million years of evolution of the human brain. Maybe
> you spend several hours a day interacting with computers, maybe more
> than any single person. If those computers have enough knowledge and
> computing power (I hesitate to use the word "intelligence"), then it
> is possible they could learn to know you better than any other person
> besides yourself.
>
> > but that doesn't mean they have uploads of each other
> > living in their heads. You're thinking of a replica, which is far more
> > complex than a mere model.
>
> There are two reasons for uploading. We don't want to die, and we
> grieve the death of others. To convince you that an upload is really
> the same person that it imitates, the model only has to be close
> enough that you can't tell the difference. You see your dead relatives
> resurrected, or you see your friends undergo a procedure where they
> come out younger, stronger, smarter, and happier.
>
> Some people are concerned about the details of the procedure. If I
> described it like Hayworth in
> http://brainpreservation.org/content/killed-bad-philosophy then you
> might agree. If the procedure instead was to present you with a robot
> that looks and acts like you and hand you a gun so you could shoot
> yourself to complete the upload, then you would probably refuse. It
> doesn't matter that the final result is the same. What matters is your
> beliefs. If I have a model of your mind, ready to implement as an
> upload, then I could run simulations first to find a scenario that you
> would accept.
>
> Anyway, I don't want to divert this thread to a philosophical argument
> about uploading and consciousness. This subject has already been beat
> to death. The original thread was about AI safety. I think giving
> human rights to robots is a very bad idea, at least from the
> perspective of carbon-based life.
>
> > Naturally, big budgets mean a leg up, as with any difficult endeavor.
> That
> > really says nothing about whether they're taking the right direction, but
> > rather says a lot about the speed they can travel in the direction
> they've
> > selected.
>
> The two examples that come to mind are Watson (language processing),
> and Google's recent use of unsupervised learning by neural networks to
> visually recognize cat faces. Both require several thousand processors
> and terabytes of memory. Why can't we do this with less computing
> power? For that matter, if human intelligence could be implemented in
> a computer with the power of an insect brain, then why did we evolve
> such large, inefficient brains?
>
> > As for the lack of success so far in finding efficient
> > implementations, if everyone quit just because past attempts failed, no
> one
> > would ever succeed. I don't intend to count on luck. I'm using my
> knowledge,
> > reasoning, intuition, and hard work to move forward. I know that I'm
> making
> > progress, whether or not naysayers with no personal ambitions of their
> own
> > can see it. I like *accomplishing* things, not sitting back and telling
> > everyone else they're going to fail.
>
> Others (including some on this list) are using a similar argument to
> justify spending years trying to prove that P = NP. Just saying...
>
>
> -- Matt Mahoney, [email protected]


