Hello all,

After reading all this wonderful debate on AI morality and Eliezer's
people-eating-AGI concerns, I'm left wondering this: am I the *only* one here
who thinks that the *most* likely scenario is that such a thing as a
"universe-devouring" AGI is utterly impossible?

Everyone here seems to talk about this as if it were inevitable and probable.
Just because we can dream of something does not mean it can exist anywhere
except in our dreams.  For instance, time travel has not been entirely refuted
as of yet, but that doesn't mean it is practically doable in any way.  These
discussions seem especially far-fetched given that this damn computer
doesn't have the slightest idea what I am typing right now or what it
means ;)

I think an AGI is *very* plausible and probably imminent.  I also think
Eliezer is right that we have to give strong consideration to the ethics
of such a machine, as it could be dangerous, even if only economically
dangerous by crashing financial markets or whole countries' economies.  It
could also potentially turn all our own wonderful killing machines against
us.  But the idea that it will manipulate matter and devour the universe
is ludicrous, IMO.  I am much more inclined to believe that an AGI of
tremendous utility will emerge and serve as a tool for our use in almost any
domain: scientific, engineering, medical, educational, and so on.

If such a thing as a matter-manipulating machine were possible, it should
have appeared already somewhere in this universe.  This leads to one of three
conclusions, as far as I can tell:

1) Matter-manipulating machines of such a grand scale are not possible.
2) MMMs are possible, but never actually do such a thing.
3) MMMs are possible, and they created this current universe as a simulation,
a la The Matrix.

My bet is on number 1.  But none of these three is horrible.  Of course, an
AGI could still be destructive at the local level, and that is where we have
to be wary.

I am glad that Ben is working on this and may be closer to succeeding than
anyone else.  I believe he sincerely has altruistic motives and is open-minded
enough to consider others' thoughts and concerns.  That will mean a lot as
this project progresses towards completion.

Kevin



----- Original Message -----
From: "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, February 12, 2003 1:53 PM
Subject: Re: [agi] AI Morality -- a hopeless quest


> Brad Wyble wrote:
>  >
>  > Tell me this, have you ever killed an insect because it bothered you?
>
> "In other words, posthumanity doesn't change the goal posts. Being human
> should still confer human rights, including the right not to be enslaved,
> eaten, etc.. But perhaps being posthuman will confer posthuman rights that
> we understand as little as a dog understands the right to vote."
> -- James Hughes
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>

