--- rg <[EMAIL PROTECTED]> wrote:

> Hi
> 
> Is anyone discussing what to do in the future when we
> have made AGIs? I thought that was part of why
> the Singularity Institute was founded?
> 
> Note that I am not saying we should not make them,
> because someone will regardless of what we decide.
> 
> I am asking what we should do to prepare for it,
> and also how we should influence the creation of AGIs.
> 
> Here are some questions; I hope I am not the first to come up with them.
> 
> * Will they be sane?
> * Will they just be smart enough to pretend to be sane,
>     until they do not have to anymore?
> 
> * Should we let them decide for us?
>   If not, should we (or can we) restrict them?
> 
> * Can they feel any empathy for us?
>    If not, should we try to manipulate or force them to
>    act as if they do?
> 
> * Our society is very dependent on computer systems
>   everywhere, and that dependency is increasing.
>    Should we let the AGIs have access to the internet?
>   If not, is it even possible to restrict an AGI that can think superhumanly
>   fast, is a super genius, and also has a lot of raw computing power?
>   Such an AGI can most likely find many ways to get internet access.
>   (I can give many examples of how, if anyone doubts this.)
> 
> * What should we "stupid" organics do to prepare?
>    Reduce our dependency?
> 
> * Should a scientist who does not have true ethical values be allowed to
>   do AGI research?
>   Someone who just pretends to be ethical, someone who just wants the
>   glory and the Nobel Prize... someone who answers the statement "It is
>   insane" with "Oh, it just needs some adjustment, don't worry." :)
>    
> * What is the military doing? Should we raise public awareness to gain
>   insight?
>     I guess everyone can imagine why this is important.
> 
> The only answers I have found to what can truly control or restrict an AGI
> smarter than us
> are few:
> 
> - Another AGI
> - Total isolation
> 
> So anyone thinking about this?

Yes.  These questions are probably more appropriate for the singularity list,
which is concerned with the safety of AI, as opposed to this list, which is
concerned with just getting it to work.  OTOH, maybe there shouldn't be two
lists after all.

Anyway, I expressed my views on the singularity at
http://www.mattmahoney.net/singularity.html
To answer your question, there isn't much we can do (IMHO).  A singularity
will be invisible to the unaugmented human brain, and yet the world will be
vastly different.

As for your other questions, I believe that AI will be distributed over the
internet because this is where the necessary resources are.  No single person
or group will develop it.  Intelligence will come collectively from many
narrowly specialized experts and an infrastructure that routes natural
language messages to the right ones.  I believe this can be implemented with
current technology and an economy where information has negative value and
network peers compete for resources and reputation in a hostile environment. 
I described one proposal here: http://www.mattmahoney.net/agi.html
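The routing idea in that paragraph can be illustrated with a toy sketch. To be clear, this is my own illustration and not the protocol from the linked proposal: the `route` function, the expert names, and the keyword sets are all made up, and a real system would need far better matching than bag-of-words overlap.

```python
# Hypothetical sketch (not the actual protocol from mattmahoney.net/agi.html):
# route a natural-language message to the narrowly specialized peer whose
# topic keywords overlap the message the most.

def route(message, experts):
    """Return the name of the expert whose keywords best match the message."""
    words = set(message.lower().split())
    # Score each (name, keywords) pair by word overlap with the message.
    return max(experts, key=lambda e: len(words & e[1]))[0]

# Invented example peers, each tagged with the topics it claims to handle.
experts = [
    ("weather-bot", {"rain", "forecast", "temperature", "weather"}),
    ("chess-bot",   {"chess", "opening", "endgame", "checkmate"}),
]

print(route("what is the weather forecast tomorrow", experts))  # weather-bot
```

In a peer-to-peer setting, each node would advertise its specialty and forward messages it cannot answer, with reputation deciding which peers are worth forwarding to.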

I believe the system will be friendly (give correct and useful information) as
long as humans remain the primary source of knowledge.  As computing power
gets cheaper and human labor gets more expensive, humans will gradually become
less relevant.  The P2P protocol will evolve from natural language to
something incomprehensible, perhaps in 30 years.  Shortly afterwards, there
will be a singularity.

I do not know how to make this system "safe", nor do I believe that the
question even makes sense.


-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
