Hi

I have made some responses inline below.

Richard Loosemore wrote:
rg wrote:
Hi

Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was founded?

Note that I am not saying we should not make them,
because someone will, regardless of what we decide.

I am asking what we should do to prepare for it,
and also how we should influence the creation of AGIs.

Here are some questions; I hope I am not the first to come up with them.

* Will they be sane?
* Will they just be smart enough to pretend to be sane,
   until... they do not have to anymore?

* Should we let them decide for us?
  If not, should we, and can we, restrict them?

* Can they feel any empathy for us?
  If not, should we again try to manipulate/force them to
  act as if they do?

* Our society is very dependent on computer systems
  everywhere, and that dependency is increasing.
  Should we let AGIs have access to the internet?
  If not, is it even possible to restrict an AGI that can think super fast,
  is a super genius, and also has a lot of raw computing power?
  One that most likely can find many ways to get internet access...
  (I can give many crazy examples of how, if anyone doubts it.)

* What should we "stupid" organics do to prepare?
  Reduce our dependency?

* Should a scientist who does not have true ethical values be allowed to do AGI research?
  Someone who just pretends to be ethical, who just wants the glory and the Nobel Prize...
  someone who answers the statement "It is insane" with "Oh, it just needs some adjustment, don't worry :)"
* What is the military doing? Should we raise public awareness to gain insight?
  I guess everyone can imagine why this is important.

The only answers I have found to the question of what could truly control/restrict an AGI smarter than us
are few:

- Another AGI
- Total isolation

So, is anyone thinking about this?

Hi

You should know that there are many people who indeed are deeply concerned about these questions, but opinions differ greatly over what the dangers are and how to deal with them.

This sounds good :)

I have been thinking about these questions for at least the last 20 years, and I am also an AGI developer and cognitive psychologist. My own opinion is based on a great deal of analysis of the motivations of AI systems in general, and AGI systems in particular.

I have two conclusions to offer you.

1) Almost all of the discussion of this issue is based on assumptions about how an AI would behave, and the depressing truth is that most of those assumptions are outrageously foolish. I say this not to be antagonistic, but because the degree of nonsense talked on this subject is quite breathtaking, and I feel at a loss to express just how ridiculous the situation has become.

It is not just that people make wrong assumptions, it is that people make wrong assumptions very, very loudly, declaring these wrong assumptions to be "obviously true". Nobody does this out of personal ignorance; it is just that our culture is saturated with crazy ideas on the subject.

This is probably true.
Therefore I try to make very few assumptions, except one: they will eventually be much smarter than us.
(If you want, I can justify this based on scalability.)

2) I believe it is entirely possible to build a completely safe AGI. I also believe that this completely safe AGI would be the simplest one to build, so it is likely to be built first. Lastly, I believe that it will not matter a great deal who builds the first AGI (within limits), because an AGI will "self-stabilize" toward a benevolent state.

Why is it simplest to make a safe AGI?

Is it not more difficult to make something that is guaranteed to behave in a particular way? Is it not easier to just make something that could turn out to be safe, unsafe, or anything in between?

When you say it will "self-stabilize" toward a benevolent state, are you not making a large assumption? It will exist in the same world as humans; do we all stabilize into benevolent states?

Unless you introduce this artificially during the evolutionary process of said AGI, by rewarding certain behavior. But what happens when the AGI realizes it has been designed in an evolutionary process with these goals in mind? What will it do?
We cannot know, can we?

Richard Loosemore