2008/6/22 Vladimir Nesov <[EMAIL PROTECTED]>:

>
> Two questions:
> 1) Do you know enough to estimate which scenario is more likely?

Well, since intelligence explosions haven't happened previously in our
light cone, an exploding intelligence can't be a simple physical
pattern, so I think non-exploding intelligences have the evidence for
being simpler on their side. So we might find them more easily. I also
think I have solid reasoning for why an intelligence explosion is
unlikely, though it requires paper length rather than post length. So
yes, I think I do, but should I trust my own rationality?

Getting a bunch of people together to argue for both paths seems like
a good bet at the moment.

> 2) What does this difference change for research at this stage?

It changes the focus of research from a search for simple principles
of intelligence (ones that could be improved easily on the fly) to an
approach that expects intelligence creation to be a societal process
spanning decades.

It also means secrecy is no longer the default position. If you take
the intelligence explosion scenario seriously, you won't write
anything in public forums that might help other people make AI, since
bad or ignorant people might get hold of it and cause the first
explosion.

> Otherwise it sounds like you are just calling to start a cult that
> believes in this particular unsupported thing, for no good reason. ;-)
>

Hope that gives you some reasons. Let me know if I have misunderstood
your questions.

  Will Pearson

