2008/8/25 Terren Suydam <[EMAIL PROTECTED]>:
>
> --- On Sun, 8/24/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
>> wrong. This ability might be an end in itself, the whole
>> point of
>> building an AI, when considered as applying to the dynamics
>> of the
>> world as a whole and not just AI aspect of it. After all,
>> we may make
>> mistakes or be swayed by unlucky happenstance in all
>> matters, not just
>> in a particular self-vacuous matter of building AI.
>
> I don't deny the possibility of disaster. But my stance is, if the only 
> approach you have to mitigate disaster is being able to control the AI 
> itself, well, the game is over before you even start it. It seems profoundly 
> naive to me that anyone could, even in principle, guarantee a 
> super-intelligent AI to "renormalize", in whatever sense that means. Then you 
> have the difference between theory and practice... just forget it. Why would 
> anyone want to gamble on that?

You may be interested in Gödel machines. I think this roughly fits
the template Eliezer is looking for: something that reliably
self-modifies to be better.

http://www.idsia.ch/~juergen/goedelmachine.html

Although he doesn't like explicit utility functions, provable
improvement is something he does want. What axioms you would accept
for proofs on which humanity's fate rests, though, I really don't
know.

Personally, I think strong self-modification is not going to be
useful: the very act of trying to understand how the code for an
intelligence is assembled will change how some of that code is
assembled. That is, I think intelligences have to be weakly
self-modifying. In the same way that bits of the brain rewire
themselves locally and subconsciously, so too will AI need the same
sort of changes in order to keep up with humans. Computers at the
moment can do many things better than humans (logic, Bayesian
statistics), but they are really lousy at adapting and managing
themselves, so the blind spots of infallible computers are always
exploited by slow, error-prone, but changeable humans.

  Will Pearson


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
