Ben Goertzel wrote:

Of course, this doesn't solve the problem that finite resources make true probabilistic accuracy impossible. AGI systems with finite resources will in fact not be ideally rational betting machines; they will not fully obey Cox's axioms, and an ideal supermind would be able to defeat them via clever betting that exploits their weaknesses.

Any entity not logically omniscient can be trivially bilked by a logically omniscient bookie, because the non-logically-omniscient player assigns positive probabilities to events that are logically impossible.
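To make the bilking concrete, here is a toy sketch in Python (a minimal illustration of my own; the 10% odds, the $100 stake, and the function names are all invented for the example): the bounded agent quotes positive odds on a proposition that is in fact a contradiction, and the omniscient bookie simply sells it a bet at exactly those odds, pocketing a guaranteed profit.

    # A minimal sketch of the bilking argument above (invented numbers/names).

    def bounded_agent_probability() -> float:
        # A resource-bounded agent can't always detect that a proposition
        # is a contradiction, so it hedges and quotes 10%.
        return 0.10

    def bookie_profit(payout_if_true: float) -> float:
        # The logically omniscient bookie knows the proposition is a
        # contradiction. It sells the agent a ticket at the agent's own
        # quoted odds: the agent pays p * payout now and collects the
        # payout only if the impossible event occurs -- which it never does.
        p = bounded_agent_probability()
        ticket_price = p * payout_if_true  # the agent considers this fair
        actual_payout = 0.0                # the event is logically impossible
        return ticket_price - actual_payout

    print("Bookie's guaranteed profit on a $100 ticket: $%.2f"
          % bookie_profit(100.0))

Running this prints a sure profit of $10.00: the bookie loses nothing on any outcome, because no outcome can make a contradiction true.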

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

