On 2/3/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

My desire in this context is to show that, for agents that are
optimal or near-optimal at achieving the goal G under resource
restrictions R, the set of important implicit abstract expectations
associated with the agent (in goal-context G as assessed by an ideal
probabilistic observer) should come close to being consistent.

I believe your hypothesis is correct, and I agree that proving it
would be regarded as an academic achievement. However, I'm personally
not interested in it. Instead, my goal is to find a different sense of
"optimal" that an agent can achieve even when it cannot maintain
consistent beliefs because of knowledge/resource restrictions.

I certainly don't like inconsistency, but I see it as inevitable in an AGI.

Pei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303