On 2/3/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> However, the question of how much inconsistency is inevitable in an
> AGI is an interesting one.

I don't think an AGI system will try to answer this question. Instead,
it will do its best to resolve whatever inconsistency it runs into, no
matter how much there is. Of course, it won't always be successful in
doing so.

> It implies that, in AGI design, it is worthwhile to focus on making
> one's AGI consistent.

Fully agree. However, "to try to make the beliefs as consistent as
possible" is one thing (which NARS also does), while "to require a
consistent or near-consistent belief system to start with" is another
(which I think is unrealistic, though it would be nice to have).
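To make the contrast concrete, here is a minimal sketch (hypothetical
Python, not actual NARS code) of the first approach: a belief base that
accepts a contradictory judgment and revises the affected belief
locally, pooling the evidence, instead of rejecting input that would
break global consistency. The truth-value arithmetic loosely follows
the NARS revision rule, with evidence weight w = c/(1-c), and ignores
NARS's evidential-overlap bookkeeping:

# Hypothetical sketch: a belief base that tolerates inconsistency and
# revises locally, rather than requiring a consistent base to start with.
# Truth values follow the NARS convention: (frequency, confidence),
# with confidence assumed to lie strictly between 0 and 1.

class BeliefBase:
    def __init__(self):
        self.beliefs = {}  # statement -> (frequency, confidence)

    def add(self, statement, frequency, confidence):
        if statement not in self.beliefs:
            self.beliefs[statement] = (frequency, confidence)
            return
        # Conflicting or overlapping judgment detected: revise locally
        # by pooling the evidence, instead of rejecting the new input.
        f1, c1 = self.beliefs[statement]
        w1 = c1 / (1.0 - c1)                    # weight of prior evidence
        w2 = confidence / (1.0 - confidence)    # weight of new evidence
        w = w1 + w2
        f = (w1 * f1 + w2 * frequency) / w      # evidence-weighted frequency
        c = w / (w + 1.0)                       # confidence grows with evidence
        self.beliefs[statement] = (f, c)

base = BeliefBase()
base.add("ravens are black", 1.0, 0.9)   # strong positive evidence
base.add("ravens are black", 0.0, 0.5)   # contradictory evidence arrives
print(base.beliefs["ravens are black"])  # pooled to (0.9, ~0.91), not rejected

The point is only that inconsistency is handled where and when it is
detected, with whatever evidence is at hand, rather than being excluded
up front.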

> This is the reason why I have chosen a fairly complex definition of
> consistency that has to do with what an observer could infer from the
> system's behaviors, rather than a definition that assumes the system
> is explicitly doing formal logic internally.

Again, to take consistency as an ultimate goal (which is never fully
achievable) and to take it as a precondition (even an approximate one)
are two very different positions. I hope you are not suggesting the
latter, though your posting makes me feel that way.

Pei

> -- Ben



> On Feb 3, 2007, at 8:47 PM, Pei Wang wrote:

>> On 2/3/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>> My desire in this context is to show that, for agents that are
>>> optimal or near-optimal at achieving the goal G under resource
>>> restrictions R, the set of important implicit abstract expectations
>>> associated with the agent (in goal-context G, as assessed by an ideal
>>> probabilistic observer) should come close to being consistent.
>>
>> I believe your hypothesis is correct, and I agree that proving it
>> would be taken as an academic achievement. However, personally I'm not
>> interested in it. Instead, my goal is to find a different sense of
>> "optimal" that is achievable by the agent even when it cannot maintain
>> consistent beliefs because of knowledge/resource restrictions.
>>
>> I surely don't like inconsistency, but I see it as inevitable in an AGI.
>>
>> Pei
