Goals don't necessarily need to be complex or even explicitly defined.  One
"goal" might just be to minimise the difference between experiences (whether
real or simulated) and expectations.  In this way the system learns what a
normal state of being is, and detects deviations from it.
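
To make that concrete, here's a rough sketch (in Python, with purely
illustrative names, update rule and threshold, not anyone's actual system)
of the kind of mechanism I have in mind: a toy agent keeps a running
expectation over a scalar sensor stream and flags observations that deviate
from what it has learned to be normal.

# Minimal sketch: learn a "normal state of being" as a running
# expectation over a sensor stream, then flag deviations.  The
# parameters (alpha, threshold) are illustrative choices, not a spec.

class ExpectationModel:
    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # learning rate for the expectation
        self.threshold = threshold  # how many std devs count as abnormal
        self.mean = 0.0             # current expectation
        self.var = 1.0              # running estimate of normal variability

    def observe(self, x):
        """Update expectations toward the observation; return True if x deviates."""
        error = x - self.mean       # difference between experience and expectation
        deviant = abs(error) > self.threshold * self.var ** 0.5
        # Minimise future error by moving the expectation toward reality.
        self.mean += self.alpha * error
        self.var += self.alpha * (error * error - self.var)
        return deviant

if __name__ == "__main__":
    model = ExpectationModel()
    stream = [1.0, 1.1, 0.9, 1.0, 1.05, 5.0, 1.0]   # 5.0 is the surprise
    for x in stream:
        if model.observe(x):
            print("deviation detected:", x)

Nothing about the idea depends on this particular update rule; any
predictive model that is corrected toward observations would do.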

On 21/11/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:

Bob Mottram wrote:
>
>
> On 17/11/06, *Charles D Hixson* <[EMAIL PROTECTED]> wrote:
>
>     A system understands a situation that it encounters if it
>     predictably acts in such a way as to maximize the probability of
>     achieving its goals in that situation.
>
> I'd say a system "understands" a situation when its internal modeling
> of that situation closely approximates its main salient features, such
> that the difference between expectation and reality is minimised.
> What counts as salient depends upon goals.  So for example I could say
> that I "understand" how to drive, even if I don't have any detailed
> knowledge of the workings of a car.
>
> When young animals play they're generating and tuning their models,
> trying to bring them in line with observations and goals.
That sounds reasonable, but how are you determining the match of the
internal modeling to the "main salient features"?  I propose that you do
this based on its actions, and thus my definition.  I'll admit,
however, that this still leaves the problem of how to observe what its
goals are, but I hypothesize that it will be much simpler to examine the
goals in the code than to examine the internal model.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303