More simply even than that, Pei: when your system comes across a task and a
choice of options, and it sees no benefit greater than 5% for any one option
(an arbitrary setting, which could just as well be 0%), does it choose randomly
between the choices?

Doesn't this make the system non-deterministic?
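
Something like this is what I have in mind, as a minimal sketch in Python; it
is purely my own illustration, not taken from NARS, and the 5% margin, the
benefit function and all the names are made up:

    import random

    BENEFIT_MARGIN = 0.05  # the arbitrary 5% setting from my question; could just as well be 0

    def choose(options, benefit):
        """Pick the option with the highest estimated benefit, but fall back to
        a random pick when nothing beats the runner-up by more than the margin."""
        ranked = sorted(options, key=benefit, reverse=True)
        if len(ranked) == 1:
            return ranked[0]
        best, runner_up = ranked[0], ranked[1]
        if benefit(best) - benefit(runner_up) > BENEFIT_MARGIN:
            return best               # clear winner: a deterministic choice
        return random.choice(ranked)  # near-tie: a random, non-deterministic choice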

Otherwise agree with your description.

James Ratcliff

Pei Wang <[EMAIL PROTECTED]> wrote:

Mike,

I believe many of the confusions on this topic are caused by the
following "self-evident" belief: "A system is fundamentally either
deterministic or non-deterministic. The human mind, with free will, is
fundamentally non-deterministic; a conventional computer, being a
Turing Machine, is fundamentally deterministic." Based on such a belief, many
people think AGI can only be realized by something that is
"non-deterministic by nature", whatever that means.

This belief, though it works fine in some other contexts, is an
oversimplification in the AI/CogSci context. Here, as I said before,
whether a system is deterministic should not be taken as an intrinsic
property of the system, but as something that depends on the description of it.

For example, NARS is indeed "nondeterministic" in the usual sense,
that is, after the system has obtained a complicated experience, it
will be practically impossible for either an observer or the system
itself to accurately predict how the system will handle a
user-provided task. At another level of description, NARS is still a
deterministic Turing Machine, in the sense that its state change is
fully determined by its initial state and its experience, step by
step.
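
To make the two levels concrete, here is a toy example, written only for this
email and not taken from NARS (all the names are invented). Every state
transition is a pure function of the current state and the next piece of
experience, so the machine is deterministic; yet its answer to a task depends
on its entire experience history, so in practice nobody can predict the answer
without replaying that history step by step.

    import hashlib

    class ToySystem:
        """Deterministic at the mechanism level: the next state is a pure
        function of the current state and the next experience item."""

        def __init__(self):
            self.state = b"initial"

        def experience(self, item: str) -> None:
            # Deterministic transition: same state + same input -> same new state.
            self.state = hashlib.sha256(self.state + item.encode()).digest()

        def handle(self, task: str) -> str:
            # The answer is fully determined by the initial state plus the complete
            # experience, yet practically unpredictable without replaying all of it.
            verdict = hashlib.sha256(self.state + task.encode()).digest()[0] % 2
            return "accept" if verdict else "reject"

Two copies of such a system, fed identical experience item by item, give
identical answers; an observer who missed even one item has, in practice, no
way to predict how a given task will be handled.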

Now the important point is: when we say that the mind is
"nondeterministic", in what sense are we using the term? I believe it
is like "it will be practically impossible for either an observer or
the mind itself to accurately predict how the system will handle a
problem", rather than ""it will be theoretically impossible for an
observer to accurately predict how the system will handle a problem,
even if the observer has full information about the system's initial
state, processing mechanism, and detailed experience, as well as has
unlimited information processing power". Therefore, for all practical
considerations, including the ones you mentioned, NARS is
nondeterministic, since it doesn't process input tasks according to a
task-specific algorithm.
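
By a "task-specific algorithm" I mean something like the first function in the
toy contrast below (again my own illustration, not NARS code): a fixed
procedure that maps the same task to the same answer regardless of history, as
opposed to processing in which the handling of a task depends on whatever the
system has experienced so far.

    def task_specific(task: str) -> str:
        # A fixed procedure: the same task always yields the same answer,
        # no matter what the system has experienced before.
        return task.upper()

    def experience_dependent(history: list, task: str) -> str:
        # No fixed per-task procedure: the answer depends on the accumulated
        # history, so the same task may be handled differently at different times.
        return task.upper() if len(history) % 2 == 0 else task.lower()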

[If the above description still sounds confusing or contradictory,
you'll have to read my relevant publications. I don't have the
intelligence to explain everything by email.]

Pei


On 5/6/07, Mike Tintner wrote:
> Pei,
>
> Thanks for stating your position (which I simply didn't know about before -
> NARS just looked at a glance as if it MIGHT be nondeterministic).
>
> Basically, and very briefly, my position is that any AGI that is to deal
> with problematic decisions, where there is no right answer, will have to be
> freely, nondeterministically programmed to proceed on a trial and error
> basis - and that is just how human beings are programmed.
> (Nondeterministically programmed should not be simply equated with current
> kinds of programming - there are an infinity of possible ways of programming
> deterministically, ditto for nondeterministically).

_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
