On 09/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>  Likewise, an artificial general
> intelligence is not "a set of environment states S, a set of actions A,
> and a set of scalar "rewards" in the Reals".)
>
> Watching history repeat itself is pretty damned annoying.


While I would agree with you that the set of environmental states and
actions is not well defined for anything we would call an intelligence,
I would argue that the concept of rewards (though probably not over the
Reals) does have a place in understanding intelligence.

It is a very simple notion, and I wouldn't apply it to everything the
behaviourists would (we don't get direct rewards for solving crossword
puzzles). But we still need a simple explanation of how simple
chemicals can lead to the alteration of complex goals. How and why do
we get addicted? What is it about morphine that allows a brain to be
altered into one that wants more morphine, when the desire for morphine
didn't previously exist?

That would be like bit-flipping a piece of code or a variable in an AI,
and then the AI deciding that flipping that bit was somehow good and
should be sought after.

The RL answer would be that the reward variable was altered.
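
To make that concrete, here is a minimal sketch (a toy one-state
Q-learner I've made up for illustration; the action names and reward
numbers are mine, not taken from any particular model) in which one
action does nothing useful in the world but writes a large value
directly into the reward signal:

import random

ALPHA = 0.1      # learning rate
EPSILON = 0.1    # exploration rate
ACTIONS = ["eat", "morphine"]   # hypothetical actions, for illustration only

q = {a: 0.0 for a in ACTIONS}   # learned value of each action

def reward_for(action):
    if action == "eat":
        return 1.0    # an ordinary, "earned" reward
    # "morphine": nothing useful happens in the world, but the reward
    # variable itself gets overwritten with a large value.
    return 10.0

for step in range(10000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = reward_for(a)
    # tabular Q-learning update for a one-state problem
    q[a] += ALPHA * (r - q[a])

print(q)

After a few thousand steps the learned value for "morphine" sits near
10 while "eat" sits near 1, so the agent ends up seeking an action it
had no prior desire for; nothing in its goals was edited by hand, only
the reward variable.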

If your model of motivation can explain that sort of change, I would
be interested to know more. Otherwise I have to stick with the best
models I know.

Will
