Hi,

> Baum's algorithm is very carefully worked out, but the
> reinforcement values it learns from are simple. And a
> successful reinforcement learning algorithm is one that
> can work from any reinforcement values in any situation.

Yeah, I don't think that Baum's algorithm needs special reinforcement values.

However, it is VERY slow and seems to require very careful parameter tuning.

As Moshe pointed out to me, Marcus Hutter and his students tried to
replicate Baum's work, with mixed results:

go to

http://www.idsia.ch/~marcus/

click on "Artificial Intelligence" and scroll down to

Market-Based Reinforcement Learning in Partially Observable Worlds (with I.
Kwee & J. Schmidhuber), Proceedings of the 11th International Conference on
Artificial Neural Networks (ICANN-2001), pp. 865-873
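
For concreteness, here is a tiny Python sketch of the market-based ("Hayek
machine") idea behind this line of work: agents bid for control of the
world, the winning bidder pays its bid to the previous owner (so money is
conserved inside the economy), collects any external reward, and broke
agents are replaced by mutated copies of solvent ones. The environment
stub, the bid rule, and all the constants below are my own illustrative
assumptions, not Baum's or Hutter's actual setup:

import random

class Agent:
    """One agent in the economy; its only 'policy' is how much to bid."""
    def __init__(self, bid_scale):
        self.wealth = 1.0           # initial capital (assumed value)
        self.bid_scale = bid_scale  # fraction of wealth offered as a bid

    def bid(self):
        return self.bid_scale * self.wealth

def run_economy(agents, env_reward, rounds=1000):
    prev_owner = None
    for _ in range(rounds):
        # Highest bidder buys control of the world this round.
        winner = max(agents, key=lambda a: a.bid())
        price = winner.bid()
        winner.wealth -= price
        if prev_owner is not None:
            # Payment goes to the previous owner, so money is conserved
            # within the economy ("property rights").
            prev_owner.wealth += price
        # External reward flows to the current owner.
        winner.wealth += env_reward()
        prev_owner = winner
        # Broke agents are replaced by mutated copies of the richest one
        # -- the evolutionary half of the scheme.
        for i, a in enumerate(agents):
            if a.wealth <= 0.0:
                parent = max(agents, key=lambda x: x.wealth)
                scale = parent.bid_scale + random.gauss(0.0, 0.05)
                agents[i] = Agent(bid_scale=min(1.0, max(0.01, scale)))
    return agents

# A noisy reward source standing in for a real (partially observable) world.
random.seed(0)
population = [Agent(bid_scale=random.random()) for _ in range(20)]
run_economy(population, env_reward=lambda: random.gauss(0.1, 0.3))
print(sorted(round(a.wealth, 2) for a in population))

Even in this toy form you can see why the real thing is slow and fussy:
the mutation rate, initial capital, and removal threshold all interact,
which matches the "very careful parameter tuning" point above.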


> Despite the wonderful work of Eric Baum and others,
> developing really robust reinforcement learning is a
> really hard challenge. Which is why my estimate for
> the arrival of SI is 2100 rather than 2010 or 2020.
> I hope I'm wrong, because I want to meet a SI.


Bill, I think that *thinking about* the AGI problem as a problem of
"developing really robust reinforcement learning" is CORRECT but
UNPRODUCTIVE.  I think that if you frame the problem instead as one of
creating an integrated mind-system, and then build that mind-system, you
will find that robust reinforcement learning comes along as a coordinated
emergent behavior of the various components.

So ultimately, I don't think that ultra-clever pure-reinforcement-learning
schemes like Baum's are the road to AGI, although they may play a role.

It wouldn't be the first time in the history of science that a problem
looked close-to-impossible from one perspective, but became manageable via a
perspective-shift.

-- Ben G
