On 26/09/2014 18:37, Matt Mahoney via AGI wrote:

> But we won't build AGI that way when we can build
> something more complex that requires less computation.
> IBM did not build Watson as a general purpose learner,
> for example, a genetic algorithm with Jeopardy scores
> as a fitness function. Rather, it was a complex program
> (30 person-year effort) without an explicit goal,
> running on a few thousand processors. Since the cost of
> knowledge and computing power both scale nearly linearly,
> the formula for intelligence suggests you want to spend
> roughly equal amounts on each, which is what they did.
> Likewise, the most intelligent computing system in the
> world, the internet, does not have a goal either.
> Reinforcement learning is slow because the information
> content of the signal is low. It is much faster to
> teach a system using words.

Note that the same argument I gave about evolution applies
to reinforcement learning as well. The idea that
reinforcement learning is necessarily slow - because the
reward signal carries so little information - is wrong.
RL agents receive plenty of other sensory input besides
their reward signal, and they can make full use of it,
if it pays for them to do so.
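To make the point concrete, here is a minimal toy sketch (my own illustration, not anything from Matt's post - the environment, names and parameters are all invented). The reward is a single bit, seen only at the goal, yet an agent that learns a transition model from its ordinary observations extracts the environment's structure "for free" and can then plan with it:

```python
import random

N = 10              # corridor length (toy parameter)
GOAL = N - 1
ACTIONS = (-1, +1)  # step left / step right

def env_step(s, a):
    """True (hidden) dynamics: move one step, clipped to the corridor."""
    return max(0, min(N - 1, s + a))

def learn_model(episodes=300, horizon=50, seed=0):
    """Learn transitions purely from the observation stream - no reward used."""
    rng = random.Random(seed)
    model = {}
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = rng.choice(ACTIONS)
            s2 = env_step(s, a)
            model[(s, a)] = s2   # deterministic world: one sample suffices
            s = s2
    return model

def plan(model, gamma=0.9, iters=200):
    """Value iteration on the learned model; reward enters only at GOAL."""
    V = [0.0] * N
    for _ in range(iters):
        for s in range(N):
            V[s] = max((1.0 if model.get((s, a), s) == GOAL else 0.0)
                       + gamma * V[model.get((s, a), s)]
                       for a in ACTIONS)
    return V

model = learn_model()
V = plan(model)
# Greedy policy from the learned model: it moves toward the goal from
# every non-goal state, even though the reward was almost never observed.
policy = [max(ACTIONS, key=lambda a: V[model.get((s, a), s)])
          for s in range(N)]
```

Almost all of the information the agent uses here arrives through the observed transitions, not through the one-bit reward - which is the sense in which the "low information content of the reward signal" objection misses the mark.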

That's part of the reason why utility-based modelling is
so popular: it isn't limited in the way you describe.
Indeed, it seems rather patronising to assume that so
many others are working with such poor models.

Objections to utility-based modelling and goal-directedness
seem widespread - and nearly all of them strike me as
poorly thought through.
--
__________
 |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now