Hi,

> In my book I say that consciousness is part of the way
> the brain implements reinforcement learning, and I think
> something like that is necessary for a really robust
> solution. That's why I think it will take 100 years.

I would say, rather, that consciousness and reinforcement learning are BOTH
consequences of having the right kind of integrative AGI system...

> > So ultimately, I don't think that ultra-clever
> > pure-reinforcement-learning schemes like Baum's are the road to AGI,
> > although they may play a role.
> >
> > It wouldn't be the first time in the history of science that a problem
> > looked close-to-impossible from one perspective, but became manageable
> > via a perspective-shift.
>
> I hope that when I say something will take 100 years,
> that indicates that I think it is not straightforward
> and will require a number of major conceptual leaps.

Yeah, of course.  But once one of the conceptual leaps has been made, then a
new time-estimate will become appropriate.  A conceptual leap can shrink or
expand a time estimate by orders of magnitude ;)

Whether we've actually made the first critical conceptual leap in the
Novamente design remains to be seen, of course...

Interestingly, one of the funkiest aspects of the Novamente design relates
directly to the limitations of the AIXItl design we've been talking about.

Novamente represents knowledge on two levels -- Atoms (nodes and links) and
maps (patterns of node/link activation).  But there is a process that
creates Atoms representing maps, by studying the system's mind as a whole
and recognizing patterns in it... these patterns are maps, and they can be
linked to specific Atoms.  This kind of self-study is critical to
Novamente's mind-process (as hypothesized -- it is not yet implemented),
and as Eliezer points out, it is a kind of dynamic that AIXItl and similar
formal systems are not capable of.
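To make the two-level idea concrete, here is a minimal sketch -- not the
actual Novamente implementation, which (as noted) doesn't exist yet for this
part.  All the names (`Atom`, `AtomSpace`, `form_maps`) and the brute-force
pair-counting are my own illustrative assumptions: the explicit layer is a
set of Atoms, the emergent layer is observed as activation snapshots, and a
"map formation" pass reifies recurring co-activation patterns as candidate
new Atoms.

```python
from collections import Counter
from itertools import combinations

class Atom:
    """A node or link in the explicit knowledge layer (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.active = False

class AtomSpace:
    """A toy container for Atoms; stands in for the hypergraph layer."""
    def __init__(self):
        self.atoms = {}

    def add(self, name):
        self.atoms[name] = Atom(name)
        return self.atoms[name]

    def snapshot(self):
        """One observation of the emergent layer: which Atoms are active now."""
        return frozenset(a.name for a in self.atoms.values() if a.active)

def form_maps(snapshots, min_support=2, pattern_size=2):
    """Reify recurring co-activation patterns ('maps') as candidate Atoms.

    Brute-force counting of co-active pairs, purely for illustration;
    a real system would need scalable pattern mining over larger patterns.
    """
    counts = Counter()
    for snap in snapshots:
        for combo in combinations(sorted(snap), pattern_size):
            counts[combo] += 1
    return [combo for combo, n in counts.items() if n >= min_support]

# Usage: three activation snapshots; 'cat' and 'purr' co-occur twice,
# so that pattern is promoted to a candidate map-Atom.
space = AtomSpace()
for name in ("cat", "purr", "dog"):
    space.add(name)

snapshots = []
for active in [("cat", "purr"), ("cat", "purr", "dog"), ("dog",)]:
    for a in space.atoms.values():
        a.active = a.name in active
    snapshots.append(space.snapshot())

maps = form_maps(snapshots)  # recurring patterns, ready to be added as Atoms
```

The point of the sketch is the loop it enables: once a map is reified as an
Atom, the explicit layer can reason about a pattern that only existed
implicitly in the system's dynamics.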

This kind of constant feedback between the explicit knowledge layer and the
emergent knowledge layer is a prime example of something that I believe is
tremendously helpful for reinforcement learning, but which does not, on the
surface, look like reinforcement learning.

-- Ben G
