Hi,
Because we use a lot of evolutionary learning methods, it will work more
like this: a whole population of Novamentes (10 or so for starters, later
perhaps many more) repeatedly tries out new MindAgents (cognitive-control
objects) on some test cognitive problems, and we see how well each one does.
Another Novamente, the controller, studies which of the new MindAgents work
well, mines patterns among these, and creates new MindAgents to try out....
So there is no human in the learning loop....
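To make the loop concrete, here is a minimal toy sketch of the scheme described above. All of the specifics are illustrative assumptions, not Novamente's real design: a MindAgent is stood in for by a small parameter vector, the "test cognitive problem" is a toy fitness function, and the controller's pattern mining is just averaging over the well-performing candidates.

```python
import random

# Toy stand-ins -- not the real Novamente API. A "MindAgent" here is just
# a list of three parameters, and the "test problem" rewards closeness to
# a hidden target vector.

def evaluate(agent):
    """Score a candidate MindAgent on the toy test problem
    (higher is better; 0 is perfect)."""
    target = [0.5, 0.2, 0.8]
    return -sum((a - t) ** 2 for a, t in zip(agent, target))

def mine_patterns(survivors):
    """Controller step: extract a simple pattern (the per-parameter mean)
    from the MindAgents that worked well."""
    n = len(survivors)
    return [sum(a[i] for a in survivors) / n for i in range(3)]

def new_agents(pattern, count, noise=0.1):
    """Controller step: create new MindAgents to try out
    by perturbing the mined pattern."""
    return [[p + random.uniform(-noise, noise) for p in pattern]
            for _ in range(count)]

random.seed(0)
# A whole population of candidates (10 or so for starters).
population = [[random.random() for _ in range(3)] for _ in range(10)]
for generation in range(20):
    scored = sorted(population, key=evaluate, reverse=True)
    survivors = scored[:3]              # the MindAgents that worked well
    pattern = mine_patterns(survivors)  # controller mines patterns among them
    population = survivors + new_agents(pattern, 7)

best = max(population, key=evaluate)
print(round(evaluate(best), 4))
```

The key point the sketch illustrates is the second paragraph above: the whole generate-evaluate-mine cycle runs with no human in the loop, and the surviving parameter vectors need not be interpretable to a human observer.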
Furthermore, it may be very hard for a human to understand the intricate
details of a learned procedure (e.g. an automatically learned MindAgent)...
just as understanding the details of our own adaptively learned neural
wiring is very hard....
--
Ben
