What is harder is this: if goals G1 and G2 appear to be related, and the
system has already learned a bunch of ways to fulfill G1, then that fact
should make it easier for the system to find ways to fulfill G2 (than if it
hadn't learned ways to fulfill G1 already).  This is known as "transfer
learning," and it has proved more challenging for AI systems than simply
generating diverse plans for the same goal.

This is because you can generate diverse plans without arriving at a deep
understanding of the goal and the space within which it is situated; but
transfer learning, except in lucky cases, requires real insight...

Ben G




And this is one of our internal, intermediate intelligence tests for
Novamente: for simple, related goals G1 and G2, see how well its transfer
learning capabilities work...

This is a topic currently under discussion on NM's internal email list, for
example...

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email