Ben,

Though Piaget is my favorite psychologist, I don't think his theory
of developmental psychology applies to AI to the extent you
suggested. One major reason: in a human baby, the mental process of
learning and the biological process of brain development happen
together, while in an AI the former occurs within a mostly fixed
hardware system. Also, an AI system doesn't have to first develop the
capabilities responsible for a human baby's survival.

As a result, for example, Novamente can do some abstract inference (a
formal stage activity) before being able to recognize complicated
patterns (an infantile stage activity).

Of course, certain general principles of education will remain, such
as "teach simple topics before difficult ones", "combine lectures
with questions and exercises", "explain abstract material with
concrete examples", and so on, but I don't think we can get much more
detailed than that with confidence.

As for AIXI: its input comes from a finite "perception space" and a
real-valued "reward space", its output is selected from a fixed
"action space", and for a given history (past inputs and outputs)
each possible input occurs with a fixed (though unknown) probability.
Under these assumptions, the best training strategy will be very
different from the one for Novamente, which is not built on them.
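
For concreteness, here is a toy sketch in Python of the kind of
interaction protocol I mean (just an illustration on my part, not
AIXI itself; the percept names, action names, and environment are
made up):

import random

PERCEPTS = ["dark", "light"]   # finite "perception space" (made-up names)
ACTIONS = ["wait", "move"]     # fixed "action space" (made-up names)

def environment(history, action):
    # The response probabilities are a fixed (though, to the agent,
    # unknown) function of the interaction so far; this toy version
    # only looks at the current action.
    p_light = 0.8 if action == "move" else 0.2
    percept = PERCEPTS[1] if random.random() < p_light else PERCEPTS[0]
    reward = 1.0 if percept == "light" else 0.0   # real-valued "reward space"
    return percept, reward

def policy(history):
    # Placeholder policy; AIXI would instead pick the action that
    # maximizes expected future reward over a mixture of all computable
    # environments consistent with the history.
    return random.choice(ACTIONS)

history = []
for t in range(10):
    a = policy(history)
    o, r = environment(history, a)
    history.append((a, o, r))
print(history)

Roughly speaking, all a teacher of such an agent can do is choose the
environment and shape the reward signal, which is why I expect its
best training strategy to look quite different from Novamente's.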

Given their different research goals and different assumptions about
the interaction between system and environment, different AGI systems
will have very different training/education strategies, which
resemble each other only in a very vague sense. Furthermore, since
all of these systems are far from mature, any design change will
require a corresponding change in training; we cannot go the other
way and fix a training process first, then design the system
accordingly. For these reasons, I'd rather not spend too much time on
training now, though I fully agree that it will become a major issue
in the future.

Pei


On 7/13/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Pei,

That is actually not correct...

I would teach a baby AIXI about the same way I would teach a baby
Novamente, but I assume the former would learn a lot faster... so the
various stages of instruction would be passed through a lot more
quickly....

Furthermore, I expect that the same cognitive structures that would
develop within a Novamente during its learning process would also
develop within an AIXI during its learning process -- though in the
AIXI these cognitive structures would exist within the "currently
active program" being used to choose behaviors (due to its being
chosen as optimal during AIXI's program-space search).

Please note that both AIXI and Novamente are explicitly based on
uncertain probabilistic inference, so that in spite of the significant
differences between the two (e.g. the latter can run on feasible
computational infrastructure, and is much more complicated due to the
need to fulfill this requirement), there is also a significant
commonality.

-- Ben

On 7/13/06, Pei Wang <[EMAIL PROTECTED]> wrote:
> Ben,
>
> For example, I guess most of your ideas about how to train Novamente
> cannot be applied to AIXI.  ;-)
>
> Pei
>
> > Pei,
> >
> > I think you are right that the process of education and mental
> > development is going to be different for different types of AGI
> > systems.
> >
> > However, I don't think it has to be dramatically different for each
> > very specific AGI design.  And I don't think one has to wait till one
> > has a working AGI to put serious analysis into its psychological
> > development and instruction.
> >
> > In the context of Novamente, I have put a lot of thought into how
> > mental development should occur for AGI systems that are
> >
> > -- heavily based on uncertain inference
> > -- embodied in a real or simulated world where they get to interact
> > with other agents
> >
> > Novamente falls into this category, but so do other AGI designs.
> >
> > A few of my and Stephan Bugaj's thoughts on this are described here:
> >
> > http://www.agiri.org/forum/index.php?showtopic=158
> >
> > and here:
> >
> > http://www.novamente.net/engine/
> >
> > (see "Stage of Cognitive Development...")
> >
> > I have a whole lot of informal notes written down on AGI Developmental
> > Psychology, extending the general ideas in this presentation/paper,
> > and will probably write them up as a manuscript one day...
> >
> > -- Ben