Hi,
Though Piaget is my favorite psychologist, I don't think his theory of
developmental psychology applies to AI to the extent you suggested.
One major reason is: in a human baby, the mental learning process in
the mind and the biological development of the brain happen
together, while in AI the former will occur within a mostly fixed
hardware system. Also, an AI system doesn't have to first develop the
capabilities responsible for a human baby's survival.
I agree that Piagetan psychology doesn't map exactly to AGI, but I
think it can be used as one of the central inspirations for the
creation of an AGI developmental psychology.
As a result, for example, Novamente can do some abstract inference (a
formal stage activity) before being able to recognize complicated
patterns (an infantile stage activity).
This is a subtle point that we addressed in our paper on AGI dev
psych, which I referenced in my previous post.
While Novamente can in principle do some advanced abstract inference
at an early stage of development, a more important point is that at
this early stage it cannot **flexibly and adaptively control**
advanced abstract inference.
If you define "having a type of inference" as "being able to
adaptively and flexibly control this type of inference", then you find
that an AGI like Novamente may well progress through roughly the same
stages as human infants, in terms of mastering different kinds of
inference.
Of course, certain general principles of education will remain, such
as "to teach simple topics before difficult ones", "to combine
lectures with questions and exercises", "to explain abstract materials
with concrete examples", and so on, but I don't think we can specify
much more detail with confidence.
I don't agree... I think there are relatively universal rules of
cognitive development that will apply to a broad spectrum of
developing intelligences.
Human dev. psych. has often not been formulated with this kind of
generalizability in mind. But if one seeks to transform dev. psych.
principles into more general principles, one can often come up with
some nice and apparently plausible conclusions. Stephan and I tried
to take a step in this direction in our paper.
As for AIXI, since its input comes from a finite "perception space"
and a real-number "reward space", its output is selected from a fixed
"action space", and for a given history (past input and output) there
is a fixed (though unknown) probability for each possible input to
occur, the best training strategy will be very different from the case
of Novamente, which is not based on such assumptions.
Well, I don't want to pursue AIXI developmental psychology too far --
I haven't thought about it deeply and don't have time to do so right
now. Clearly it would be very different from the dev. psych. of a
realistically-bounded-resources AGI system.
However I think that any AGI system founded on heuristic uncertain
inference is going to obey a certain set of principles regarding
cognitive development. This might well include AIXItl for
sufficiently small t and l, but I haven't thought about this very
hard, so it might not be true.
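For concreteness, the AIXI-style setting described above -- a finite perception space, real-valued rewards, a fixed action space, and percept probabilities that are fixed (though unknown to the agent) given the interaction history -- can be sketched as a simple interaction loop. This is only an illustrative toy, not AIXI itself; the percept and action names and the reward rule are invented for the example, and the random policy merely stands in for AIXI's expectimax action choice.

```python
import random

# Toy sketch of the agent-environment setting discussed above.
# All names and the reward rule are illustrative assumptions.

PERCEPTS = ["dark", "light"]   # finite perception space
ACTIONS = ["left", "right"]    # fixed action space

def environment(history):
    """Emit (percept, reward); the toy rule rewards having chosen
    'right' immediately after perceiving 'light'."""
    percept = random.choice(PERCEPTS)
    reward = 1.0 if history and history[-1] == ("light", "right") else 0.0
    return percept, reward

def agent(history):
    """Trivial random policy standing in for AIXI's expectimax choice."""
    return random.choice(ACTIONS)

history = []           # past (percept, action) pairs
total_reward = 0.0
percept, _ = environment(history)
for _ in range(10):
    action = agent(history)
    history.append((percept, action))
    percept, reward = environment(history)
    total_reward += reward
```

The point of the sketch is that the environment's conditional distribution is fixed in advance, so "training" AIXI reduces to reward design within this loop -- quite unlike Novamente, which makes no such assumptions.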
Furthermore, since all these systems are far from mature, any design
change will require a corresponding change in training.
This is an exaggeration, IMO. In the case of Novamente, I find that
design changes in the system tend NOT to necessitate corresponding
changes in training. Rather, I think the logic of cognitive
development poses a set of constraints on AGI design, and the design
may be changed radically within these constraints without changing the
developmental and teaching approach.
On the contrary, we cannot decide on a training process first and then
design the system accordingly.
Within a general AGI approach, one can decide on a training process,
and then figure out how to specify a particular AGI design within
that approach, using the training process as one among many sources
of guidance.
-- Ben