Matt Mahoney wrote:
> I don't think there is a simple answer to this problem. We observe very
> complex behavior in much simpler organisms that lack long-term memory or the
> ability to learn. For example, bees are born knowing how to fly, build hives,
> gather food, and communicate its location.
Indeed, and we observe complex behaviors in turbulent fluid flow,
plasmas, and other nonliving self-organizing systems as well....
But I don't see any of this as terribly relevant to the question I was
asking ;-)
Bees are born knowing how to build hives, but are children born knowing
how to build houses? I have a feeling a human's cognitive architecture
and dynamics are quite different from those of a bee...
> The complexity of inductive bias is bounded by the complexity of your DNA,
> about 6 x 10^9 bits. This is probably too high by a few orders of magnitude,
> just as the number of synapses overestimates the complexity of AGI.
> Nevertheless, we risk repeating the error of GOFAI. Early AI researchers were
> led astray by the successes of explicitly coding knowledge into toy systems.
> Now we know to use statistical and machine learning techniques, but we may
> still be led astray by oversimplified models of inductive bias. Certain
> aspects of the cerebral cortex are highly uniform, which suggests a simple
> model. But the rest of the brain has a complex structure that is poorly
> understood.
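As an aside, the 6 x 10^9 figure is simple arithmetic: roughly 3 x 10^9 base pairs, each drawn from 4 possible bases, gives 2 bits per base. A back-of-envelope sketch (the round numbers are my assumptions, not from the thread):

```python
# Rough upper bound on the information content of the human genome.
BASE_PAIRS = 3_000_000_000   # assumed: ~3 x 10^9 base pairs
BITS_PER_BASE = 2            # 4 possible bases -> log2(4) = 2 bits each

total_bits = BASE_PAIRS * BITS_PER_BASE   # 6 x 10^9 bits
total_bytes = total_bits // 8             # 750,000,000 bytes, i.e. ~750 MB

print(total_bits, total_bytes)
```

So the entire bound fits on a single DVD, and as noted it is almost certainly loose by orders of magnitude, since most of the genome does not encode cognitive architecture.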
I'm not thinking that a systematic list of known human inductive biases
could be derived from genetics or neuroscience (in the near term), but
rather from cognitive psychology.
And, I'm not thinking to use such a list as the basis for creating an
AGI, but simply as a tool for assisting in thinking about an
already-existing AGI design that was created based on other principles.
My suspicion is that all the known and powerful human inductive biases
are already built into Novamente in various ways, so that comparing such
a list against the Novamente design would be helpful to people (team
members and advisors) in understanding the design itself.
Also btw Novamente is not a pure statistical/machine-learning system.
It does not consist of a statistical learning algorithm combined with an
explicit prior distribution encompassing inductive biases. It can be
interpreted that way, but that is not the simplest or most direct way of
interpreting the design. The learning algorithms and the biases are
complex and intertwined in the design -- an aspect which, on a very high
level, has got to be qualitatively similar to the brain.
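To make the contrast concrete, here is a toy illustration (my own, not from the thread) of what a "learning algorithm plus explicit prior" system looks like: a Beta-Bernoulli estimator, where the prior parameters (a, b) carry the entire inductive bias and the update rule carries none of it. The point being made above is that Novamente does not factor this cleanly.

```python
# Toy example of an *explicit* inductive bias: Beta(a, b) prior over a
# coin's heads-probability, with the standard conjugate posterior mean.
def posterior_mean(heads, tails, a=2.0, b=2.0):
    # The bias lives entirely in (a, b); the update rule is bias-free.
    return (heads + a) / (heads + tails + a + b)

print(posterior_mean(3, 1))   # data pulled toward the prior mean of 0.5
print(posterior_mean(0, 0))   # no data: pure prior, returns 0.5
```

In a system where bias and learning are intertwined, no such clean separation into a prior term and an update rule is available, which is the claim above.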
> AGI might still be harder than we think. It has happened before.
Of course it might be -- we can't know for sure till the task of
building an AGI is successfully done. But I see no reason to operate
under that pessimistic assumption. It looks to me like AGI is
achievable given current technology and knowledge; and in hindsight, I
think that future scientists will look back at us and think we were
total idiots for taking too long to do something so relatively
simple.... But IMO we do need to resist our urge for over-simple
solutions like "it's just reasoning" or "it's just statistical learning"
and embrace the necessity for a large-scale, integrative solution (which
is what the brain is).
-- Ben
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303