Here is an idea I have been pondering for a while:

One of my primary ideas for how to deal with truly hard adaptation problems,
such as AGI learning, has always been "parallelization of adaptation" (the
other idea I like to talk about is meta-adaptation). Anyhow,
parallelization of adaptation means that if we are to learn a certain
system, it can significantly reduce the complexity if we can somehow learn
the parts of the system individually. For example, if the complexity of a
system is on the order of X bits, then it would take on the order of 2^X
resources to learn the system. But if instead the system can be split into
two parts that can be understood individually, the cost is reduced
drastically to 2^(X/2 + 1). If the AGI system can separate individual parts
of the system even further, the learning rate can increase drastically.
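To make the arithmetic concrete, here is a small illustration (my own numbers, not from any particular AGI proposal) of how splitting a system into two independently learnable halves shrinks a brute-force hypothesis search:

```python
# Brute-force learning cost: enumerating every hypothesis for a system
# whose description is `bits` bits long takes 2**bits trials.

def search_cost(bits: int) -> int:
    return 2 ** bits

X = 40  # illustrative system complexity in bits

whole = search_cost(X)            # learn the system as one black box: 2^40
split = 2 * search_cost(X // 2)   # learn two X/2-bit halves: 2^(X/2 + 1)

print(whole)           # 1099511627776
print(split)           # 2097152
print(whole // split)  # speedup of 2^(X/2 - 1) = 524288
```

The speedup grows exponentially with X, which is the whole argument: each further split that the learner can justify pays for itself many times over.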

Without any kind of biases about the structure of the world, AGI learning
seems like a nightmare. Basically, we have an AGI machine that needs to learn
and to some extent influence the workings of a black-box system of arbitrary
complexity. So when we build AGI, I think it is necessary that we inject
into it some basic biases concerning the nature of this world black box. We
do not want to give the AGI system any firm assumptions about how the world
works, as that would be too narrow-AI-ish. But the biases we give to the AGI
system might influence which assumptions it will try out first. Also, my
point is that we ought to give the AGI system biases so that it can
parallelize adaptation as much as possible.

I think that the most gentle way of giving an AGI system biases is to give
it a metaphysics framework for understanding the world. The nice thing about
metaphysics is that it really does not limit what can be expressed; it
just gives guidance. For example, a trained human mathematician can deal
with 5-dimensional spaces, yet it is extremely difficult to work with 5
dimensions on an intuitive level, so it seems that our brain is heavily
biased towards 3 dimensions. If we are serious about building an AGI that
could operate on limited hardware, I think we need to consider things like
that.

So, the purpose of this email is to try to outline a metaphysics system that
could benefit parallelization of adaptation. Here are a number of
metaphysical principles that might be useful for this:

1. On the most abstract level, there are objects that are connected to each
other through data-streams. An object can only influence another object
if they are connected by some data-stream. The purpose is to make it easy to
reason about isolation and connection. If the AGI system can isolate an
object from other objects, the idea is that the object can be studied without
involving the complexity of the rest of the world.

2. We can also introduce the bias that complex objects might consist of
sub-objects, or that there exist interface objects that stand as a barrier
between two groups of objects.

3. According to our current understanding of our world, there are categories
of objects that have similar properties. On an astronomical level, the same
physical laws give rise to groups of similar objects, such as stars, planets,
etc. On a geological level, universal rules create large sets of objects
with similar properties: lakes, islands, etc. On a biological level,
replication mechanisms give an even more direct reason for why there are a
lot of objects with similar properties. So this might suggest that an AGI
system could benefit from biases that assume that objects that appear the
same could work in the same way. Identifying the appearance of an object is
easier than identifying the whole object.

4. The metaphysics could be more specific about spatiality. We can define
that each object has a volume in space, and can only connect with another
object if both objects share a common surface.

5. It is also very common for objects to have different densities. The
difference between solid objects and space objects is that other objects can
move through space objects. Space objects can also relay data-streams
between objects that reside in the same space, for example, data that is
carried through visual or audio signals.

6. The AGI system can model itself as one of these objects.
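As a minimal sketch of principles 1 and 2 (my own illustration; the class and names are hypothetical, not from any existing system), objects and their data-streams can be modeled as a directed graph, in which "isolation" of a group of objects becomes a simple boundary check:

```python
# Sketch: objects influence each other only through explicit data-streams,
# so isolation is a property of the stream graph, not of the objects' internals.

from collections import defaultdict

class World:
    def __init__(self):
        self.streams = defaultdict(set)  # object -> objects it sends data to

    def connect(self, src, dst):
        self.streams[src].add(dst)

    def neighbours(self, obj):
        """Every object connected to `obj` by an in- or outgoing stream."""
        ins = {s for s, dsts in self.streams.items() if obj in dsts}
        return ins | set(self.streams[obj])

    def is_isolated(self, group):
        """True if no data-stream crosses the boundary of `group`."""
        group = set(group)
        return all(self.neighbours(o) <= group for o in group)

w = World()
w.connect("sun", "plant")
w.connect("plant", "herbivore")
print(w.is_isolated({"sun", "plant", "herbivore"}))  # True
print(w.is_isolated({"plant"}))                      # False
```

An isolated group can then be learned as its own sub-problem, which is exactly the parallelization-of-adaptation argument from the opening paragraph.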

Maybe this kind of minimalistic spatial metaphysics could then boost the
learning of an AGI system in the following way:

1. Model the current situation in terms of objects, their boundaries and the
data-flows between objects. The fact that certain objects belong to certain
classes that can be identified by appearance can be used to build this model
more rapidly and accurately. Also, assuming that the AGI system is located
inside a space object with certain properties might help to identify and
separate objects.

2. Try to actively separate objects of interest, to be able to study them
without interference. By this I mean that the AGI system should try to
obtain awareness of all information-flows going into or coming out of a
particular object.
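Step 2 can be sketched as a boundary-accounting check (my own illustration; the stream list and names are invented): before studying an object in isolation, the learner enumerates every data-stream crossing that object's boundary, since any unobserved crossing stream is a source of interference.

```python
# Sketch: list all (src, dst) streams that enter or leave `obj`.
# The object is ready for isolated study only once every such flow
# is being observed or controlled by the learner.

def boundary_flows(streams, obj):
    """Streams with exactly one endpoint equal to `obj`."""
    return [(s, d) for s, d in streams if (s == obj) != (d == obj)]

streams = [("lamp", "robot"), ("robot", "wheel"), ("wind", "tree")]
print(boundary_flows(streams, "robot"))
# [('lamp', 'robot'), ('robot', 'wheel')]
```

The `("wind", "tree")` stream is excluded because it never touches the object under study, which is precisely what makes it safe to ignore.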

Since these metaphysics are based on 3D space, they can easily be modified
to suit 2D space. Maybe it would be possible to build AGI
prototypes using 2D space biases to lessen the demand for hardware, and then,
when we have gained more experience, shift to a full
3D space metaphysics based on the experience from 2D AGI.

So, my questions now are:

Has anyone else had similar ideas about what biases/metaphysics should be
encoded into an AGI system? What could be good or bad about them?
Does anyone agree that object isolation could be an important
principle for achieving AGI learning?

Also, some specific questions for Ben Goertzel:
I understand Novamente is based on a patternist metaphysics. In what ways is
the patternist metaphysics different from or similar to the metaphysics sketched
here? As I understand it, the patternist metaphysics is based on events.
Would it be possible/easy to model data-flow dependencies between objects
using the Novamente metaphysics?

Also, I remember once seeing a Novamente demonstration where the AGI system
was learning the concept of "object persistency": the fact that hidden objects
remain in hiding until the next time they are shown again (I hope this gives
a correct description of what was shown). But in that case, I guess that
there must have been some initial concepts already encoded into the AGI
system, for example the concept of how dense objects can move through space
objects. In informal words, how would you describe the metaphysics or
biases currently encoded into the Novamente system?

/Robert Wensman

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com
