--- Robert Wensman <[EMAIL PROTECTED]> wrote:
> 1. On the most abstract level, there are objects that are connected to each
> other through data streams. An object can only influence another object if
> they are connected by some data stream. The purpose is to make it easy to
> reason about isolation and connection. If the AGI system can isolate an
> object from other objects, the idea is that it can be studied without
> involving the complexity of the rest of the world.
> 
> 2. We can also introduce the bias that complex objects might consist of
> sub-objects, or that there exist interface objects that stand as a barrier
> between two groups of objects.

These are fundamental properties of complex systems that evolve through
incremental update.  They lie on the boundary between stability and chaos,
i.e. they have the discrete equivalent of a Lyapunov exponent of 0.  These
systems have the property that most updates result in small changes in
behavior, but occasionally result in catastrophic changes.  This is achieved
by restricting the internal interconnectivity in a hierarchical manner.
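The stability claim can be illustrated with a small simulation (a hedged sketch of my own, not part of the original post): flip one gate of a random Boolean network and measure how far the damage spreads.  With few connections per gate the perturbation tends to stay local; with many connections it spreads through the whole system.  All names and parameter choices here are mine.

```python
import random

def random_network(n, k, rng):
    """A random Boolean network: each of n gates reads k randomly
    chosen gates through a random truth table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous update of every gate."""
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        new.append(tables[i][idx])
    return new

def damage(n, k, steps=50, trials=20, seed=0):
    """Average Hamming distance between a trajectory and a copy
    that starts with a single gate flipped."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = random_network(n, k, rng)
        a = [rng.randint(0, 1) for _ in range(n)]
        b = list(a)
        b[0] ^= 1                      # perturb one gate
        for _ in range(steps):
            a = step(a, inputs, tables)
            b = step(b, inputs, tables)
        total += sum(x != y for x, y in zip(a, b))
    return total / trials

print(damage(100, 2))   # near the critical point: damage stays small
print(damage(100, 5))   # well into the chaotic regime: damage spreads
```

The comparison between k=2 and k=5 is the point: below the transition most single-gate changes have small effects, which is the "mostly small, occasionally catastrophic" update behavior described above.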

Stuart Kauffman [1] studied systems of this type, such as evolution, software,
and gene regulatory networks in humans.  He found that sets of randomly
connected logic gates (with feedback) transition from stable to chaotic as the
average number of connections per gate increases from 2 to 3.  At the
transition, the number of attractors in an n-gate network is about sqrt(n). 
Likewise, the number of cell types in the human body is about the square root
of the number of genes.
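Kauffman's attractor count can be checked on toy networks by exhaustive enumeration.  The following is an illustrative sketch (function names and parameters are mine, not Kauffman's): every one of the 2^n states is followed to its cycle, and distinct cycles are counted.  Note that a single small network will not match the sqrt(n) figure exactly; it is a statistic over many random networks near the K=2 transition.

```python
import random

def make_net(n, k, rng):
    """One random Boolean network: each of n gates reads k random
    gates through a random truth table."""
    return [(rng.sample(range(n), k),
             [rng.randint(0, 1) for _ in range(2 ** k)])
            for _ in range(n)]

def step(state, net):
    """One synchronous update; state is packed into an n-bit integer."""
    out = 0
    for i, (ins, table) in enumerate(net):
        idx = 0
        for j in ins:
            idx = (idx << 1) | ((state >> j) & 1)
        out |= table[idx] << i
    return out

def count_attractors(n, k, seed=0):
    """Follow every one of the 2^n states to its cycle and count
    the distinct attractors (feasible only for small n)."""
    net = make_net(n, k, random.Random(seed))
    label, count = {}, 0          # state -> attractor id
    for s0 in range(2 ** n):
        path, seen, s = [], set(), s0
        while s not in label and s not in seen:
            seen.add(s)
            path.append(s)
            s = step(s, net)
        if s in label:            # joined a known basin
            a = label[s]
        else:                     # closed a new cycle
            count += 1
            a = count
        for p in path:
            label[p] = a
    return count

print(count_attractors(12, 2))    # typically a handful for a 12-gate net
```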

A characteristic signature of such systems and their outputs is nonstationary
behavior: autocorrelation falls off inversely with time.  The same signature
appears in the models of most good adaptive data compressors (an AI-complete
problem) and in learning in animals.  In both classical and operant
conditioning, the learning rate is inversely proportional to the time delay
between the conditioned and unconditioned stimulus, or between the action and
its reinforcement.
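The compression connection can be sketched with a minimal adaptive bit predictor whose update rate decays as 1/n, so the model's memory of any single observation fades inversely with its age, which is exactly the nonstationary behavior described above.  This is a toy illustration of the general idea, not code from any particular compressor.

```python
def adaptive_predictor(bits):
    """Predict each bit as the running frequency of the bits seen so
    far, starting from a neutral prior of 0.5.  The update weight
    1/(n+1) makes each observation's influence decay as 1/time."""
    p, n = 0.5, 0
    preds = []
    for bit in bits:
        preds.append(p)            # prediction made before seeing bit
        n += 1
        p += (bit - p) / (n + 1)   # learning rate shrinks as 1/n
    return preds

# A source whose statistics switch mid-stream (nonstationary input):
stream = [1] * 50 + [0] * 50
preds = adaptive_predictor(stream)
print(preds[49])   # close to 1 after a long run of ones
print(preds[99])   # pulled back toward 0.5 by the run of zeros
```

A real compressor would shrink the learning rate more slowly, or mix predictors over several time scales, precisely because its input is nonstationary; the 1/n schedule shown here is the simplest case.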

So I would say that most AI systems already have this bias, or should have.

1. Kauffman, Stuart A., "Antichaos and Adaptation", Scientific American, Aug.
1991, p. 64.


-- Matt Mahoney, [EMAIL PROTECTED]
