Ben Goertzel wrote:
Hi Richard,
...
What I mean by that is that the hypergraph idea is already locking down
many KR assumptions: the nodes are not open to multiple choices for
internal active structure, they interact with other nodes in one
particular choice of interaction space, relationships between nodes are
encoded with relatively simple probabilistic clusters that have direct,
high level semantics (IIRC), and so on. As far as flexible formats are
concerned, this is a thoroughly collapsed wave function. The remaining
flexibility is minimal.
My strong feeling is that the neural net structure in the brain is
ALSO locking down many KR assumptions.... I think you are vastly
overestimating the amount of flexibility present in the brain's
implicit approach to KR...
But, since none of us knows how the brain does KR, we can't really do
much besides opine here...
...
Ben
But a neural net is CAPABLE of representing a generalized n-dimensional
space. If you don't impose some limits, then learning either doesn't
happen, or happens quite slowly. However, the constraints can be
probabilistic...and that will suffice. If the net can alter its
weights, then once it starts learning, it can adjust its learning to
match the system being learned.
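
To make "probabilistic constraints" concrete, here's a minimal sketch
(my own illustration, not anyone's actual system): a Gaussian prior on
the weights acts as a soft constraint, so learning is biased rather
than hard-limited, and the weights still adapt to whatever the data
supports:

    # Soft (probabilistic) constraint on learning: a Gaussian prior on
    # all weights, equivalent to L2 regularization, instead of hard
    # architectural limits. Everything here is illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))        # 100 samples, 20 input features
    true_w = np.zeros(20)
    true_w[:3] = 1.0                      # only a few weights really matter
    y = X @ true_w + 0.1 * rng.normal(size=100)

    w = np.zeros(20)
    lam = 1.0                             # prior strength (the soft constraint)
    lr = 0.01
    for _ in range(500):
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        w -= lr * grad                    # learning adjusts the weights freely

    print(np.round(w, 2))                 # useful weights survive; the rest shrink
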
HOWEVER: All natural sensory systems are either 2+1 or 1+1 in (spatial)
dimensionality. (For this purpose I'm counting each ear and each eye
separately.) A result of this is that all learning happens by combining
2+1 or 1+1 dimensional inputs. Now, those are spatial dimensions;
color, timbre, etc. are other, non-spatial dimensions. Learning in
humans is intimately involved with techniques for combining these
inputs...and that IS done in an n-dimensional framework...probably not
usually spatial, though I know of no proof of this.
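
For concreteness, a toy sketch of what combining 2+1 and 1+1
dimensional inputs into a single n-dimensional framework might look
like (the sizes and the pooling choices are pure assumptions on my
part):

    # Combine a 2+1-D stream (an image patch over time) and a 1+1-D
    # stream (an audio-like trace over time) into one feature space.
    import numpy as np

    rng = np.random.default_rng(1)
    video = rng.normal(size=(8, 16, 16))   # 8 time steps of a 16x16 patch: 2+1 D
    audio = rng.normal(size=(8, 64))       # 8 time steps of 64 samples:   1+1 D

    v_feat = video.mean(axis=0).ravel()    # 256-D summary of the visual stream
    a_feat = audio.mean(axis=0)            # 64-D summary of the auditory stream
    joint = np.concatenate([v_feat, a_feat])   # one 320-D combined space

    print(joint.shape)                     # (320,) -- learning operates here
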
Thus: The KR system is NOT 4-D (i.e. 3+1). Only portions of it are.
Note that it's much easier to visualize in 2+1 dimensions, or even in
2+0 (a static image) and with color only used as an annotation on the
imagery. This requires less processing...and this implies that the KR
system "slims down" the inputs whenever feasible...but also that it has
some way of "inflating" the imagery when needed. This implies that MOST
learning happens in the "slimmed down" format (where processing is
relatively cheap). Symbols, of course, are an even slimmer form, but
one may doubt that any mapping from internal symbols to I/O has been
implemented in an efficient format. Large scale use of symbols is
evolutionarily speaking quite recent, and probably hasn't been
optimized. Besides, it's often valuable to issue misleading symbols,
so symbolic messages that come from outside can't really be trusted
(see camouflage).
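
As a toy illustration of the "slim down"/"inflate" idea, here's a
sketch using plain PCA as a stand-in for whatever compression the
brain actually uses (that choice is my assumption, not a claim about
neural mechanisms):

    # "Slim down" imagery to a cheap low-dimensional code for learning,
    # then "inflate" it back when detail is needed.
    import numpy as np

    rng = np.random.default_rng(2)
    images = rng.normal(size=(200, 256))     # 200 flattened 16x16 "images"
    mean = images.mean(axis=0)
    U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)

    k = 16                                   # the "slimmed down" code size
    codes = (images - mean) @ Vt[:k].T       # cheap 16-D codes for learning
    inflated = codes @ Vt[:k] + mean         # re-inflate when detail is needed

    err = np.abs(images - inflated).mean()
    print(f"256-D -> {k}-D codes, mean reconstruction error {err:.3f}")
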