On Thu, 2002-10-31 at 11:24, Ben Goertzel wrote:
> 
> But, a connection topology that tends to have a lot of local connections is
> not the same as one that really maps a 2D or 3D space, with the precision
> desired for processing spatial input data...


For us, even though the accumulated data structure hangs together really
nicely in 3D (though not in 2D), the geometry is definitely not cubic
and any effective coordinate system would reflect this.

However, I would make the point that the "comfortable" geometry of the
network is largely immaterial to how well it handles 2D and 3D spatial
processing.  Mapping n-dimensional spaces onto a one-dimensional space
is something computers already do, and do effortlessly.
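
As a rough sketch of what I mean (the dimensions and names below are
just illustrative, not anything from our system), flattening a 3D
coordinate into a 1D index takes a couple of lines of Python:

# Minimal sketch: a "3D" space backed by nothing more than a flat,
# one-dimensional address space, which is how linear memory works anyway.

def to_1d(x, y, z, width, height):
    """Row-major mapping of a 3D coordinate onto a 1D offset."""
    return x + width * (y + height * z)

def from_1d(index, width, height):
    """Invert the mapping back to (x, y, z)."""
    x = index % width
    y = (index // width) % height
    z = index // (width * height)
    return x, y, z

# A 10x10x10 space lives comfortably in a flat list of 1000 cells.
assert from_1d(to_1d(3, 7, 2, 10, 10), 10, 10) == (3, 7, 2)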

But that ignores another point worth mentioning, which is that most
sensory data the human brain works with is one-dimensional even though
we don't think of it that way.  Audio, for example, is perceived as a
one-dimensional signal that is analyzed for spatial cues at a higher
level.  You don't need a 3D model of an audio space if you get the
exact same value by working with simple vectors discovered from a 1D
data stream.  I think there is something wrong with trying to expand a
bit of information far beyond its actual content in this context.  In
fact, if you look at the format of the audio data that the brain
actually receives, "dimensionality" is a trivial piece of
pseudo-meta-data extracted from the stream.  What you end up with is
multiple layers of one-dimensional pattern data that effectively let
you exist in a 3D space that isn't really implied by the data stream.
A machine can behave as though it is aurally aware in 3D even though
its perception of the world is strictly along a single axis, with data
structures to match.  Or at least it will as long as there is value in
learning behaviors that treat certain vector patterns in certain ways.
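
To make the "spatial cues at a higher level" point concrete, here is a
toy sketch of pulling one such cue, an interaural time difference, out
of plain 1D sample streams with a cross-correlation.  This is my own
illustration, not a claim about how any particular system does it; the
signal, sample rate, and delay are made up.

import numpy as np

fs = 44100                      # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)  # 50 ms of signal
source = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)

true_delay = 20                 # samples; ~0.45 ms, a plausible ITD
left = source
right = np.concatenate([np.zeros(true_delay), source])[: source.size]

# Cross-correlate the two 1D streams; the peak offset is the spatial cue.
corr = np.correlate(right, left, mode="full")
lag = int(np.argmax(corr)) - (left.size - 1)

print(f"estimated interaural delay: {lag} samples "
      f"({lag / fs * 1000:.2f} ms)")   # ~20 samples, ~0.45 ms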

Vision is more complicated, but even that is more like a
1.5-dimensional data stream when you get down to it.  It is definitely
more difficult to analyze, though, and it is not my area of expertise.

The lack of true 3-dimensionality in any of our senses is the primary
reason it is so easy to fool those senses.  I think it is unnecessary
to fully map low-dimensionality data into a sophisticated and
resource-consuming 3D space in order to behave effectively as though
you are fully aware of your 3D surroundings, particularly since humans
don't do this in the sense I think is commonly believed.  The space is
inferred from a relatively small collection of vectors automatically
stripped from a low-dimensionality data stream, plus some simple
processing on those vectors.  Hearing, which I actually know quite a
bit about, works like this pretty much in its entirety.
"Dimensionality" is a behavior learned from relatively simple vector
patterns.  Naturally, this gets more interesting when you throw in
multiple senses working together and add feedback.
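
And the "simple processing" step really is simple.  Continuing the toy
example above, turning a delay cue into an approximate direction only
needs the basic far-field relation ITD = d * sin(azimuth) / c; the head
width and the ITD value below are assumptions for illustration.

import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_WIDTH = 0.18        # m, rough ear-to-ear distance

def azimuth_from_itd(itd_seconds):
    """Estimate source azimuth (radians) from an interaural time difference."""
    x = itd_seconds * SPEED_OF_SOUND / HEAD_WIDTH
    return math.asin(max(-1.0, min(1.0, x)))   # clamp against noisy cues

itd = 20 / 44100.0   # the ~0.45 ms delay from the earlier sketch
print(f"estimated azimuth: {math.degrees(azimuth_from_itd(itd)):.1f} degrees")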

It is also worth noting that the fact that dimensionality is learned
is the reason it is hard to fake virtual 3D environments well for the
general population but easy to fake them very convincingly for a
specific person; the vector patterns each person learns are slightly
different.  Once a person's particular dimensionality-perception
profile is constructed, the software can produce fake environments
that are indistinguishable from the real thing for that person.  If
you A/B the standard profile against your personalized profile, the
difference in perceived dimensionality is startling, even though both
are nominally the same source material, processed to provide 3D
perception.
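
For what it's worth, the A/B I'm describing amounts to rendering the
same mono source through two different left/right filter pairs,
something like the sketch below.  The "profiles" here are placeholder
arrays standing in for measured per-listener responses, not real data.

import numpy as np

def render_binaural(mono, profile):
    """Convolve a 1D mono stream with a profile's left/right responses."""
    left = np.convolve(mono, profile["left_ir"])
    right = np.convolve(mono, profile["right_ir"])
    return np.stack([left, right])   # two-channel output

# Placeholder profiles; in practice these would be measured per listener.
generic_profile = {"left_ir": np.array([1.0, 0.30, 0.05]),
                   "right_ir": np.array([0.50, 0.60, 0.20])}
personal_profile = {"left_ir": np.array([1.0, 0.25, 0.10]),
                    "right_ir": np.array([0.45, 0.65, 0.15])}

mono_source = np.random.default_rng(0).standard_normal(44100)  # 1 s of noise

version_a = render_binaural(mono_source, generic_profile)
version_b = render_binaural(mono_source, personal_profile)
# Listening to version_a and version_b back to back is the A/B described
# above: identical source material, different perceived dimensionality.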


Cheers,

-James Rogers
 [EMAIL PROTECTED]

