YKY (Yan King Yin) wrote:
> Hi John,
>
> Re your idea that there should be an "intermediate-level" representation:
>
> 1. Obviously, we do not currently know how the brain stores that
> representation. Things get insanely complex as neuroscientists go
> higher up the visual pathways from the primary visual cortex.
>
> 2. I advocate using a symbolic / logical representation for the 3D
> (in fact, 4D) space. There might be some misunderstanding here,
> because we tend to think the sensory 4D space is *sub*symbolic. This
> is actually just a matter of terminology. For example, if "block A is
> on top of block B" then I may put a symbolic link labeled
> "is_on_top_of" between the two nodes representing A and B. Is such a
> link symbolic or subsymbolic? Nodes and links such as "John", "loves"
> and "Mary" are clearly symbolic because they correspond to
> natural-language words. But in a logical representation there can be
> many nodes/links that do NOT map directly to words.
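To make that concrete, here is a minimal sketch of the kind of
node-and-link store being described; the class and relation names are
invented for illustration and are not any particular system's actual
API:

    # Minimal labeled-link graph; names are illustrative only.
    class Graph:
        def __init__(self):
            self.links = set()                # (label, source, target)

        def add_link(self, label, src, dst):
            self.links.add((label, src, dst))

        def query(self, label, src):
            return {d for (l, s, d) in self.links if l == label and s == src}

    g = Graph()
    g.add_link("is_on_top_of", "A", "B")  # "block A is on top of block B"
    g.add_link("loves", "John", "Mary")   # a clearly word-level link
    print(g.query("is_on_top_of", "A"))   # -> {'B'}

Structurally the two links are identical; whether one feels "symbolic"
depends only on whether its label happens to match a natural-language
word.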
> The point here is that a logical representation is *sufficient* to
> model a physical-world facsimile. If you disagree with this, can you
> give an example of something that cannot be represented in the
> logical way?
Yes, of course it's sufficient in principle, but it's not adequately
efficient!

Accurately representing a physical scene in all its details, using
explicit formal logic, would occupy a huge amount of memory; and, even
more critically, it would render many useful inferences about physical
objects extremely inefficient...
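A back-of-the-envelope illustration of the blow-up (the object and
predicate counts are made up for the example): storing coordinates
grows linearly in the number of objects, while materializing every
pairwise spatial relation as an explicit ground atom grows
quadratically.

    # Illustrative numbers only: coordinates vs. explicit ground atoms.
    n_objects   = 10_000
    n_relations = 8                     # hypothetical spatial predicates

    coords = n_objects * 3              # one (x, y, z) triple per object
    atoms  = n_relations * n_objects * (n_objects - 1)   # ordered pairs

    print(coords)   # 30,000 numbers
    print(atoms)    # 799,920,000 ground atoms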
> 2. To help you better understand the issue here, notice that a
> fine-grained representation would eventually need to become
> coarse-grained -- information must be lost along the way, otherwise
> memory would run out within hours of sensory perception.
> The logical representation is precisely such a coarse-grained one.
> Technically, as you go to finer resolutions in the logical
> representation, the elements take on a more "subsymbolic" flavor.
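A toy illustration of that coarse-graining, assuming a simple
occupancy grid (the grid and pooling scheme are invented for the
example):

    # An 8x8 occupancy grid max-pooled down to 2x2. Information is
    # deliberately lost; only "is anything in this region?" survives.
    fine = [[(x + y) % 7 == 0 for x in range(8)] for y in range(8)]

    def coarsen(grid, factor):
        n = len(grid) // factor
        return [[any(grid[y * factor + dy][x * factor + dx]
                     for dy in range(factor) for dx in range(factor))
                 for x in range(n)] for y in range(n)]

    print(coarsen(fine, 4))   # 64 cells reduced to 4: [[True, True], [True, True]]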
> 3. Can you name certain features of your representation that are
> different from a logical one?
In the case of Novamente, here is one example: a recognizer for
"chairs" (in the sense of the pieces of furniture we often sit on).

A Novamente system contains logical knowledge about chairs, but also
contains "little programs" that evaluate collections of percepts and
decide whether such a collection depicts a chair or not.

These programs may combine arithmetic and logic operations, and will
generally be learned via evolutionary or greedy algorithms, not by
logical reasoning.
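A hypothetical sketch of what one such "little program" might look
like; the percept features and thresholds are invented and are not
Novamente's actual representation. In practice a program of this shape
would itself be learned rather than hand-written:

    # Invented percept-evaluating program mixing arithmetic and logic.
    def looks_like_chair(percepts):
        # percepts: simple measurements extracted from a scene region
        seat_height = percepts["top_surface_height"]      # meters
        supports    = percepts["vertical_support_count"]
        has_back    = percepts["rear_vertical_surface"]   # bool
        return (0.3 < seat_height < 0.7) and (supports >= 3 or has_back)

    print(looks_like_chair({"top_surface_height": 0.45,
                            "vertical_support_count": 4,
                            "rear_vertical_surface": True}))   # True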
This example highlights one important point: logic is often very
inefficient at handling QUANTITATIVE information. Of course it can do
so -- after all, calculus and the like can ultimately be formalized
fully in terms of mathematical logic; but these formalisms are
cumbersome, and they are not what you actually use to do calculus...

And perception and action have a lot to do with managing large masses
of quantitative information.
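The same gap shows up even in trivial arithmetic. Compare adding two
small numbers directly with adding them via a Peano-style logical
encoding (numbers as nested successor terms); both are correct, but
one is cumbersome, which is the point:

    # 3 + 4 directly vs. via a Peano-style symbolic encoding.
    def peano(n):                  # 3 -> ('S', ('S', ('S', 'Z')))
        return 'Z' if n == 0 else ('S', peano(n - 1))

    def plus(a, b):                # plus(Z, b) = b; plus(S(a), b) = S(plus(a, b))
        return b if a == 'Z' else ('S', plus(a[1], b))

    def to_int(t):
        return 0 if t == 'Z' else 1 + to_int(t[1])

    print(3 + 4)                             # 7, one machine operation
    print(to_int(plus(peano(3), peano(4))))  # 7, via term rewriting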
IMO, a key aspect of AGI is having effective means for the
interoperation of logical and nonlogical knowledge.

In the brain, I believe, logical inference and nonlogical pattern
recognition are achieved via different connectivity patterns. Both are
carried out via the same long-term potentiation and
activation-spreading dynamics, but:

-- logic has to do with coordinated potentiation of bundles of
synapses between cortical columns

-- nonlogical pattern recognition has more to do with hierarchical
dynamics, as outlined by Mountcastle, Hawkins and many others
In Novamente, the logic module is in principle able to take in, and
reason about, patterns recognized nonlogically (e.g. using the laws of
algebra to reason about quantitative patterns), but this is not always
a useful expenditure of resources...
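A minimal sketch of that kind of hand-off, with invented names (not
Novamente's API): a nonlogical, quantitative recognizer emits a
pattern, which is then reified as an atom that a logical rule can
chain on:

    # All names invented for illustration.
    import statistics

    def recognize_trend(samples):
        """Nonlogical, quantitative pattern detector (toy slope)."""
        return statistics.mean(b - a for a, b in zip(samples, samples[1:]))

    facts = set()                              # the logic side's atom store
    if recognize_trend([1.0, 1.9, 3.1, 4.0]) > 0.5:
        facts.add(("increasing", "sensor_7"))  # reify the pattern as an atom

    # A logical rule: increasing(X) & bounded(X) -> will_saturate(X)
    facts.add(("bounded", "sensor_7"))
    if {("increasing", "sensor_7"), ("bounded", "sensor_7")} <= facts:
        facts.add(("will_saturate", "sensor_7"))
    print(facts)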
-- Ben G