Richard Loosemore wrote:
Ben Goertzel wrote:
It's pretty clear that humans don't run FOPC as native code, but
that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns
is essentially equivalent to basic probabilistic term logic.
Lower-level common-sense inferencing of the Clyde-->elephant-->gray
type falls out of the representations and the associative operations.
I think it falls out of the logic of spike-timing-dependent long-term
potentiation of bundles of synapses between
cortical columns...
The original suggestion was (IIANM) that humans don't run FOPC as a
native code *at the level of symbols and concepts* (i.e. the
concept-stuff that we humans can talk about because we have
introspective access at that level of our systems).
Now, if you are going to claim that spike-timing-dependent LTP between
columns is where some probabilistic term logic is happening ON
SYMBOLS, then what you have to do is buy into a story about where
symbols are represented and how. I am not clear about whether you are
suggesting that the symbols are represented at:
(1) the column level, or
(2) the neuron level, or
(3) the dendritic branch level, or
(4) the synapse level, or (perhaps)
(5) the spike-train level (i.e. spike trains encode symbol patterns).
If you think that the logical machinery is visible, can you say which
of these levels is the one where you see it?
None of the above -- at least not exactly. I think that symbols are
probably represented, in the brain, as dynamical patterns in the
neuronal network. Not "strange attractors" exactly -- more like
"strange transients", which behave like strange attractors but only for
a certain period of time (possibly related to Mikhail Zak's "terminal
attractors"). However, I think that in some cases an individual column
(or more rarely, an individual neuron) can play a key role in one of
these symbol-embodying strange-transients.
So, for example, suppose Columns C1, C2, C3 are closely associated with
symbol-embodying strange transients T1, T2, T3.
Suppose there are "highly conductive" synaptic bundles going in the
directions
C1 --> C2
C2 --> C3
Then, Hebbian learning may result in the potentiation of the synaptic
bundle going
C1 --> C3
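The potentiation story above can be sketched as a toy simulation. This is a minimal illustrative sketch, not a model of real cortical dynamics: the column names C1..C3 come from the example, while the weight values, the `THRESHOLD` and `LEARNING_RATE` constants, and the `propagate`/`hebbian_step` helpers are all assumptions made up for the sketch.

```python
# Directed "synaptic bundle" weights between columns (illustrative values).
weights = {
    ("C1", "C2"): 0.9,   # highly conductive bundle C1 --> C2
    ("C2", "C3"): 0.9,   # highly conductive bundle C2 --> C3
    ("C1", "C3"): 0.1,   # initially weak direct bundle
}

THRESHOLD = 0.5      # assumed: a bundle this strong propagates activity
LEARNING_RATE = 0.2  # assumed Hebbian step size

def propagate(active):
    """Spread activity along bundles whose weight exceeds THRESHOLD."""
    fired = set(active)
    changed = True
    while changed:
        changed = False
        for (src, dst), w in weights.items():
            if src in fired and dst not in fired and w > THRESHOLD:
                fired.add(dst)
                changed = True
    return fired

def hebbian_step(fired):
    """Potentiate every bundle whose endpoints both fired
    ("fire together, wire together")."""
    for (src, dst), w in weights.items():
        if src in fired and dst in fired:
            weights[(src, dst)] = w + LEARNING_RATE * (1.0 - w)

# Repeatedly activating C1 drives C2 and then C3 via the chain, and
# Hebbian learning potentiates the direct C1 --> C3 bundle as a side
# effect of their co-activation.
for _ in range(10):
    hebbian_step(propagate({"C1"}))

print(round(weights[("C1", "C3")], 3))  # → 0.903
```

Note that the C1 --> C3 bundle strengthens even though it never carried the activity itself; co-activation alone drives the update, which is exactly the transitive-shortcut effect described above.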
Now, we may analyze the relationships between the strange transients T1,
T2, T3 using Markov chains, where a high-weight "link" between T1 and
T2, for example, means that P(T2|T1) is large.
Then, the above Hebbian learning example will lead to the heuristic
inference
P(T2 | T1) is large
P(T3 | T2) is large
|-
P(T3 | T1) is large
But this is probabilistic term logic deduction (and comes with specific
quantitative formulas that I am not giving here).
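The post withholds the exact quantitative formulas, but one standard independence-based heuristic of this general shape (the kind used in probabilistic term logic deduction) can be sketched as follows. The function name `deduce` and the example probability values are assumptions for illustration only.

```python
def deduce(p21, p32, p2, p3):
    """Estimate P(T3|T1) from P(T2|T1)=p21, P(T3|T2)=p32, and the
    marginals P(T2)=p2, P(T3)=p3, assuming T1 and T3 are conditionally
    independent given T2 (and given not-T2)."""
    # P(T3 | not T2), recovered from the marginals:
    p3_not2 = (p3 - p32 * p2) / (1.0 - p2)
    # Total probability over whether the chain passes through T2:
    return p21 * p32 + (1.0 - p21) * p3_not2

# Strong chain links yield a strong inferred direct link:
print(round(deduce(0.9, 0.9, 0.5, 0.5), 3))  # → 0.82
```

So when P(T2|T1) and P(T3|T2) are both large, the formula yields a large P(T3|T1), matching the heuristic inference above.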
One can make similar analyses for other probabilistic logic rules.
Basically, one can ground probabilistic inference, defined via Markov
transition probabilities between the neural network's strange
transients, in Hebbian learning on synaptic bundles between cortical
columns.
And that is (in very sketchy form, obviously) part of my hypothesis
about how the brain may ground symbolic logic in neurodynamics.
The subtler part of my hypothesis attempts to explain how higher-order
functions and quantified logical relationships may be grounded in
neurodynamics. But I don't really want to post that on a list before
publishing it formally in a scientific journal, as it's a "bigger" and
also more complex idea.
This is not how Novamente works -- Novamente is not a neural net
architecture. However, Novamente does include some similar ideas. In
Novamente lingo, the "strange transients" mentioned above are called
"maps", and the role of the Hebbian learning mentioned above is played
in NM by explicit probabilistic term logic.
So, according to my view,
In the brain: lower-level Hebbian learning on bundles of links between
neuronal clusters leads to implicit probabilistic inference on
strange transients representing concepts.
In Novamente: explicit heuristic/probabilistic inference on links between
nodes in NM's hypergraph datastructure leads to implicit probabilistic
inference on strange transients (called "maps") representing concepts.
So, the Novamente approach seeks to retain the
creativity/fluidity-supportive emergence of the brain's approach, while
still utilizing a form of probabilistic logic rather than neuron
emulations on the lower level. This subtlety causes many people to
misunderstand the Novamente architecture, because they only think about
the lower level rather than the emergent, "map" level. In terms of our
practical Novamente work we have not done much with the map level yet,
but we know this is going to be the crux of the system's AGI capability.
-- Ben
As I see it, ALL of these choices have their problems. In other
words, if the machinery of logical reasoning is actually visible to
you in the naked hardware at any of these levels, I reckon that you
must then commit to some description of how symbols are implemented,
and I think all of them look like bad news.
THAT is why, each time the subject is mentioned, I pull a
sucking-on-lemons face and start bad-mouthing the neuroscientists. ;-)
I don't mind there being some logic-equivalent machinery down there,
but I think it would be strictly sub-cognitive, and not relevant to
normal human reasoning at all... and what I find frustrating is
that (some of) the people who talk about it seem to think that they
only have to find *something* in the neural hardware that can be
mapped onto *something* like symbol-manipulation/logical reasoning,
and they think they are halfway home and dry, without stopping to
consider the other implications of the symbols being encoded at that
hardware-dependent level. I haven't seen any neuroscientists who talk
that way show any indication that they have a clue that there are even
problems with it, let alone that they have good answers to those
problems.
In other words, I don't think I buy it.
Richard Loosemore.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303