Thx for your response, Ben (and for the many other contributions on the
list!)

Re the Hebbian neural net – I assume you could calculate an eigendecomposition or some other heuristic approximation (to matrix**n) to speed up the calculations. However, the matrix changes dynamically every time your AGI learns. Also, the evidence is that the mind switches quite easily between different ‘islands of stability’, so small changes in weights or inputs are likely to produce quite different eigenvalues – if indeed it converges at all. Hence I’d venture to guess that it may be computationally less expensive to iterate than to calculate a reduced matrix each time.

Despite this, I’d personally still prefer a spreading-activation network (not necessarily Hebbian), especially for the ‘middle level’ (though you have some of that in your Novamente architecture as well – for your patterns). For my top level I also favour a purely symbolic approach, though a much less formal one than Novamente/NARS/Cyc, mainly because I’m not smart and mathematically skilled enough :)  Also, I think it’s better for different people to try out different approaches, so as to explore the AGI solution space a bit wider.
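
To make that trade-off concrete, here is a toy numpy sketch of the two routes. The sizes, the random weight matrix and the row normalisation are my own illustrative assumptions, not anyone's actual architecture:

import numpy as np

n = 1000                                 # hypothetical number of nodes
W = np.random.rand(n, n)                 # hypothetical Hebbian weight matrix
W /= W.sum(axis=1, keepdims=True)        # row-normalise so activation doesn't blow up
x = np.random.rand(n)                    # current activation vector
k = 50                                   # number of spreading steps

# (a) "reduced matrix" route: diagonalise once, after which W**k is cheap...
# ...but the O(n^3) eigendecomposition has to be redone after every weight update.
vals, vecs = np.linalg.eig(W)
Wk = (vecs @ np.diag(vals**k) @ np.linalg.inv(vecs)).real
y_reduced = Wk @ x

# (b) plain iteration: just spread activation k times, O(k * n^2) per query,
# and it doesn't care that W changed since the last learning step.
y_iter = x.copy()
for _ in range(k):
    y_iter = W @ y_iter

The point being that route (a) only pays off if the weights stay fixed long enough to amortise the decomposition – which, with continual learning, they won't.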

PS: I've always considered current theorem-proving approaches to be narrow AI, or alternatively one of many specialized modules within an AGI, though obviously a computer AGI could/would be much more efficient at theorem-proving than humans. And maths, being abstract, would indeed be one of the areas in which any computer AGI should excel (it should be one of her main hobbies :)



>>> "Benjamin Goertzel" <[EMAIL PROTECTED]> 06/03/07 3:02 PM >>>
> of the 3 different AGI approaches you entertained, you went
> with Novamente instead of the Hebbian neural net (and the theorem-proving
> one)... us scruffies would like to know... is it just your mathematical
> bias/background or something more fundamental?
The Hebbian neural net approach seemed like it would be dramatically more
computationally expensive, requiring a whole bundle of synapses to do what
we can do with a single Novamente link.  I.e., it's less natural for the
von Neumann infrastructure we are stuck with at the moment.  And, once you
get beyond simple stuff, we don't know how the brain works so we need to
invent stuff anyway, even in that plan (e.g. I have a scheme for doing
higher-order logic in neural nets that involves feeding a dimensionally-reduced
version of a neural net's connection matrix to the same network as an input
vector ... but tuning that would take a lot of work, and there is no
neuroscience to guide such work, at this point...)
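
(Purely for concreteness, here is roughly how I picture the scheme Ben sketches above. The choice of truncated SVD as the dimensional reduction, and all the names and sizes, are my own guesses – none of this is specified in Ben's description:)

import numpy as np

n_nodes, n_ext, k = 200, 40, 10                   # hypothetical sizes; k = reduced dimension
W    = np.random.randn(n_nodes, n_nodes) * 0.1    # the net's own connection matrix
W_in = np.random.randn(n_nodes, n_ext + k) * 0.1  # input weights (external input + reduced self-description)

def reduce_connections(W, k):
    # One possible "dimensional reduction": keep the k largest singular
    # values of the connection matrix as a crude summary of the wiring.
    s = np.linalg.svd(W, compute_uv=False)
    return s[:k]

def step(activation, external_input):
    # Ordinary recurrent update, except the reduced connection matrix is
    # appended to the external input, so the net is fed a compressed
    # description of its own connectivity.
    x = np.concatenate([external_input, reduce_connections(W, k)])
    return np.tanh(W @ activation + W_in @ x)

a = np.zeros(n_nodes)
a = step(a, np.random.randn(n_ext))

Whether a net could actually learn to do anything useful with such a compressed self-description is, I take it, exactly the tuning problem Ben mentions.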
