I read the book a couple years ago and wrote this review of it:
http://www.goertzel.org/dynapsyc/2004/OnBiologicalAndDigitalIntelligence.htm
Last year I participated in a discussion group at the NIH, which was
focused on Hawkins' ideas and involved a number of neuroscientists.
Their consensus was sorta like what Shane Legg said in his recent post
to this list: very little of Hawkins' neuroscience is actually new;
basically all of it is stuff that is either known by everyone in
the field or strongly suspected by many in the field.
As for the actual algorithm used in Numenta's software, as Durk says,
it's a nice approach to computational visual perception, but the basic
design is pretty simplistic and the current implementation even more so
... and the idea that this simple little hierarchical visual-perception
algorithm somehow encompasses the key to human intelligence seems rather
overblown to me.
Sure, you could argue that the **principles** involved in this algorithm
are key to intelligence overall --- principles like hierarchical pattern
recognition, interplay between temporal and spatial pattern recognition,
probabilistic learning, etc. But the way these principles are used in
the current Numenta software design is certainly not adequate to explain
language learning, motor learning, creative invention, etc. etc. etc. --
not even if the code and design are generalized considerably ... this
Bayes-net-meets-pyramidal-classifier-net approach is just not that
generalizable, not that broad in its implications...
There is a leap made in Hawkins book, from
A) intelligence is centered on memory and prediction
to
B) a particular sort of memory and prediction algorithm/architecture,
which is at most a decent model of visual processing (not general
cognition/perception/action)
and this leap is not very well justified and sweeps 95% of the
complexity of human and machine cognition under the rug...
So, I think it's interesting work that has been very clearly presented,
but I don't think Hawkins has yet met the challenge of creating an AGI
design that manifests his high-level philosophy of mind
(memory/prediction) in a sufficiently general yet computationally
tractable way.
History shows that narrow AI is a lot easier, and making a nice vision
processing system inspired by philosophical and neural principles, is
still squarely in the category of narrow AI.
-- Ben G
Kingma, D.P. wrote:
Dear Aki Iskandar
My 2 cents.
I haven't read the book "On Intelligence", but from the book flowed a
"proof-of-concept" program which I analyzed thoroughly. I read the
source of the program, analyzed Dileep George's and Hawkins' papers,
etc.
The idea of the proof-of-concept is basically a combination of a
Bayesian network and a pyramid classifier network. Nodes in the
pyramid are extremely simplistic and store co-occurring temporal and
spatial patterns. Once the pyramid has completed learning, it is
'transformed' into a Bayesian belief-propagation network using
conditional probability matrices.
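To make the scheme concrete, here is a minimal sketch in Python of what a single node of that kind might do: memorize distinct spatial patterns, count which pattern follows which in time, and normalize the counts into a conditional probability matrix. The class and method names are my own invention for illustration, not Numenta's actual code or API.

```python
class HTMNodeSketch:
    """Illustrative sketch (not Numenta's actual implementation) of a node
    that stores co-occurring patterns and derives a conditional
    probability matrix P(next pattern | current pattern) from them."""

    def __init__(self):
        self.patterns = []   # memorized spatial patterns, stored as tuples
        self.counts = {}     # (prev_index, cur_index) -> transition count

    def _index(self, pattern):
        # Memorize the pattern on first sight; return its index.
        pattern = tuple(pattern)
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        return self.patterns.index(pattern)

    def learn(self, sequence):
        # Record temporal co-occurrence: which memorized spatial
        # pattern follows which in the input sequence.
        prev = None
        for pattern in sequence:
            cur = self._index(pattern)
            if prev is not None:
                key = (prev, cur)
                self.counts[key] = self.counts.get(key, 0) + 1
            prev = cur

    def transition_matrix(self):
        # 'Transform' the raw counts into row-normalized conditional
        # probabilities, the kind of matrix belief propagation would use.
        n = len(self.patterns)
        matrix = [[0.0] * n for _ in range(n)]
        for (i, j), count in self.counts.items():
            matrix[i][j] = float(count)
        for row in matrix:
            total = sum(row)
            if total > 0:
                for j in range(n):
                    row[j] /= total
        return matrix
```

Feeding it an alternating sequence like A, B, A, B yields a matrix where A predicts B with probability 1.0 and vice versa, which is exactly the "co-occurrence only" limitation discussed below: the node can only ever learn this one kind of statistic.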
Although it is a nice idea, it is nothing new, and there are quite a
few limitations to this design. First of all, the network layout is
fixed by design. Second, the only type of pattern learned is the
"co-occurrence" type. There are more limitations. They pretend that
co-occurrence is the only building block a system needs to build a
model of its environment. I don't believe that's true. What they are
forgetting is that the brain has processes that operate on the
patterns themselves at a deeper level. Things like thought, dreams,
etc. have an important function in how we organize the model in our
heads. What would be interesting is a complete model that not only
says how we obtain patterns, but also how the patterns self-organize
to be useful.
Durk Kingma
On 2/21/07, *Aki Iskandar* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
I'd be interested in getting some feedback on the book "On
Intelligence" (author: Jeff Hawkins).
It is very well written - geared for the general masses of course -
so it's not written like a research paper, although it has the feel
of a thesis.
The basic premise of the book, if I can even attempt to summarize it
in two statements (I wouldn't be doing it justice though), is:
1 - Intelligence is the ability to make predictions on memory.
2 - Artificial Intelligence will not be achieved by today's computer
chips and smart software. What is needed is a new type of computer -
one that is physically wired differently.
I like the first statement. It's very concise, while capturing a
great deal of meaning, and I can relate to it ... it "jibes".
However, (and although Hawkins backs up the statements fairly
convincingly) I don't like the second set of statements. As a
software architect (previously at Microsoft, and currently at Charles
Schwab where I am writing a custom business engine, and workflow
system) it scares me. It scares me because, although I have no
formal training in AI / Cognitive Science, I love the AI field, and
am hoping that the AI puzzle is "solvable" by software.
So - really, I'm looking for some of your gut feelings as to whether
there is validity in what Hawkins is saying (I'm sure there is,
because there are probably many ways to solve these types of
challenges), but also as to whether the solution(s) are going to be
more hardware - or software.
Thanks,
~Aki
P.S. I remember a video I saw, where Dr. Sam Adams from IBM stated
"Hardware is not the issue. We have all the hardware we need".
This makes sense. Processing power is incredible. But after reading
Hawkins' book, is it the right kind of hardware to begin with?
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303