Dear Aki Iskandar,

My 2 cents.

I haven't read the book "On Intelligence", but from the book flowed a
"proof-of-concept" program which I analyzed thoroughly: I read the program's
source code, studied Dileep George and Hawkins's papers, etc.

The idea behind the proof-of-concept is basically a combination of a
Bayesian network and a pyramid-shaped classifier network. Nodes in the
pyramid are extremely simplistic and store co-occurring temporal and spatial
patterns. Once the pyramid has completed learning, it is 'transformed' into
a Bayesian belief-propagation network using conditional probability matrices.
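To make the idea concrete, here is a minimal sketch of what one such node
might do. This is my own toy reconstruction, not the actual code from the
proof-of-concept; the class and method names are mine. Each node memorizes
the distinct spatial patterns it sees and counts temporal transitions
between them, which can then be normalized into a conditional probability
matrix of the kind the belief-propagation step would use:

```python
from collections import defaultdict

class ToyNode:
    """Toy sketch of one node in the pyramid: memorizes spatial patterns
    and counts temporal co-occurrences (transitions) between them."""

    def __init__(self):
        self.patterns = []                   # memorized spatial patterns
        self.transitions = defaultdict(int)  # (from_idx, to_idx) -> count
        self._prev = None                    # index of previous pattern

    def learn(self, pattern):
        """Store the pattern if unseen, and count the temporal transition
        from the previously seen pattern to this one."""
        pattern = tuple(pattern)
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        idx = self.patterns.index(pattern)
        if self._prev is not None:
            self.transitions[(self._prev, idx)] += 1
        self._prev = idx

    def transition_matrix(self):
        """Normalize the raw co-occurrence counts into a conditional
        probability matrix P(next pattern | current pattern)."""
        n = len(self.patterns)
        P = [[0.0] * n for _ in range(n)]
        for (i, j), count in self.transitions.items():
            P[i][j] = float(count)
        for i in range(n):
            row_sum = sum(P[i])
            if row_sum:
                P[i] = [v / row_sum for v in P[i]]
        return P

# Feed the node a short alternating sequence of two spatial patterns.
node = ToyNode()
for p in [(0, 1), (1, 1), (0, 1), (1, 1), (0, 1)]:
    node.learn(p)
P = node.transition_matrix()
# After learning, P[0][1] and P[1][0] are both 1.0: each pattern
# deterministically predicts the other.
```

The real system stacks many such nodes into a pyramid, with each level
learning patterns over the outputs of the level below, but the basic
building block really is this simple.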

Although it is a nice idea, it is nothing new, and the design has quite a
few limitations. First of all, the network layout is fixed by design.
Second, the only type of pattern learned is the "co-occurrence" type. There
are more limitations. They pretend that co-occurrence is the only building
block a system needs to build a model of its environment. I don't believe
that's true. What they are forgetting is that the brain has processes that
operate on the patterns themselves at a deeper level. Things like thought,
dreams, etc. have an important function in how we organize the model in our
heads. What I would be interested in is a complete model that not only says
how we obtain patterns, but also how the patterns self-organize to become
useful.

Durk Kingma


On 2/21/07, Aki Iskandar <[EMAIL PROTECTED]> wrote:

I'd be interested in getting some feedback on the book "On
Intelligence" (author: Jeff Hawkins).

It is very well written - geared for the general masses of course -
so it's not written like a research paper, although it has the feel
of a thesis.

The basic premise of the book, if I can even attempt to summarize it
in two statements (I wouldn't be doing it justice though) is:

1 - Intelligence is the ability to make predictions on memory.
2 - Artificial Intelligence will not be achieved by today's computer
chips and smart software.  What is needed is a new type of computer -
one that is physically wired differently.


I like the first statement.  It's very concise, while capturing a
great deal of meaning, and I can relate to it ... it "jibes".

However (and although Hawkins backs up the statements fairly
convincingly), I don't like the second set of statements.  As a
software architect (previously at Microsoft, and currently at Charles
Schwab, where I am writing a custom business engine and workflow
system), it scares me.   It scares me because, although I have no
formal training in AI / Cognitive Science, I love the AI field, and
am hoping that the AI puzzle is "solvable" by software.

So - really, I'm looking for some of your gut feelings as to whether
there is validity in what Hawkins is saying (I'm sure there is
because there are probably many ways to solve these types of
challenges), but also as to whether the solution(s) is going to be
more hardware - or software.

Thanks,
~Aki

P.S.  I remember a video I saw, where Dr. Sam Adams from IBM stated
"Hardware is not the issue.  We have all the hardware we need".
This makes sense.  Processing power is incredible.  But after reading
Hawkins' book, is it the right kind of hardware to begin with?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

