I think "On Intelligence" is a good book.  It made an impact on
me when I first read it, and it led to me reading a lot more
neuroscience since then.  Indeed, in hindsight it seems strange to me
that I was so interested in AGI and yet hadn't seriously studied
what is known about how the brain works.  Almost nobody
in AI does, even people working on artificial neural networks.

Anyway, having read a fair amount of neuroscience since then,
it has become clear to me that while Hawkins' book is a good,
understandable summary of a particular view of neuroscience,
the claims he makes are all either well-known facts or things
which a fair number of neuroscientists already believe.  So there
isn't anything really new in there that I know of.

The other thing is that he presents a greatly simplified view of how
things really work.  This makes the book readable for non-scientists,
which is great; however, nobody really knows how many of the details
he glosses over are unimportant implementation matters, and how many
are critical to understanding how the whole system behaves.
Of course, nobody will know this for sure until the brain is fully
understood.

If you've read On Intelligence and are interested in a basic undergraduate
overview of neuroscience, I'd recommend the classic textbook "Essentials
of Neural Science and Behavior" by Kandel, Schwartz and Jessell.  Once
you've read that, much of the scientific literature in the field is
understandable.

Shane



On 2/21/07, Aki Iskandar <[EMAIL PROTECTED]> wrote:

I'd be interested in getting some feedback on the book "On
Intelligence" (author: Jeff Hawkins).

It is very well written - geared toward a general audience, of course -
so it's not written like a research paper, although it has the feel
of a thesis.

The basic premise of the book, if I can attempt to summarize it
in two statements (I wouldn't be doing it justice, though), is:

1 - Intelligence is the ability to make predictions based on memory.
2 - Artificial Intelligence will not be achieved by today's computer
chips and smart software.  What is needed is a new type of computer -
one that is physically wired differently.
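As a toy illustration of statement 1 (my own sketch, not Hawkins' actual model): the idea of prediction from memory can be reduced, in its simplest form, to remembering which symbol followed each context and predicting the most frequent continuation. The class name and parameters here are made up for the example.

```python
from collections import Counter, defaultdict

# Toy sketch of "prediction from memory": remember (context, next symbol)
# pairs, then predict the most frequently seen continuation of a context.
class MemoryPredictor:
    def __init__(self, order=2):
        self.order = order                  # how many symbols of context to use
        self.memory = defaultdict(Counter)  # context -> counts of next symbols

    def observe(self, sequence):
        """Store every (context, next symbol) pair seen in the sequence."""
        for i in range(len(sequence) - self.order):
            context = tuple(sequence[i:i + self.order])
            self.memory[context][sequence[i + self.order]] += 1

    def predict(self, context):
        """Return the most remembered continuation, or None if unseen."""
        counts = self.memory.get(tuple(context[-self.order:]))
        return counts.most_common(1)[0][0] if counts else None

p = MemoryPredictor(order=2)
p.observe("abcabcabc")
print(p.predict("ab"))  # prints 'c'
```

This obviously captures none of the hierarchical, temporal structure Hawkins describes; it just shows the bare prediction-from-stored-experience idea in statement 1.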


I like the first statement.  It's very concise, while capturing a
great deal of meaning, and I can relate to it ... it "jives".

However, although Hawkins backs up the statements fairly
convincingly, I don't like the second statement.  As a
software architect (previously at Microsoft, and currently at Charles
Schwab, where I am writing a custom business engine and workflow
system), it scares me.   It scares me because, although I have no
formal training in AI / Cognitive Science, I love the AI field and
am hoping that the AI puzzle is "solvable" by software.

So - really, I'm looking for some of your gut feelings as to whether
there is validity in what Hawkins is saying (I'm sure there is,
because there are probably many ways to solve these types of
challenges), but also as to whether the solution(s) are going to be
more hardware or more software.

Thanks,
~Aki

P.S.  I remember a video I saw, where Dr. Sam Adams from IBM stated
"Hardware is not the issue.  We have all the hardware we need".
This makes sense.  Processing power is incredible.  But after reading
Hawkins' book, is it the right kind of hardware to begin with?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

