Hi,

A couple weeks ago, I finally got around to finishing Eric Baum's book
"What Is Thought?"

I thought of writing a formal review, but I haven't found the time and
suspect I won't, so for now I'll just write a long email instead.

First of all, I think the book gives an excellent review of various
aspects of modern computing and cognitive science.  For the educated
layperson who's willing to sweat a bit, it's a valuable resource just
for this reason.

Regarding the scientific thesis, it seems to me that the main
achievement of the book is that it elaborates the theory of Solomonoff
induction in the context of contemporary cognitive science.  This is no
small achievement.  [In fact, I think I covered a lot of the same
ground in some of my books in the early and mid 90's, but I didn't do so
as clearly or accessibly as Baum does here.]

By Solomonoff induction I mean the theory that what the mind does is
seek simple algorithmic models of reality (and itself).
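To make the "simple algorithmic models" idea concrete, here is a toy
two-part-code sketch in the minimum-description-length spirit.  To be
clear, this is my own illustration, not Baum's or Solomonoff's actual
formalism, and the particular bit-costs assigned to the models below
are illustrative assumptions:

```python
import math

def data_bits(data, p):
    # bits needed to encode the bit-string under a Bernoulli(p) model
    return -sum(math.log2(p if x else 1.0 - p) for x in data)

def total_bits(data, model_bits, p):
    # two-part code: bits to state the model itself, plus bits to
    # encode the data given that model
    return model_bits + data_bits(data, p)

# observed data: mostly ones
data = [1, 1, 1, 1, 1, 0] * 8

# candidate "programs": a fair coin is nearly free to state; a model
# with a tuned bias parameter costs extra bits to describe
fair = total_bits(data, model_bits=1, p=0.5)
tuned = total_bits(data, model_bits=8, p=sum(data) / len(data))

best = "tuned" if tuned < fair else "fair"  # the shorter total description wins
```

Here the tuned model "pays for itself": its extra description cost is
outweighed by the savings in encoding the data.  That trade-off is the
Occam's-razor core of the picture.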

One peculiarity is that Ray Solomonoff and his work are not explicitly
mentioned even though Baum's crucial point seems about the same as
Solomonoff's.  However, algorithmic information theory (AIT) and minimum
description length theory are mentioned.  AIT was independently
developed by Solomonoff, Chaitin and Kolmogorov, but of those three
inventors, Solomonoff was the only one who initially saw the theory's
implications for AI and cognitive science (which are the implications
Baum focuses on).

Along with this omission, Baum also fails to mention recent work done in
the Solomonoff-induction vein, including work by Marcus Hutter and
Juergen Schmidhuber that has been discussed on this list.  This is an
odd omission, because Hutter's work is in many ways a more rigorous
version of some of the ideas Baum puts forth in his book.

However, Solomonoff, Hutter and Schmidhuber basically dwell in the realm
of mathematical abstraction.  Baum ties in the "Occam's Razor" approach
with all sorts of other things, such as evolutionary theory,
linguistics, and the psychology of heuristics.  For this reason his book
is a valuable and fascinating contribution.

He analyzes DNA as encoding a powerful "inductive bias," which
predisposes us to search for certain types of compact programs
summarizing the data we perceive in the world and in ourselves.  This is
doubtless the case, and is an important point of view to get across.

My personal scientific opinion, however, is that he overstates the case
for this point.  This is my main disagreement with Baum as a
theorist.  It's not exactly a critique of his *book*, however, as the
book puts forth his own point of view very well.

Baum basically believes that creating an AI isn't plausible in the near
or medium term without recreating in considerable detail the particular
inductive biases that live in the human brain.  My own belief is that
this is only the case if one wants to create a very closely human-like
AI.  In essence, Baum seems to think that the human mind consists of:

* Fairly simple algorithms for finding compact programs summarizing data
* A lot of specific guidelines for how to find compact programs in
specific domains of evolutionary value to humans
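As a toy illustration of how the second item can help the first (again,
purely my own sketch with made-up numbers, not anything from the book):
compare a brute-force enumeration of bit-string hypotheses with a search
that tries bias-favored hypotheses first.

```python
import itertools

def search(target, candidates):
    # return how many hypotheses were examined before hitting the target
    for i, h in enumerate(candidates, start=1):
        if h == target:
            return i

n = 12
target = (1,) * 10 + (0,) * 2   # the "true" pattern: mostly ones

all_hyps = list(itertools.product([0, 1], repeat=n))

# unbiased search: plain enumeration order over all 4096 hypotheses
unbiased_cost = search(target, all_hyps)

# biased search: try hypotheses with more ones first -- a built-in
# expectation about what the data will probably look like
biased = sorted(all_hyps, key=lambda h: -sum(h))
biased_cost = search(target, biased)
```

When the bias matches the domain, the biased searcher examines far fewer
hypotheses; when it doesn't, it can do worse than brute force.  That
asymmetry is exactly why evolution would pay for domain-specific biases.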

I agree that the human mind contains both of these.  However, I ascribe
a greater role than he does to complex algorithms for finding compact
programs summarizing data in broad classes of domains.  I think that we
can create such complex algorithms and embody them in digital computer
programs and thus achieve human-level and greater intelligence --
without exactly copying the complex algorithms of this nature that exist
in the human mind.  

Among other topics, Baum describes his own AI program Hayek, which is a
very interesting approach to the credit assignment problem, but which
fails to be an adequately flexible and effective "algorithm for finding
compact programs summarizing data in a broad class of domains."  Hayek,
fascinating as it is, typifies the "toy systems" that are common in
mainstream AI today.  I don't think toy systems like this are going to
lead anyone to AGI; I think people need to build integrative systems,
and that the right kinds of algorithms exist only in diverse
self-organizing networks that embody mixtures of agents at various levels
of specialization and generality.  But OK -- you already know my schtick
-- this was supposed to be about Baum's book ;-)

Well Eric -- thanks for a stimulating read!

-- Ben G



