Eric Baum wrote:
Richard> Eric Baum wrote:
Richard> Every step of the following argument begs questions and lacks
Richard> force:
If you want a more complete argument, read the book. One of the
reasons for writing a book is not to have to engage in arguments
piecemeal.
Eric
Richard> I read your paper.
Richard> I am not sure how I would be helped by a "more complete"
Richard> version of an argument that already appears to be so severely
Richard> broken.
Richard> As I see it, the root cause of the trouble (in your paper, in
Richard> the literature behind that paper, and in the sequence of
Richard> arguments you summarized) is that some people have been so
Richard> obsessed with the idea of turning concepts like
Richard> "understanding" and "intelligence" into precisely formulated
Richard> concepts, amenable to mathematical proofs, that they are
Richard> willing to redefine and distort the meanings of those words
Richard> in order to strait-jacket them into the form they desire. In
Richard> this manner, you quote the COLT literature as having shown
Richard> that understanding is, essentially, compression.
My apologies for creating this misconception in your mind. The COLT
literature in no way claims that understanding is equivalent to
compression. It does not discuss understanding.
The COLT literature more or less proved that generalization in various
contexts is equivalent to finding hypotheses from simple classes.
Incidentally, these classes may or may not be viewed as compression;
one alternative criterion is finite VC-dimension (see chapter 4 of
WIT? for more details). There are also Bayesian viewpoints, etc.
But in any case, these mathematical results mostly apply to the
prediction of concepts: e.g., you see a series of pictures of chairs
and non-chairs, and want to predict whether a new picture contains a
chair or not.
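As a toy illustration of that setting (this sketch is mine, not part of the original exchange; the one-dimensional threshold class is an assumption chosen purely for simplicity): a learner that restricts itself to a simple hypothesis class can pick a hypothesis consistent with a handful of labeled examples and generalize to unseen cases, which is the COLT-style point about simple classes licensing generalization.

```python
# Toy concept learning: restrict hypotheses to a simple class
# (single thresholds on a 1-D feature) and pick one consistent
# with the labeled examples. The simplicity of the class is what
# licenses generalization in the COLT sense.

def fit_threshold(examples):
    """examples: list of (x, label) pairs, label True for 'chair'.
    Returns a threshold t such that x >= t predicts True, assuming
    the data are consistent with some threshold hypothesis."""
    positives = [x for x, y in examples if y]
    negatives = [x for x, y in examples if not y]
    t = min(positives)  # smallest positive example
    if any(x >= t for x in negatives):
        raise ValueError("no consistent threshold hypothesis")
    return t

def predict(t, x):
    return x >= t

# A few labeled examples of a hidden concept "x >= 5"
data = [(2, False), (4, False), (5, True), (7, True), (9, True)]
t = fit_threshold(data)
print(predict(t, 6))  # True: generalizes to an unseen point
print(predict(t, 3))  # False
```

The point of the restriction is that with an unrestricted hypothesis class, any labeling of unseen points would be consistent with the examples and no generalization would follow.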
The extrapolation to an explanation of understanding is mine,
so I bear whatever blame you may wish to assign.
This is where the distinction between exploiting structure and simply
finding a compact representation comes in. I am no longer talking
merely about finding a compact representation of some data; I am
extrapolating to a compact program that solves a variety of naturally
presented problems. This is also where the mathematical proofs give
out.
I would hope reading the book would give you a better appreciation
for why I think understanding comes from finding an Occam code.
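To make the compression-flavored intuition concrete (this toy two-part code is my own illustration, not Baum's construction; the bit costs are arbitrary assumptions): under a minimal-description-length scheme, the cost of a hypothesis is the bits needed to state it plus the bits needed to list the data's exceptions to it, and a simple rule that captures the data's regularity beats rote memorization.

```python
import math

# Toy two-part (Occam / MDL) code: total cost of a hypothesis is
# bits to state the hypothesis plus bits to index each exception
# in the data. The shortest total code captures the regularity.

def code_length(hypothesis_bits, data, predict):
    exceptions = [i for i, x in enumerate(data) if predict(i) != x]
    index_bits = math.log2(len(data)) if data else 0  # cost per exception index
    return hypothesis_bits + len(exceptions) * index_bits

data = [1, 1, 1, 1, 0, 1, 1, 1]  # mostly ones, one exception

# Hypothesis "always 1" (assumed to cost 2 bits to state) vs.
# memorizing the data literally (one bit per element).
cost_simple = code_length(2, data, lambda i: 1)
cost_literal = code_length(len(data), data, lambda i: data[i])

print(cost_simple < cost_literal)  # True: the simple rule wins
```

The specific bit accounting does not matter much; what matters is that the comparison only comes out this way when the data actually contain structure the simple hypothesis exploits.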
I believe my book presents a straightforward explanation of what
understanding is and how it arises, one that is consistent with all
the data of which I'm aware and that is very natural in the context
of the COLT results on concept generalization. I'm not aware of any
other theory of understanding that meets these standards.
In light of what you say, I'll see what I can do to look at your book.
The issue I was raising, about redefining concepts, is something I
think is very deep. Where you talk, above, about the COLT literature
having proved facts about generalization, categorization, etc., I see
many examples of the redefinition problem, but they are incredibly
painful and time-consuming to sort out. The general theme is that
people take a commonsense concept like "concept" and then *implicitly*
redefine it so that categorization becomes a many-to-one mapping of
discrete examples to discrete concepts, with probabilities. Doing
this renders the notion of concept tractable, but it builds in an
implicit claim about cognitive mechanisms and the discreteness of the
world, and (in my opinion at least) that claim is extremely dubious.
At the very least, since the assumptions are not made explicit, their
credibility is not discussed (at least not by the COLT people, as far
as I am aware).
This is not an issue specific to your point of view, it is a general
problem with many subfields of conventional AI.
As I say, I will try to see whether your book contains material that
evades this trap; my reading of your paper made me suspect not, but I
will suspend judgment.
Richard Loosemore.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303