Matt,

You asked "What would be a good test for understanding an algorithm?"

Thanks for posing this question. It has been a good exercise. Assuming that the key word here is "understanding" rather than "algorithm," I submit:

A test of understanding is if one can give a correct *explanation* for any and all of the possible outputs that it (the thing to understand) produces.

I see this as having merit in explaining "understanding" because it shows that one grasps the transformations that occur in the system. If one is given a set of inputs and states, then one could state what the output would be by stepping through the transformations that take place.

This differs slightly from prediction, because prediction demands that you be able to instrument every state and input at a given moment. This creates a distinction between something being "hard," "extremely hard," or in practice "impossible" to predict, versus not so hard to "understand."
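The distinction can be made concrete with a small sketch. The toy algorithm and every name below are my own illustrative assumptions, not anything from this thread: "understanding" here means being able to replay the transformations and account for a given output step by step, whereas prediction would additionally require access to the actual state and inputs at the moment in question.

```python
# Illustrative sketch only: a toy system of transformations, and an
# "explanation" that accounts for an output by stepping through them.

def toy_algorithm(state, inputs):
    """Accumulate each input into the state, doubling the
    contribution of even inputs. Returns the final state and a
    trace of (input, contribution, resulting state) steps."""
    trace = []
    for x in inputs:
        step = x * 2 if x % 2 == 0 else x
        state = state + step
        trace.append((x, step, state))
    return state, trace

def explain(output, state0, inputs):
    """'Understanding' in the proposed sense: given the inputs and
    initial state, account for the output by narrating each
    transformation that takes place."""
    final, trace = toy_algorithm(state0, inputs)
    narration = [f"input {x} contributes {step}; state becomes {s}"
                 for x, step, s in trace]
    return final == output, narration

ok, steps = explain(8, 0, [1, 2, 3])
# ok is True: the output 8 is fully accounted for by the three steps
```

Note that `explain` needs to be handed the initial state and inputs; without instrumenting them at the moment of execution, the output cannot be predicted, yet the transformations themselves remain understood.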

Stan


Jim Bromer wrote:



----- Original Message ----
From: Matt Mahoney <[EMAIL PROTECTED]>
--- Jim Bromer <[EMAIL PROTECTED]> wrote:

 > I don't want to get into a quibble fest, but understanding is not
 > necessarily constrained to prediction.

What would be a good test for understanding an algorithm?

-- Matt Mahoney, [EMAIL PROTECTED]
------------------------------------------------------
I don't have a ready answer for this. First of all (maybe I do have a ready answer), understanding has to be understood in the context of partial understanding. I can understand something about a subject without being an expert in it, and I am Skeptical of anyone who claims that total understanding is feasible (except for a bounded discussion of a bounded concept, in which case I would only be skeptical with a small s).

So to start with, I could say that understanding an algorithm could be defined by various kinds of partial knowledge of it. What kinds of input does it react to, and what kinds of internal actions does it take? What kind of output does it produce? Can generalizations be made about the input it takes, its internal actions, and its output? What was it designed to do? Can relations be drawn between specific examples, or derived generalizations, of its input, its internal actions, and its output?

While some of this kind of knowledge would require some kind of intelligence, other parts could be expressed in simpler data-concepts. Harnad's categorical grounding comes to mind. An experimental AI program would be capable of deriving data from the operation of an algorithm if its program were created around this paradigm of examining an algorithm. It could then create its own kind of analysis of the algorithm, and even though it might not be the same as an analysis that we might create, it still might be usable to produce something that would border on understanding.
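One way to read the above concretely (a hedged sketch under my own assumptions; the harness, the toy subject, and all names below are illustrative, not any actual system): a program that records an algorithm's inputs, internal actions, and outputs, then derives a crude generalization from the records.

```python
# Illustrative sketch: observe an algorithm's behavior, then derive a
# partial "understanding" as a mapping from internal actions to the
# inputs that trigger them.

def observe(algorithm, inputs):
    """Run the algorithm on each input, recording the triple
    (input, internal actions taken, output)."""
    records = []
    for x in inputs:
        actions = []
        out = algorithm(x, actions.append)  # the algorithm reports its actions
        records.append((x, actions, out))
    return records

def absolute_value(x, report):
    """Toy subject of study: |x|, reporting which branch it takes."""
    if x < 0:
        report("negate")
        return -x
    report("pass-through")
    return x

def generalize(records):
    """Derive a crude generalization: which inputs trigger which
    internal action. A partial, not total, understanding."""
    by_action = {}
    for x, actions, _ in records:
        by_action.setdefault(actions[0], []).append(x)
    return by_action

records = observe(absolute_value, [-2, -1, 0, 3])
print(generalize(records))
# {'negate': [-2, -1], 'pass-through': [0, 3]}
```

The derived mapping is not the analysis a human would write ("negation of negatives"), but it relates input classes to internal actions, which is the kind of thing that might border on understanding in the sense above.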

The capacity of prediction is significant in the kind of derived generalizations and categorical exemplars that I am thinking of, but the concept of understanding must go beyond simple prediction.

Jim Bromer



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com
