Mark,

First you attacked me for making a statement which you falsely claimed
indicated I did not understand the math in the Collins article (and
potentially discredited everything I said on this list).  Once it was shown
that that attack was unfair, rather than apologizing sufficiently for the
unfair attack, you now seem to be coming back with another swing.  Now you
are implicitly attacking me for implying that it is new to think you could
deal with vectors in some sort of compressed representation.

I was aware that there were previous methods for dealing with vectors in
high dimensional spaces using various compression schemes, although I had
only heard of a few examples.  I personally had been planning, for years
prior to reading the Collins paper, to score matches based mainly on the
number of similar features, and not on all the dissimilar features (except
in certain cases), to avoid the curse of high dimensionality.

But I was also aware of many discussions, such as one in a current best
selling AI textbook, which imply that certain problems easily become
intractable because they assume one is saddled with dealing with the full
possible dimensionality of the problem space being represented, when it is
clear you can accomplish much of the same thing with a GNG-type approach by
only placing representation where there are significant probabilities.
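As a toy illustration of that last point (a much-simplified stand-in for a
real GNG, with a distance threshold I made up for the example), the idea is
to let the data itself decide where representation goes:

import math

def grow_prototypes(samples, radius=0.5):
    """Keep a prototype only where data actually appears; the rest of the
    space, however high-dimensional, gets no representation at all."""
    prototypes = []
    for x in samples:
        if not any(math.dist(x, p) < radius for p in prototypes):
            prototypes.append(x)
    return prototypes

data = [(0.1, 0.1), (0.15, 0.12), (3.0, 3.1), (3.05, 3.0)]
print(grow_prototypes(data))  # two prototypes, one per occupied region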

So, although it may not be new to you, it seems to be new to some that the
curse of high dimensionality can often be avoided in many classes of
problems.  I was citing the Collins paper as one example showing that AI
systems have been able to deal well with high dimensionality.  I attended a
lecture at MIT a few years after the Collins paper came out where the major
thrust of the speech was that great headway was recently being made in many
fields of AI because people were beginning to find all sorts of efficient
hacks that avoid many of the problems of combinatorial explosion of high
dimensionality that had previously thwarted their efforts.  The Collins
paper is an example of that movement.

When it was relatively new, the Collins paper was treated by several people
I talked to as quite a breakthrough, because in conjunction with the work of
people like Haussler it showed a relatively simple way to apply the kernel
trick to graph mapping.  As you may be aware, the kernel trick not only
allows one to score matches, but also allows many of the analytical tools of
linear algebra to be applied through the kernel, greatly reducing the
complexity of applying such tools in the much higher dimensional space
represented by the kernel mapping.  I am not a historian of this field of
math, but in its day the kernel trick was getting a lot of buzz from many
people in the field.  I attended an NL conference at CMU in the early '90s.
The use of support vector classifiers using the kernel trick was all the
rage at the conference, and the kernels they were using seemed much less
appropriate than the one the Collins paper discloses.
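For anyone on the list who wants a concrete picture, here is a minimal
sketch of what I mean by the kernel trick (a generic polynomial kernel, not
the tree kernel from the Collins paper): the kernel returns the dot product
in a huge implicit feature space without ever enumerating that space's
coordinates.

import numpy as np

def poly_kernel(x, y, degree=3):
    """Dot product in the implicit space of all monomials up to `degree`,
    computed without ever writing those coordinates out."""
    return (np.dot(x, y) + 1.0) ** degree

x = np.array([1.0, 2.0, 0.0, 3.0])
y = np.array([0.5, 1.0, 2.0, 0.0])
print(poly_kernel(x, y))  # one number, standing in for a dot product over dozens of implicit dimensions

If I remember the paper correctly, the Collins tree kernel does the
analogous thing for parse trees, counting common subtrees by dynamic
programming without ever enumerating the space of all subtrees.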

Ed Porter


-----Original Message-----
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 9:09 AM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

>> THE KEY POINT I WAS TRYING TO GET ACROSS WAS ABOUT NOT HAVING TO 
>> EXPLICITLY DEAL WITH 500K TUPLES

And I asked -- Do you believe that this is some sort of huge conceptual 
breakthrough?



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=73199664-8396ea
