On Thu, Jul 24, 2003 at 08:57:59AM -0700, Ian Clarke wrote:
> It is also rather surprising that the BDA algorithm without useNearest
> performed *better* than BDA with useNearest - do you have any hypothesis
> as to why this might be? It might indicate a problem with the BDA
> implementation.
I've just added a graph to illustrate this point in more detail; look at the bottom part of:

http://homepages.cwi.nl/~cilibrar/ngrouting/

In the northwest corner you can see the problem with the BDA-style algorithms: they see an aberration and then reapply that guess over the next 4 green crosses (or the next blue square), even though that point didn't "make sense" with regard to the rest of the data. This is why it doesn't make sense to use SVMs only after a certain crucial data threshold is reached -- the counter-intuitive truth is that seemingly simpler and "more reliable" methods like exponential decay can get confused early and stay confused longer, because they cannot differentiate model from noise. You can see the same problem again around sample 12, again near sample 16, and again around sample 55. In practice, these algorithms are less reliable.

Rudi
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
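P.S. To make the exponential-decay point concrete, here is a minimal sketch (hypothetical illustration only, not the ngrouting code or the BDA implementation): an exponentially weighted average treats every observation alike, so a single aberrant sample keeps biasing the next several estimates, which is the "confused early, stays confused" behavior described above.

```python
# Hypothetical sketch: exponential-decay smoothing cannot tell model
# from noise, so one outlier contaminates several subsequent estimates.

def ewma(samples, alpha=0.5):
    """Exponentially weighted moving average; alpha is the decay factor."""
    est = samples[0]
    out = [est]
    for x in samples[1:]:
        est = alpha * x + (1 - alpha) * est  # every sample trusted equally
        out.append(est)
    return out

# A steady signal around 10 with one aberration at sample 3.
data = [10.0, 10.0, 10.0, 100.0, 10.0, 10.0, 10.0, 10.0]
print([round(e, 1) for e in ewma(data)])
# The spike at sample 3 still dominates the estimates for the next few
# samples, even though it does not fit the rest of the data at all.
```

A model-aware method can instead flag the spike as an outlier against the rest of the data and discount it, rather than folding it into the running estimate.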
