On Wed, Jul 23, 2003 at 03:26:40PM +0100, Theodore Hong wrote:
> Is there any way of doing the training incrementally?  That is,
> instead of retraining the model from scratch each time, is there an
> algorithm that can incorporate new observations as they come in?  For
> the classification mode in particular, it seems that many samples may
> not change the model at all if they are sufficiently far away from the
> hyperplane dividing SUCCESS from FAIL.  Similarly for forgetting old
> data - if a sample is not a support vector, deleting it will have no
> effect, right?
Hmmm, I didn't realize that it wasn't incremental.  Given that each node
is likely to have about 50 other nodes for which it must record data, and
given that several thousand requests per hour would not be uncommon - is
this going to be practical from a CPU/memory requirements perspective?

Ian.

-- 
Ian Clarke                              [EMAIL PROTECTED]
Coordinator, The Freenet Project        http://freenetproject.org/
Founder, Locutus                        http://locut.us/
Personal Homepage                       http://locut.us/ian/
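
On the point about samples far from the hyperplane: a minimal sketch of an
online linear SVM, trained by stochastic gradient descent on the hinge loss,
shows the effect directly.  An observation that is already on the correct
side of the hyperplane and outside the margin produces no update at all, so
folding it in is essentially free.  The class below is illustrative only
(the feature layout, the regularization constant lam, and the step-size
schedule are assumptions, not anything specified in this thread); exact
incremental/decremental algorithms also exist for kernel SVMs, where
removing a sample that is not a support vector leaves the solution
unchanged.

    import numpy as np

    class OnlineLinearSVM:
        """Sketch of SGD on the hinge loss; not Freenet code."""

        def __init__(self, n_features, lam=0.01):
            self.w = np.zeros(n_features)   # normal of the SUCCESS/FAIL hyperplane
            self.lam = lam                  # regularization strength (assumed value)
            self.t = 0                      # number of observations folded in

        def update(self, x, y):
            """Incorporate one observation; y is +1 for SUCCESS, -1 for FAIL."""
            self.t += 1
            eta = 1.0 / (self.lam * self.t)   # decaying step size
            margin = y * np.dot(self.w, x)    # margin under the current model
            self.w *= 1.0 - eta * self.lam    # regularization shrink
            if margin < 1.0:                  # only margin violations move the hyperplane
                self.w += eta * y * x
            # a sample well clear of the hyperplane falls through: model unchanged

        def predict(self, x):
            return 1 if np.dot(self.w, x) >= 0.0 else -1

    # Usage sketch: one observation per routed request, per neighbouring node.
    clf = OnlineLinearSVM(n_features=4)
    clf.update(np.array([0.2, 1.0, 0.0, 3.5]), +1)   # a SUCCESS observation
    print(clf.predict(np.array([0.1, 0.9, 0.0, 3.0])))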
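
On the CPU/memory question, a rough back-of-envelope (the window size and the
training-cost exponent are assumptions, not figures from this thread): batch
SVM training is typically at least quadratic in the number of stored samples,
so retraining a per-node model over a window of ~1,000 observations on every
new request is on the order of 1,000^2 = 10^6 kernel evaluations per request,
multiplied by thousands of requests per hour.  An incremental update like the
sketch above costs O(d) per observation, where d is the number of features.
Memory is dominated by the stored window either way: 50 nodes x 1,000 samples
x a handful of 8-byte doubles is only a few megabytes.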
