I haven't done any experiments comparing train-on-error with
train-on-everything. I've read that paper before and don't remember
being particularly impressed by it.
Henry
Sidney Markowitz wrote:
Henry,
In the paper "An Assessment of Case-Based Reasoning for Spam Filtering"
http://www.comp.dit.ie/sjdelany/publications/AICS%202004%20(crc).pdf
the authors compare CBR with naive Bayes (NB). One conclusion (on
their test data, with their implementation of NB) is that daily updating
of the training data with misclassified mails improved the false
positive rate but degraded the false negative rate enough to have an
overall negative effect on their measure of performance.
How does that compare to your results on the effect of train-on-error
vs. train-on-everything?
If CBR does end up better than NB when used with train-on-error, that
is an advantage in terms of the computational resources required.
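For readers unfamiliar with the terminology: the two update regimes being contrasted ("learn on everything" vs. "learn on error") can be sketched roughly as below. This is hypothetical illustration code, not taken from either filter under discussion; TinyNB is a deliberately minimal stand-in for a real NB implementation.

```python
import math
from collections import Counter

class TinyNB:
    """Minimal multinomial naive Bayes over word lists, Laplace-smoothed."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}   # word tokens seen per class
        self.docs = {"spam": 0, "ham": 0}     # messages trained per class

    def train(self, words, label):
        self.counts[label].update(words)
        self.totals[label] += len(words)
        self.docs[label] += 1

    def classify(self, words):
        n_docs = sum(self.docs.values()) or 1
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"])) or 1
        best, best_lp = None, None
        for label in ("spam", "ham"):
            # log prior with add-one smoothing over the two classes
            lp = math.log((self.docs[label] + 1) / (n_docs + 2))
            for w in words:
                lp += math.log((self.counts[label][w] + 1)
                               / (self.totals[label] + vocab))
            if best_lp is None or lp > best_lp:
                best, best_lp = label, lp
        return best

def train_on_everything(nb, stream):
    """Update the model on every (words, label) message, right or wrong."""
    for words, label in stream:
        nb.train(words, label)

def train_on_error(nb, stream):
    """Update the model only on messages it misclassifies."""
    for words, label in stream:
        if nb.classify(words) != label:
            nb.train(words, label)
```

Train-on-error touches the model only on mistakes, so it ends up storing far fewer examples; for a CBR system, whose case base grows with each retained message, that is where the resource saving would come from.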
-- sidney