On Mon, Dec 06, 2004 at 11:48:24AM -0400, Henry Stern wrote:
> After comparing autolearn to Hebbian learning in my thesis, I got to
> thinking:  The "count them up" method of learning probabilities is very
> good for building an initial classifier, but the probabilities tend to
> converge to a mean.  This means that it becomes geometrically slower
> over time for a personalised text classifier to adapt to a major change.
> 
> The probability table for the Bayes classifier can be trained using
> error backpropagation, the same way that I train the perceptron.  The
> only difference is that it requires a different delta rule.  I have
> derived it for the basic naive Bayes classifier:
...snips...
> Any comments?  Interest in co-authoring a research paper (*poke*,
> Vaishnavi and friends at UWash)?  Has this been done before?

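Not to pre-empt your delta-rule derivation, but to make the "count them
up" point concrete for anyone following along: the per-token estimate is
essentially a ratio of counts, so the effect of any single new training
message shrinks as the counts grow.  A toy Python sketch (my
simplification, not what SA's Bayes actually stores, and not Henry's
maths):

# Toy illustration of why a count-based estimate adapts more and more
# slowly: once N observations back a token, one more observation shifts
# its probability by something on the order of 1/N.

def token_spam_prob(spam_count, ham_count):
    """Naive count-based estimate of P(spam | token), no smoothing."""
    return spam_count / (spam_count + ham_count)

for n in (10, 100, 1000, 10000):
    before = token_spam_prob(n, n)      # 0.5 after n spam and n ham hits
    after = token_spam_prob(n + 1, n)   # one more spam hit on that token
    print(f"N={2*n:6d}  shift from one new message: {after - before:.6f}")

That ever-shrinking step is exactly the sluggishness you describe, and
why a gradient-style update looks attractive.
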
A chap called Simon Byrnand <[EMAIL PROTECTED]> came up, back in
July, with a patch to SA2.6 to make it do Bayes learning by exception,
rather than learning everything regardless.  His idea was to behave as
at present whilst learning the initial 200 spam/200 ham, but once Bayes
came into play, the learner would discard messages that scored BAYES_99
or BAYES_00 rather than reinforcing that pattern.  Sadly I didn't have
time to try the patch, but I do have a copy of it if you are interested
in comparing it with your approach.
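
To make Simon's behaviour concrete, here is a rough Python sketch of the
decision (the names, thresholds and the 200-message bootstrap figure are
my own rendering of the description above, not his actual patch or SA's
real internals):

BOOTSTRAP = 200        # learn everything until 200 spam / 200 ham are in the DB
BAYES_99_MIN = 0.99    # Bayes probability at or above this fires BAYES_99
BAYES_00_MAX = 0.01    # at or below this fires BAYES_00

def should_learn(bayes_prob, is_spam, n_spam_learned, n_ham_learned):
    """Return True if this message should be fed back into the Bayes store."""
    # Phase 1: still building the initial corpus, so learn everything.
    if n_spam_learned < BOOTSTRAP or n_ham_learned < BOOTSTRAP:
        return True
    # Phase 2: Bayes is active.  Skip mail the classifier already gets
    # emphatically right, so only errors and borderline cases reinforce.
    if is_spam and bayes_prob >= BAYES_99_MIN:
        return False
    if not is_spam and bayes_prob <= BAYES_00_MAX:
        return False
    return True

The net effect is that a mature database only moves when the classifier
is wrong or unsure, which is the train-by-exception behaviour I'd like
to see in stock SA.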

All the papers I've read that examine learning systems find that they
work best when trained by exception, and this is something I would
like to see in SA's Bayes.  At present, it is very hard to correct a
Bayes misclassification if a system has been "trained to death".

Nick
-- 
http://www.leverton.org/                      ... So express yourself
