http://bugzilla.spamassassin.org/show_bug.cgi?id=3821





------- Additional Comments From [EMAIL PROTECTED]  2004-09-27 05:22 -------
The bug shows two principal problems with perceptrons: 
1.) They are only guaranteed to converge to a local optimum.
2.) In general, they have no protection against overfitting, meaning that they
"learn the training data set by heart" and fail to generalize to new cases
(messages they were not trained on).
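To make point 2 concrete, here is a minimal perceptron training loop (a generic sketch on hypothetical data, not SpamAssassin's actual rescoring code). Note that the update rule stops as soon as the training set is classified perfectly; nothing in it penalizes weight magnitude or otherwise protects against overfitting:

```python
def train_perceptron(samples, labels, epochs=100, lr=1.0):
    """Classic perceptron rule. samples: list of feature vectors;
    labels: +1/-1. Hypothetical helper for illustration only."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:          # misclassified -> update weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:                 # training set fit exactly; stop
            break
    return w, b
```

The early stop on `errors == 0` is precisely the "learned by heart" behaviour: training halts the moment the training messages are separated, regardless of how well the weights will do on unseen mail.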

Both may have happened in the Bayes-score example.  
(Also, note that the encoding of the output from the Bayes classifier is
unnecessarily hard for the perceptron to learn: a single Bayes-score value
(a real number in [0, 1]) would be much easier to learn.)
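The two encodings contrasted above can be sketched as follows. This is a hypothetical illustration: the bin edges and the idea of one binary feature per score band (in the spirit of the BAYES_* rules) are assumptions for the example, not SpamAssassin's exact rule boundaries:

```python
def encode_binned(p, edges=(0.0, 0.2, 0.5, 0.8, 1.0)):
    """One binary indicator per probability band -- the 'coded'
    representation the perceptron has to untangle. Edges are
    illustrative, not the real BAYES_* thresholds."""
    feats = [0] * (len(edges) - 1)
    for i in range(len(edges) - 1):
        in_bin = edges[i] <= p < edges[i + 1]
        if in_bin or (p == 1.0 and i == len(edges) - 2):
            feats[i] = 1
    return feats

def encode_direct(p):
    """The alternative: feed the Bayes probability itself as a
    single real-valued feature."""
    return [p]
```

With the direct encoding the learner only has to find one weight for a monotone input; with the binned encoding it must recover the ordering of the bins from several independent binary weights.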

A real fix for the problem would be not to use perceptrons at all.  Other
machine learning algorithms (boosting or support vector machines) have much
better regularization properties, and they are guaranteed to converge to a
global optimum.
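For contrast with the perceptron loop, here is a sketch of a linear SVM trained by stochastic subgradient descent on the L2-regularized hinge loss (a Pegasos-style scheme; hyperparameters and data are assumptions for illustration). The `lmbda` term is the regularization referred to above: it continually shrinks the weights instead of letting training stop at any set of weights that happens to fit the training data:

```python
import random

def train_linear_svm(samples, labels, lmbda=0.01, epochs=200, seed=0):
    """Minimize lmbda/2 * ||w||^2 + mean hinge loss by stochastic
    subgradient descent. No bias term, for brevity."""
    rng = random.Random(seed)
    w = [0.0] * len(samples[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(samples)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lmbda * t)     # standard decaying step size
            x, y = samples[i], labels[i]
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin < 1:              # inside margin: hinge gradient + shrink
                w = [(1 - eta * lmbda) * wi + eta * y * xi
                     for wi, xi in zip(w, x)]
            else:                       # correct with margin: only shrink
                w = [(1 - eta * lmbda) * wi for wi in w]
    return w
```

Because the objective is convex, this converges to the global optimum, and the shrinkage applied on every step is what gives the regularization behaviour the perceptron lacks.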

Sure, the perceptron is an improvement over the GA.  But, IMHO, it is still not
the best way to go.


