and 2) reduce the score of, or disable entirely, tests that fire often on your ham.
To help with the false negatives, try some of the SARE rulesets.
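For point 2, score overrides go in local.cf. The rule names below are placeholders only; use whichever tests you actually see hitting your ham in the X-Spam-Status headers:

```
# local.cf -- override scores for tests that fire on your ham
# (HTML_MESSAGE / MIME_HTML_ONLY are just examples)
score HTML_MESSAGE 0.1     # lower a noisy test's score
score MIME_HTML_ONLY 0     # a score of 0 disables the test entirely
```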
Scot L. Harris wrote:
I have been running spamassassin at home for well over a year with great results. No false positives and very few spam getting through.
At the first of the year I set up SpamAssassin 2.63 on the server at work. Initially the results were great, similar to the results I had at home.
However over the past couple of weeks I have seen an increase in the number of spam getting through as well as a number of false positives.
As a result I have been going back over the documentation to see what I
can do to improve the situation. I suspect the problem may be the sheer
volume of spam that has been fed to the Bayes database. Currently I
have a little over 47000 spam and less than 4000 ham in the database.
Each day they have been adding several hundred spam to the database, and
sometimes over a weekend several thousand spam are added.
[EMAIL PROTECTED] spamuser]$ sa-learn --dump magic
0.000          0          2          0  non-token data: bayes db version
0.000          0      47706          0  non-token data: nspam
0.000          0       3842          0  non-token data: nham
0.000          0     114152          0  non-token data: ntokens
0.000          0 1088208242          0  non-token data: oldest atime
0.000          0 1088534618          0  non-token data: newest atime
0.000          0 1088534092          0  non-token data: last journal sync atime
0.000          0 1088423773          0  non-token data: last expiry atime
0.000          0      86400          0  non-token data: last expire atime delta
0.000          0      80248          0  non-token data: last expire reduction count
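Incidentally, the imbalance is easy to check mechanically from that output. A small sketch (the 10:1 threshold is my own assumption, not an official SpamAssassin recommendation):

```python
# Sketch: check spam/ham balance by parsing `sa-learn --dump magic` output.
import re

def bayes_balance(dump_text, max_ratio=10.0):
    """Return (nspam, nham, balanced) from dump-magic output."""
    counts = {}
    for line in dump_text.splitlines():
        m = re.search(r"non-token data: (nspam|nham)$", line)
        if m:
            # the count is the third whitespace-separated field
            counts[m.group(1)] = int(line.split()[2])
    nspam, nham = counts["nspam"], counts["nham"]
    return nspam, nham, nspam <= max_ratio * nham

dump = """\
0.000          0          2          0  non-token data: bayes db version
0.000          0      47706          0  non-token data: nspam
0.000          0       3842          0  non-token data: nham
"""
print(bayes_balance(dump))  # -> (47706, 3842, False): over 12 spam per ham
```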
Would I improve things by flushing the entire database and starting over? Is the differential in spam vs. ham causing the scores to drift, and should I try to keep these numbers closer?
Any other recommendations on normal maintenance that should be done? I am using the packaged rule sets as well as the bayes database.
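For routine maintenance, a few sa-learn options are worth knowing. These names are from the sa-learn man pages I have at hand; double-check them against the 2.63 man page, as some options have changed between releases:

```
sa-learn --rebuild        # sync the journal into the Bayes db
sa-learn --force-expire   # run token expiry now
sa-learn --clear          # wipe the db so you can retrain from scratch
```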
Thanks for any assistance you can provide.
