On Tue, Dec 16, 2008 at 10:21:44AM +0200, Henrik K wrote:
> On Mon, Dec 15, 2008 at 03:58:40PM -0800, John Hardin wrote:
> >
> > You should be able to run base SA, a bayes database (you'll probably want 
> > to avoid autolearning) and *some* custom rules. You might not be able to  
> > use the larger custom rules like the Sought sets - try them and see.
> 
> Having some custom rules makes little difference. The SA base code is huge.
> Sought is small; we are talking about one or two MB. This advice comes from
> the era of *large* rule sets like blacklist.cf.
> 
> Bayes has little effect on memory. If you use it as a flat BerkeleyDB file,
> the only thing it might "take" is OS disk cache. And if it's on flash,
> access should be very fast. I don't see anything preventing autolearning.
> 
> I've run full SA, ClamAV, MySQL, named, websites etc on 256MB. You do need
> swap for it. If you have a filesystem, then you can create a swap file on
> it.
> 
> Of course you cannot expect it to perform miracles. You can have one or two
> concurrent scans at maximum.

I left out the real info: a typical rule-heavy SA process takes ~50-70 MB of
memory. So you can even run a few of them, depending on what else is
running.
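The ~50-70 MB figure is easy to check on your own system. On Linux, something like this shows the resident set size of the spamd children and the total across them (the process name "spamd" is an assumption; adjust for your setup):

```shell
# Per-process resident memory (RSS, in KB) for all spamd processes
ps -o pid,rss,args -C spamd

# Sum the RSS across all spamd children, printed in KB
ps -o rss= -C spamd | awk '{sum+=$1} END {print sum " KB total"}'
```

Note that RSS counts shared pages (e.g. the perl interpreter) once per process, so the real combined footprint is somewhat lower than the sum suggests.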

If you need raw performance, then use only your MTA with normal RBL checks
etc., ClamAV + 3rd-party rules, and maybe some milters for URI checking and
such.
