Ok...I have some additional information to add now - it would seem that a raw start-up of Nessus was consuming approximately 20M, while the same start-up of OpenVAS consumes >100M. It also seems that the child processes are constrained by the maximum size of that start-up, i.e. we never see the memory of an actual script-processing task exceed that figure.
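(In case anyone wants to watch this on their own box, something along these lines is enough to track per-process resident memory - a rough sketch only, assuming a Linux /proc filesystem and that the scanner processes show up under the name "openvassd", so adjust the name as needed:

    #!/usr/bin/env python
    # Rough sketch: sum resident memory (VmRSS) of scanner processes via /proc.
    # Assumes Linux and that the scanner processes are named "openvassd".
    import os

    def name_and_rss(pid):
        name, rss_kb = None, 0
        try:
            f = open("/proc/%s/status" % pid)
            for line in f:
                if line.startswith("Name:"):
                    name = line.split()[1]
                elif line.startswith("VmRSS:"):
                    rss_kb = int(line.split()[1])  # reported in kB
            f.close()
        except IOError:
            pass  # process may have exited while we were looking
        return name, rss_kb

    total = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        name, rss_kb = name_and_rss(pid)
        if name == "openvassd":
            total += rss_kb
            print("  pid %s: %d kB resident" % (pid, rss_kb))
    print("total: %d kB resident across all openvassd processes" % total)

Plain ps would do the same job; this just makes it easy to log the totals over time.)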
The factor of 5 (approximately 4K of memory consumed per script in the test suite) appears to be what is causing the difficulties. From what we can tell, each client connecting to the scanner eventually grows from a small memory footprint up to the maximum of its parent. That in turn translates to each child task handling the NASL script processing eventually having a larger and larger memory footprint. It means that, to avoid swapping, a 1 Gig platform is limited to approximately 10 concurrent tasks (roughly 1 Gig divided by the ~100M ceiling per child). That's a pretty significant hit. Is there any way of knocking that memory consumption back down? I'm not sure that assuming folks have 4 Gig+ in their scanning platform is a reasonable assumption.

Thomas

Thomas Reinke wrote:
> Hi all,
>
> We're using the OpenVAS scanning daemon 3.1 only (we have our own
> customized client for controlling scans), and we've noticed
> that with the change from Nessus to OpenVAS, the memory footprint
> has ballooned in a huge way.
>
> Currently, the typical amount of memory being consumed by
> processes is averaging around 40 Meg per process, with many
> tasks running over 100 Meg.
>
> We've already throttled our scan utilization to one third
> of what it used to be (on a 1 Gig system that translates to
> a limit of 18 IPs concurrently tested, one per client, with
> 2 simultaneous tests per client, meaning 36 active tests),
> and with the memory consumption we're seeing, it's still
> triggering swap conditions. There are a few other signs
> pointing to a disturbing amount of memory consumption
> in some cases (cases where an additional 1.5 Gig of swap
> space was exhausted, again running only 18 concurrent clients
> max).
>
> I realize the daemon change is a number of releases up,
> but the memory footprint seems like a rather excessive
> change given that it's the same test suite being run.
>
> Has anyone else noticed this problem? (Or is it perhaps not a
> problem at all, but a known result of other changes?) Or are
> we looking at a classic memory leak?
>
> Right now, it would appear that to get things back on track
> and avoid swapping, we'd have to drop our load to about
> 25% of what could be handled previously per platform, which is
> a bit of a pain, even if we try to upgrade hardware - we'd
> have to go from 1 Gig to 4 Gig of memory per scanning platform
> just to break even.
>
> Thomas

_______________________________________________
Openvas-devel mailing list
Openvas-devel@wald.intevation.org
http://lists.wald.intevation.org/mailman/listinfo/openvas-devel