If you run 'pstree' and look at the process relationships, you'll get a clearer picture. What you're seeing is the Scan-IP task's memory consumption steadily going up, because it is consuming test results from each child script it launches, one after another.
So, as its memory steadily grows, it _appears_ as if the amount of memory for each child it forks to run the next script has grown as well. But in reality that's not quite true, because of how the kernel handles forked processes. While "ps" shows the memory theoretically used by each process, the majority of that memory is still shared (a unique copy is made for a process only when that process _changes_ the contents of the memory - "copy-on-write"). The processes running the actual scripts (at the bottom of the process tree hierarchy) are not where the memory challenges with OpenVAS lie...

Thomas

On 29/11/12 03:27 AM, Jan-Christopher Brand wrote:
> Hi,
> thanks for the long answer :-)
>
> But I'm not sure we are talking about the same thing, so let me describe
> how I found the 3.2 MB:
>
> The scanner is idling and needs, for example, 70 MB.
> Then I start a task with one script and one IP, and the memory increases to
> 73.2 MB (I think I have three processes then, but one is stopped when the
> task is finished). So because of the fork I have two processes at 73.2 MB,
> but as I said: one is stopped after the test.
> The memory of one process always stays the same. But when I start one task
> again, the memory of two of the three processes increases to 76.4 MB. Then
> again one process disappears, and one keeps the constant memory as before.
> The next test is started (after the previous one has stopped), and the two
> processes need 79.6 MB; again, after the test, one of them is killed, the
> other doesn't free the 3.2 MB, and the third process again uses constant
> memory.
> And so on...
>
> So there's always the one with constant memory.
> Another one is forked for each task started and needs 3.2 MB more memory
> every time (both do, but one is killed after each task).
> The problem is not that it needs too much memory while running the task,
> but that 3.2 MB are not freed after the test is stopped, and with each new
> task (after the previous one has finished) another 3.2 MB are added.
>
> I hope this was understandable ;-) I don't know why it's so complicated to
> describe ;-)
>
>
> Best regards,
>
> Jan-Christopher Brand
>
>
> -----Original Message-----
> From: Openvas-devel [mailto:openvas-devel-boun...@wald.intevation.org]
> On behalf of Thomas Reinke
> Sent: Wednesday, 28 November 2012 20:13
> To: openvas-devel@wald.intevation.org
> Subject: Re: [Openvas-devel] Maybe Memory-Leak?
>
> This is a known issue.
>
> The basic memory consumption model is as follows:
>
> Scan Daemon Parent - memory consumed to read in the pertinent script info.
> The amount of memory consumed is directly proportional to the # of scripts
> you have.
>
> Scan Task (child of Scan Daemon) - one forked for each connection opened to
> service a client request. Memory from the parent would ideally be in
> "copy-on-write" mode, but it appears that all of the parent's memory is
> copied, probably due to memory structure changes as the parent gets ready to
> begin running a scan (building deps? setting other flags not previously set?).
>
> Scan-IP (child of Scan Task) - one forked for each IP address that is to be
> scanned.
>
> Script Execution (child of Scan-IP) - one forked for each new script.
> Again, this has copy-on-write memory, so while 'ps' will show high memory
> usage, an overall system view of memory consumption shows only a marginal
> increase for most scripts.
>
> If we take a typical scan request of, say, a class C network, your optimum
> platform memory consumption will be in a model where a client connects to
> the scanner and passes all IP addresses to be scanned in a single request.
> If we were to allow all IP addresses to be scanned simultaneously, we would
> have a total of 1+1+ConcIP+256*ConScript processes running, where ConcIP is
> the concurrent # of IPs being tested, and ConScript is the concurrent # of
> scripts executed at any one time against a given IP.
>
> The worst scenario is to have a separate client connection to the scanner
> for each IP to be tested, in which case we would have
> 1+ConcIP+ConcIP+256*ConScript processes. Since the memory consumption comes
> primarily from the first three terms, it becomes important to put multiple
> targets into a single request (i.e. minimize client-to-scanner-daemon
> connections) so as to minimize memory usage.
>
> Thomas
>
>
> On 28/11/12 10:09 AM, Jan-Christopher Brand wrote:
>> Hi,
>>
>> I saw that with each task started, the OpenVAS scanner needs about 7 MB
>> more memory. After I updated to the newest version from trunk -
>> because I'd seen something about fixed memory leaks - it got better,
>> but still each start of a task adds 3.2 MB of memory usage to the scanner.
>> Is this a known behavior? And will this be fixed?
>> I'm starting thousands of tasks one after another, so the 3.2 MB is
>> quite a lot ;-)
>>
>>
>> Best regards,
>>
>> Jan-Christopher Brand
>>
>>
>> _______________________________________________
>> Openvas-devel mailing list
>> Openvas-devel@wald.intevation.org
>> https://lists.wald.intevation.org/cgi-bin/mailman/listinfo/openvas-devel