I think I am going to do some tests with 5 hosts and 5 tests in parallel.
50 tests per host is a bit too much, and I don't think that many tests can
run in parallel.
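
If I understand the options correctly, that should map to something like the
following in openvassd.conf or the scan config (a sketch only; the exact
option names may differ between versions, so please double-check):

  # number of hosts scanned in parallel
  max_hosts  = 5
  # number of NVTs (NASL scripts) run in parallel per host
  max_checks = 5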

Thanks,
Rene


On 08.07.2014 at 01:40, Thomas Reinke <[email protected]> wrote:

> Something to consider:
> 
> The number of NASL processes running (if we remove basic control
> tasks from the picture) is the number of hosts being scanned at
> a time multiplied by the number of scripts being executed per
> host.
> 
> How you configure the two numbers above will result in drastically
> different memory footprint requirements:
> 
> E.g.  Let's say that we can support 100 simultaneous NASl scripts
>      on the hardware in question (CPU mainly).
> 
> If we say we can run 2 hosts, 50 scripts simultaneously each,
> 
>   a) Memory usage on the scanner is low;
>   b) Dependencies on scripts may limit effective parallel
>      processing;
>   c) You're throwing a lot of traffic at a given IP, and may
>      not get ideal response if the scanned server gets too
>      loaded;
> 
> 
> If we go the opposite route and scan 50 hosts, 2 NASL scripts
> at a time max (again, at most 100 simultaneous scripts), then
> 
>   a) Memory usage on the scanner skyrockets
>   b) Very high effective use of parallel processing
>   c) Long scan times per IP, because despite effective parallel
>      processing, it takes a long time to get through all scripts
>      for a given IP.
>   d) Low bandwidth being thrown at a given target at any time.
> 
> So, how you configure your scanning environment is highly dependent
> on what you need to accomplish.
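> 
> To make that concrete, the two extremes above would roughly correspond
> to settings like these in openvassd.conf (or the matching scan config
> options; treat this as a sketch, the exact names can vary by version):
> 
>    # extreme 1: few hosts, many scripts per host
>    max_hosts  = 2
>    max_checks = 50
> 
>    # extreme 2: many hosts, few scripts per host
>    max_hosts  = 50
>    max_checks = 2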
> 
> I would suggest playing around with the settings mentioned, taking
> it to a couple of extremes to see where you can take things. For
> reference, we've still got some scanners working with only 1 Gig
> of RAM (NOT something I would recommend doing if you are setting up
> a new install!).
> 
> Hope that helps,
> 
> Thomas
> 
> 
> On 07/07/14 08:24 AM, Geoff Galitz wrote:
>> 
>> 
>> A lot of this really depends on which and how many plugins you use as well
>> as the size of your target object.  You'll potentially see a lot of forked
>> processes.
>> 
>> FWIW, I have a 4-CPU, 16 GB RAM VM to scan /23-sized networks (approx. 500
>> hosts) with virtually all plugins enabled and configured.
>> 
>> -G
>> 
>> 
>> 
>>> In my testing of OpenVAS so far I didn't need more than 2 GB. But a few days
>>> ago Linux killed OpenVAS because it was eating too much memory...
>>> I think I will take a quad-core with 4 GB of RAM.
>>> 
>>> 
>>> On 07.07.2014 at 13:31, Reindl Harald <[email protected]> wrote:
>>> 
>>>> 
>>>> 
>>>>> On 07.07.2014 at 13:26, Eero Volotinen wrote:
>>>>> Well, we are currently running two physical scanner servers and one
>>>>> very large Amazon instance for our PCI scanners...
>>>>> 
>>>>> Usually the servers are running a quad-core processor and 32 GB to
>>>>> 128 GB of physical memory.
>>>>> So, it's based on my experience with real production environments.
>>>> 
>>>> No, it's not.
>>>> 
>>>> Experience would be "we tried it with less RAM but we had to
>>>> upgrade to 32 GB because it otherwise did not work", not
>>>> "you need that much RAM because that's what I have".
>>>> 
>>>> The most RAM is needed for the feed-sync, and with 3 GB you are
>>>> normally fine.
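>>>> 
>>>> (The feed-sync being the NVT/SCAP/CERT data updates, i.e. something
>>>> like the following commands, depending on your version:
>>>> 
>>>>   openvas-nvt-sync
>>>>   openvas-scapdata-sync
>>>>   openvas-certdata-sync
>>>> )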
>>> 
>>> 
>>> 
>>> Yes, you are right, most of the time it will be a default scan config. It's
>>> okay if it's not parallel, but it should not run just one scan a night.
>>> 
>>>> 
>>>> Do you need to run a lot of scans in parallel, or can a scan run lazily
>>>> all night?
>>>> Do you want to brute-force / enumerate logins, or do you "just" run
>>>> discovery scans?
>>>> Etc. etc…
>>> 
>>> Thanks for your fast responses,
>>> Rene
>> 
>> 
>> ------------------------------
>> Geoff Galitz
>> http://www.galitz.org
>> 
> 

