On 7/13/2015 2:07 AM, Jan-Oliver Wagner wrote:
> On Montag, 6. Juli 2015, Ryan Schulze wrote:
>> Were the slaves running scans at the same time? We had problems here
>> with such a setup because the slaves were constantly reporting back
>> their results (during the scan) to the master, and the master was busy
>> fighting for file locks on the sqlite DB. It was impossible to do
>> anything on the master as long as 2 or more instances were running
>> scans at the same time. I even put the sqlite on a ramdisk to see if
>> it was just slow disk I/O causing the problem, but the problem still
>> persisted.
> I've not experienced serious trouble with more than two slaves, using
> sqlite. Actually the manager connects the slaves for status - not vice
> versa.

True, the manager requests the current progress/status from the slaves
and updates that info in the sqlite DB. But the manager spawns/forks a
child process per slave, and those child processes are the ones fighting
for a flock on the sqlite DB (causing the UI to "hang", because the
manager process dealing with the requests from GSA also has to wait for
a sqlite flock).
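The contention described above can be sketched with two plain sqlite3 connections standing in for two of the forked processes. This is a minimal illustration, not the real openvas-manager code: the "results" table, the slave names, and the progress values are all invented for the demo. While one connection holds SQLite's write lock, the other gets "database is locked" instead of completing its write.

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: one connection plays a forked child updating slave
# progress, the other plays the manager process answering GSA requests.
# Table name and values are made up for illustration.
path = os.path.join(tempfile.mkdtemp(), "tasks.db")

child = sqlite3.connect(path, isolation_level=None, timeout=0.1)
gsa = sqlite3.connect(path, isolation_level=None, timeout=0.1)
child.execute("CREATE TABLE results (slave TEXT, progress INTEGER)")

child.execute("BEGIN IMMEDIATE")  # take SQLite's database-wide write lock
child.execute("INSERT INTO results VALUES ('slave-1', 40)")

blocked = False
try:
    gsa.execute("INSERT INTO results VALUES ('slave-2', 75)")
except sqlite3.OperationalError:  # "database is locked" after the timeout
    blocked = True

child.execute("COMMIT")  # lock released; the same write now goes through
gsa.execute("INSERT INTO results VALUES ('slave-2', 75)")
print("second writer blocked while lock held:", blocked)
```

With a short `timeout`, the error surfaces quickly; the real manager simply waits, which is what shows up as the UI "hanging".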
In my case it may have had to do with the size of the scans. I have a
few thousand IPs to scan and split them up into /24 networks, trying to
keep the scan profiles as small as possible. But even then, a lot of
information has to be transferred with the status/progress updates.
>> On the bright side, I've had good experience with using PostgreSQL as
>> a backend for the central master :-)
> While sqlite does database-level locking, PostgreSQL does row-level
> locking and thus indeed should handle some situations better than
> sqlite. Are you using OpenVAS-8 with PG or trunk?
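The locking difference is easy to demonstrate on the SQLite side. In the sketch below (table names invented, nothing taken from the real schema), a pending write to one table blocks a write to a completely unrelated table in the same database file, because SQLite's write lock covers the whole file; under PostgreSQL's row-level locking, two such unrelated writes would proceed concurrently.

```python
import os
import sqlite3
import tempfile

# Hypothetical illustration of database-level locking: a write transaction
# touching "reports" blocks a writer on "tasks", even though the two
# tables are unrelated. Both table names are invented for this demo.
path = os.path.join(tempfile.mkdtemp(), "locking.db")

a = sqlite3.connect(path, isolation_level=None, timeout=0.1)
b = sqlite3.connect(path, isolation_level=None, timeout=0.1)
a.execute("CREATE TABLE reports (id INTEGER)")
a.execute("CREATE TABLE tasks (id INTEGER)")

a.execute("BEGIN IMMEDIATE")  # reserves the write lock for the whole file
a.execute("INSERT INTO reports VALUES (1)")

cross_table_blocked = False
try:
    b.execute("INSERT INTO tasks VALUES (1)")  # different table, same lock
except sqlite3.OperationalError:
    cross_table_blocked = True
a.execute("COMMIT")
print("write to an unrelated table blocked:", cross_table_blocked)
```

This is exactly the pattern in the master/slave setup: every child process writes progress for a different scan, yet all of them serialize on the one file-wide lock.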
Right now I'm using the OpenVAS 8 branch, but I've been playing around
with trunk on my dev/test servers. I'm really liking PG as a backend so
far.
_______________________________________________
Openvas-discuss mailing list
[email protected]
https://lists.wald.intevation.org/cgi-bin/mailman/listinfo/openvas-discuss
