Hani,

- Can you get a backtrace for the stuck processes ? The 4 nvts ones + the 
"testing xxxx" parent.
- Could it be related to your Redis setup ? You can monitor it with: 
redis-cli -s /tmp/redis.sock MONITOR

Strangely, it floods with:

"""
...
1495622782.133505 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.94181:category"
1495622782.133559 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.869756:category"
1495622782.133601 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.870215:category"
1495622782.133667 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.902298:category"
1495622782.133728 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.869970:category"
1495622782.133813 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.865412:category"
1495622782.133859 [1 unix:/var/run/redis/redis.sock] "SRANDMEMBER" 
"oid:1.3.6.1.4.1.25623.1.0.801558:category"
...
"""

Strace on openvassd shows:

"""
...
read(5, "$1\r\n3\r\n", 16384)           = 7
write(5, "*2\r\n$11\r\nSRANDMEMBER\r\n$40\r\noid:1"..., 69) = 69
read(5, "$1\r\n3\r\n", 16384)           = 7
write(5, "*2\r\n$11\r\nSRANDMEMBER\r\n$41\r\noid:1"..., 70) = 70
read(5, "$1\r\n3\r\n", 16384)           = 7
write(5, "*2\r\n$11\r\nSRANDMEMBER\r\n$41\r\noid:1"..., 70) = 70
read(5, "$1\r\n3\r\n", 16384)           = 7
write(5, "*2\r\n$11\r\nSRANDMEMBER\r\n$40\r\noid:1"..., 69) = 69
...
"""

Both redis and openvassd are consuming all CPU resources together.

- If you're able to build from source, do you see this issue with current 
openvas-9 branch, and with trunk branch too ?

I built from source; I am currently running the following packages from 
http://www.openvas.org/install-source.html on Ubuntu 16.04 LTS:

openvas-libraries-9.0.1.tar.gz
openvas-manager-7.0.1.tar.gz
openvas-scanner-5.1.1.tar.gz
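
For completeness, each tarball was built in roughly the usual cmake way; a sketch (the /opt/openvas9 prefix matches the plugin paths below, the rest of the steps are assumptions about my own notes):

"""
# Repeated for openvas-libraries, openvas-scanner and openvas-manager, in that order
tar xzf openvas-libraries-9.0.1.tar.gz
cd openvas-libraries-9.0.1
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/openvas9 ..
make && sudo make install
"""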

Before I get into more debugging (I am a bit short on time today to dive into this), 
perhaps this information already explains the problem?
It now gets stuck on just 2 NVTs (ssh_authorization.nasl and netbios_name_get.nasl), 
so the issue does not appear to be the NASL scripts themselves but rather something 
in the openvassd/Redis part of the scanner.
I run 4 scanners with the same setup and versions, and some jobs did complete 
without any issue.
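
Regarding the backtraces you asked for: when I get back to this, my plan is something along these lines for each stuck PID (a sketch, assuming gdb and debug symbols are available on the scanner host):

"""
# Grab a backtrace from one stuck "openvassd: testing ..." child without killing it
gdb -p <pid> -batch -ex 'thread apply all bt' -ex 'detach'
"""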

When I stop this scan, my openvassd.messages log says:

"""
[Wed May 24 10:53:23 2017][19411] Stopping the whole test (requested by client)
[Wed May 24 10:53:23 2017][19411] Stopping host XXX scan
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2014/gb_windows_services_stop.nasl (1.3.6.1.4.1.25623.1.0.804787)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
unknown_services.nasl (1.3.6.1.4.1.25623.1.0.11154)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
find_service_nmap.nasl (1.3.6.1.4.1.25623.1.0.66286)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
gb_nist_win_oval_sys_char_generator.nasl (1.3.6.1.4.1.25623.1.0.802042)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
host_scan_end.nasl (1.3.6.1.4.1.25623.1.0.103739)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
gb_tls_version.nasl (1.3.6.1.4.1.25623.1.0.103823)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/GSHB_M4_007.nasl (1.3.6.1.4.1.25623.1.0.94177)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL13/GSHB_M4_007.nasl (1.3.6.1.4.1.25623.1.0.94108)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
Policy/gb_policy_tls_passed.nasl (1.3.6.1.4.1.25623.1.0.105781)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL11/GSHB_M4_007.nasl (1.3.6.1.4.1.25623.1.0.894007)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching kb_2_sc.nasl 
(1.3.6.1.4.1.25623.1.0.103998)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2011/system_characteristics.nasl (1.3.6.1.4.1.25623.1.0.103999)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2016/nvt_debugging.nasl (1.3.6.1.4.1.25623.1.0.111091)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL10/GSHB_M4_007.nasl (1.3.6.1.4.1.25623.1.0.94007)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL12/GSHB_M4_007.nasl (1.3.6.1.4.1.25623.1.0.94013)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL12/GSHB-12.nasl (1.3.6.1.4.1.25623.1.0.94000)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL10/GSHB-10.nasl (1.3.6.1.4.1.25623.1.0.95000)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL11/GSHB-11.nasl (1.3.6.1.4.1.25623.1.0.895000)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
cpe_inventory.nasl (1.3.6.1.4.1.25623.1.0.810002)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
pre2008/scan_info.nasl (1.3.6.1.4.1.25623.1.0.19506)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/GSHB.nasl (1.3.6.1.4.1.25623.1.0.94171)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2013/gb_os_eol.nasl (1.3.6.1.4.1.25623.1.0.103674)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
GSHB/EL13/GSHB-13.nasl (1.3.6.1.4.1.25623.1.0.94999)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
Policy/gb_policy_tls_violation.nasl (1.3.6.1.4.1.25623.1.0.105780)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
pre2008/check_ports.nasl (1.3.6.1.4.1.25623.1.0.10919)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2016/gb_default_ssh_credentials_report.nasl (1.3.6.1.4.1.25623.1.0.103239)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2017/gb_default_http_credentials_report.nasl (1.3.6.1.4.1.25623.1.0.103240)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2011/host_details.nasl (1.3.6.1.4.1.25623.1.0.103997)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2013/gb_host_scanned_ssh.nasl (1.3.6.1.4.1.25623.1.0.103625)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2013/gb_host_scanned_wmi.nasl (1.3.6.1.4.1.25623.1.0.96171)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
Policy/gb_policy_cpe.nasl (1.3.6.1.4.1.25623.1.0.103962)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
2009/cpe_policy.nasl (1.3.6.1.4.1.25623.1.0.100353)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
Policy/gb_policy_cpe_violation.nasl (1.3.6.1.4.1.25623.1.0.103964)
[Wed May 24 10:53:31 2017][19500] Stopped scan wrap-up: Launching 
Policy/gb_policy_cpe_ok.nasl (1.3.6.1.4.1.25623.1.0.103963)
[Wed May 24 10:53:31 2017][19411] Test complete
[Wed May 24 10:53:31 2017][19411] Total time to scan all hosts : 702 seconds
"""

It looks like it wanted to launch more scripts but never could.
Perhaps I have to rebuild the Redis DB; any and all tips are of course welcome.
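
If rebuilding is indeed the way to go, my rough plan would be to stop the scanner, flush the Redis databases on its socket and let openvassd repopulate the NVT cache on the next start (a sketch; the restart path and the assumption that openvassd rebuilds the cache by itself are mine):

"""
# Stop openvassd, wipe the Redis KB/NVT cache and let it rebuild on the next start
sudo pkill openvassd                               # stop the scanner (how it is started may differ per setup)
redis-cli -s /var/run/redis/redis.sock FLUSHALL    # clears every Redis DB on this socket
sudo /opt/openvas9/sbin/openvassd                  # assumed install path under the /opt/openvas9 prefix
"""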


Thijs Stuurman
Security Operations Center | KPN Internedservices
[email protected] | [email protected]
T: +31(0)299476185 | M: +31(0)624366778
PGP Key-ID: 0x16ADC048 (https://pgp.surfnet.nl/)
Fingerprint: 2EDB 9B42 D6E8 7D4B 6E02 8BE5 6D46 8007 16AD C048

W: https://www.internedservices.nl | L: http://nl.linkedin.com/in/thijsstuurman




-----Original message-----
From: Hani Benhabiles [mailto:[email protected]] 
Sent: Wednesday, May 24, 2017 12:10
To: Thijs Stuurman <[email protected]>
CC: [email protected]
Subject: Re: [Openvas-discuss] OpenVAS9 hanging nasl tasks

On 2017-05-23 12:05, Thijs Stuurman wrote:
> OpenVAS discuss list,
> 
> I ran a few scans with my new OpenVAS9 setup and all worked well.
> Now I am starting a lot of scans and noticing most of 'm are hanging 
> on  the exact same 4 tests:
> 
> |   \_ openvassd: testing xxx
> (/opt/openvas9/var/lib/openvas/plugins/ssh_authorization.nasl)
> |   \_ openvassd: testing xxx
> (/opt/openvas9/var/lib/openvas/plugins/netbios_name_get.nasl)
> |   \_ openvassd: testing xxx
> (/opt/openvas9/var/lib/openvas/plugins/pre2008/tcp_port_zero.nasl)
> |   \_ openvassd: testing xxx
> (/opt/openvas9/var/lib/openvas/plugins/2012/secpod_database_open_access_vuln.nasl)
> 
> Is anyone else experiencing this? Is this a known issue? I updated the 
> NVT's etc' yesterday.
> The processes run for an hour+.
> Killing defuncts the process. I am unable to continue in any way 
> except kill and abort the whole scan.
> 
> 

Hi Thijs,

- Can you get a backtrace for the stuck processes ? The 4 nvts ones + the 
"testing xxxx" parent.
- Could it be related to your Redis setup ? You can monitor it with: 
redis-cli -s /tmp/redis.sock MONITOR
- If you're able to build from source, do you see this issue with current 
openvas-9 branch, and with trunk branch too ?

Best regards,

Hani.
_______________________________________________
Openvas-discuss mailing list
[email protected]
https://lists.wald.intevation.org/cgi-bin/mailman/listinfo/openvas-discuss
