Hi, when I set up many scheduled tasks (50+), one CPU core spikes to 100%
and stays pinned until the query completes, which can take 20+ seconds.
During that time I cannot perform any additional tasks through the GSA
interface, as it tries to use the same CPU core and waits for the previous
query to complete. Is there a way to take advantage of the parallel query
feature in PostgreSQL 9.6+ so that the queries can use multiple cores? Or
can I set up any indexes that will speed up this query? The function I'm
talking about is init_task_schedule_iterator (iterator_t* iterator) in the file
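On the parallel query side, as far as I understand it PostgreSQL 9.6 only considers parallel plans when a few settings allow it, and the correlated EXISTS subplan in this query may still be judged parallel-restricted. The settings below are a sketch of what I believe has to be enabled; the values are examples, not tuned recommendations:

```sql
-- Allow the planner to use parallel workers at all (0 disables parallelism;
-- this is the 9.6 default).
SET max_parallel_workers_per_gather = 4;

-- Lower the cost thresholds so the planner is more willing to pick a
-- parallel plan (defaults are 1000 and 0.1 respectively).
SET parallel_setup_cost = 100;
SET parallel_tuple_cost = 0.01;

-- Tables below this size are never scanned in parallel (default 8MB).
-- Note: renamed to min_parallel_table_scan_size in PostgreSQL 10.
SET min_parallel_relation_size = '1MB';
```

These can be tried per-session with SET first, then made permanent in postgresql.conf if they help.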

See output below for more info:

select * from pg_stat_activity;

  datid            = 16392
  datname          = gvmd
  pid              = 31441
  usesysid         = 16388
  usename          = root
  application_name = gvmd
  client_addr      = NULL
  client_hostname  = NULL
  client_port      = -1
  backend_start    = 2018-01-31 21:24:01.141
  xact_start       = 2018-01-31 22:21:11.751
  query_start      = 2018-01-31 22:21:11.756
  state_change     = 2018-01-31 22:21:11.756
  wait_event_type  = NULL
  wait_event       = NULL
  state            = active
  backend_xid      = NULL
  backend_xmin     = 928455
  query            = (the SELECT below)
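For what it's worth, running the statement under EXPLAIN should show whether the planner even considers a parallel plan (a Gather node in the output) and which part of the plan the 20+ seconds go to:

```sql
-- Prepend this to the full SELECT shown below. A Gather node in the output
-- means a parallel plan was chosen; its absence means the planner ruled one
-- out (e.g. because a subplan is parallel-restricted).
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;
```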

SELECT tasks.id, tasks.uuid, schedules.id, tasks.schedule_next_time,
       schedules.period, schedules.period_months, schedules.byday,
       schedules.first_time, schedules.duration, users.uuid, users.name,
       schedules.timezone, schedules.initial_offset
FROM tasks, schedules, users
WHERE tasks.schedule = schedules.id
  AND tasks.hidden = 0
  AND ((tasks.owner = users.id)
       OR EXISTS (SELECT * FROM permissions
                  WHERE name = 'Super'
                    AND ((resource = 0)
                         OR ((resource_type = 'user')
                             AND (resource = tasks.owner))
                         OR ((resource_type = 'role')
                             AND (resource IN (SELECT DISTINCT role
                                               FROM role_users
                                               WHERE "user" = tasks.owner)))
                         OR ((resource_type = 'group')
                             AND (resource IN (SELECT DISTINCT "group"
                                               FROM
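On the index side, here is a sketch of what might help, based only on the columns visible in the query above; I have not checked which of these gvmd already creates, and the index names are my own:

```sql
-- Join on tasks.schedule combined with the hidden = 0 filter.
CREATE INDEX tasks_schedule_hidden_idx ON tasks (schedule) WHERE hidden = 0;

-- The EXISTS subquery probes permissions by name, resource_type, resource.
CREATE INDEX permissions_super_idx
    ON permissions (name, resource_type, resource);

-- The role IN-subquery scans role_users by "user".
CREATE INDEX role_users_user_idx ON role_users ("user");
```

The 'group' subquery is truncated in my paste, but whatever table it reads from would presumably want a similar index on its "user" column.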

Thanks, TN
Openvas-devel mailing list
