We're running Jenkins 2.138.1, with agents provisioned via the AWS EC2 plugin 
and Jenkinsfile pipeline jobs executing on those agents.
The master machine has 2 vCPUs and 8 GB of memory, and we configured the 
Jenkins JVM with a 6 GB heap.
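In case it matters, a quick script-console sketch like this should show whether 
the 6 GB heap limit actually took effect (just a sketch; the exact -Xmx value 
depends on how your packaging passes the JVM options):

    // Script console sketch: print the JVM flags the master actually started
    // with (the 6 GB heap should appear as -Xmx6g or similar).
    import java.lang.management.ManagementFactory

    ManagementFactory.runtimeMXBean.inputArguments
        .findAll { it.startsWith('-X') }
        .each { println it }
    return null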

Our Jenkins server gets into a strange state once or twice a day where it 
stops processing incoming webhooks. We can see in the Jenkins logs that the 
webhooks are being received, but they no longer trigger the builds they 
should.

It looks as if threads are stuck or waiting on something that never completes 
until we restart the master via the UI.
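Next time it happens I'm planning to grab a thread dump to see what those 
threads are actually waiting on. Something along these lines from the script 
console is what I had in mind (a rough sketch; $JENKINS_URL/threadDump or 
jstack against the master PID should show the same information):

    // Rough sketch: print threads that are blocked, plus the top of their
    // stacks, to see what lock or call they are stuck behind.
    // (WAITING threads could be included too, but most of those are idle
    // pool threads and just add noise.)
    Thread.getAllStackTraces().each { thread, stack ->
        if (thread.state == Thread.State.BLOCKED) {
            println "=== ${thread.name} (${thread.state}) ==="
            stack.take(10).each { frame -> println "    at ${frame}" }
        }
    }
    return null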

Another possible symptom of the master being "stuck": in the UI's list of 
running jobs, builds that have already completed still show as running.
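For that case I was thinking of comparing the UI against what the executors 
themselves report. A script-console sketch along these lines lists what each 
executor (including the one-off/flyweight executors that host Pipeline jobs) 
believes it is still running:

    // Sketch: list what every executor thinks it is currently running, to
    // compare against the builds the UI claims are still in progress.
    import jenkins.model.Jenkins

    Jenkins.instance.computers.each { computer ->
        (computer.executors + computer.oneOffExecutors).each { executor ->
            if (executor.busy) {
                println "${computer.name ?: 'master'} -> " +
                        "${executor.currentExecutable} " +
                        "(running ${executor.elapsedTime.intdiv(1000)}s)"
            }
        }
    }
    return null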

Does anyone have any advice that could be helpful?

I'm wondering whether we can detect whatever is holding jobs back and kill it 
manually.
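On the "kill it manually" part: is something like this from the script console 
a reasonable approach, or is there a better-supported way? (The job name and 
build number below are placeholders for whatever is wedged.)

    // Sketch: ask the executor of a wedged build to abort it.
    // 'some/folder/job-name' and 123 are placeholders for the stuck build.
    import jenkins.model.Jenkins
    import hudson.model.Result

    def job = Jenkins.instance.getItemByFullName('some/folder/job-name')
    def build = job?.getBuildByNumber(123)
    if (build?.isBuilding()) {
        build.executor?.interrupt(Result.ABORTED)
        // For Pipeline builds that ignore the interrupt, I believe
        // build.doKill() is the harder hammer.
    }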

Or perhaps we simply need a larger machine for the master to handle our load? 
We're monitoring the machine and memory usage seems to stay under 6 GB.
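To back that observation up, I was also going to watch heap usage and GC time 
from the script console, roughly like this (another sketch), in case long GC 
pauses are what make the master appear to hang:

    // Sketch: current heap usage plus cumulative GC counts/times.
    import java.lang.management.ManagementFactory

    def heap = ManagementFactory.memoryMXBean.heapMemoryUsage
    println "heap: ${heap.used >> 20} MB used of ${heap.max >> 20} MB max"

    ManagementFactory.garbageCollectorMXBeans.each { gc ->
        println "${gc.name}: ${gc.collectionCount} collections, " +
                "${gc.collectionTime} ms total"
    }
    return null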
