[JIRA] (JENKINS-36920) Docker: Standard slaves also counted into container cap

2016-07-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl created an issue

Jenkins / JENKINS-36920
Docker: Standard slaves also counted into container cap

Issue Type: Bug
Assignee: magnayn
Components: docker-plugin
Created: 2016/Jul/25 3:16 PM
Environment: SUSE Linux Enterprise 12 SP1, jdk1.8.0_60, Jenkins ver. 1.635, apache tomcat 7.0.65, docker-plugin 0.16.0
Priority: Minor
Reporter: Nico Schmoigl
Hi folks,

Set up a Jenkins with the docker-plugin and add a classical (non-Docker) slave to the machine. Create a Docker cloud configuration, setting the Container Cap to 1 (i.e. only a single container may run). Create a template and add a label to it. Create two small jobs (simply doing nothing), assign them to that label, and start both jobs.

Observed behaviour: no Docker slave is started; the jobs stay queued.
Expected behaviour: at least one Docker slave is started, processing the first of the two jobs.

Adjust the Container Cap to 2. Observed behaviour: a Docker slave is started.

If you repeat the entire setup above, leaving out the classical slave, you will notice that the Docker slave is started even with a Container Cap of 1. (Similar to JENKINS-26388.)
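
While the jobs are stuck in that state, the build queue itself reports why Jenkins is holding each item back. A minimal script-console sketch using only core Jenkins API (nothing docker-plugin-specific is assumed):

import jenkins.model.Jenkins

// Print every queued item together with the reason Jenkins gives for not running it yet.
Jenkins.instance.queue.items.each { item ->
    println "${item.task.fullDisplayName}: ${item.why}"
}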

[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36919

Re: Docker Instance Capacity counted across templates
It might be helpful to set the strategy to "Docker Cloud Retention Strategy" with an idle timeout of 1 min. That makes the behaviour easier to observe (for example, the two different types of templates can be told apart by setting the "# of executors" to 2).

[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl created an issue

Jenkins / JENKINS-36919
Docker Instance Capacity counted across templates

Issue Type: Bug
Assignee: magnayn
Components: docker-plugin
Created: 2016/Jul/25 3:06 PM
Environment: SUSE Linux Enterprise 12 SP1, jdk1.8.0_60, Jenkins ver. 1.635, apache tomcat 7.0.65, docker-plugin 0.16.0
Priority: Major
Reporter: Nico Schmoigl
Hi folks, after some detailed investigation I am quite sure that we have an issue with how the docker-plugin counts running slaves when multiple templates are used. Here is how you may reproduce it:

Set up a Jenkins with the docker-plugin.
Configure a cloud of type 'docker' and set the capacity limit to a high value (let's say 50 or so).
Configure two templates: A and B.
Set template A to accept label A.
Set template B to accept label B.
Use the same image for both (some simple image).
Set the instance limit of template A to 5.
Set the instance limit of template B to 2.
Create 10 jobs assigned to label A, implementation "sleep 60".
Create 5 jobs assigned to label B, implementation "sleep 60".
Start all the jobs of label A at once (you may run a small Groovy script for that).

[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36919

Re: Docker Instance Capacity counted across templates
Setting up a second Jenkins server and associating the same Docker host with it reveals another interesting aspect of this issue. Scenario:

Configure the second instance to use the same image as the first Jenkins server (label A). Set the Instance Capacity to 5.
On the first Jenkins, start the jobs with label A, creating 5 instances of the image on the Docker server (instance capacity set accordingly).
On the second Jenkins, start the dummy jobs for label A. Observe that no slave is started!
Wait until the jobs on the first server are finished; observe that the second Jenkins server may now start instances.

This means that the instance capacity is not just counted "across templates", but is bound only to the image name – even across Jenkins servers. (Apparently the Docker server is queried for how many instances of a certain image are running, and the result is not cross-checked against the list of instances the local Jenkins server had started itself.)
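
One quick way to see the mismatch is to compare what the Docker host reports with the slaves this particular Jenkins actually tracks. A script-console sketch of the Jenkins-side count; the slave class name is an assumption derived from the plugin package visible in the log messages (matched as a string so nothing from the plugin has to be on the script classpath):

import jenkins.model.Jenkins

// Count only the Docker slaves that *this* Jenkins instance itself created and still tracks.
def dockerSlaves = Jenkins.instance.nodes.findAll {
    it.class.name == 'com.nirima.jenkins.plugins.docker.DockerSlave'
}
println "Docker slaves known to this Jenkins: ${dockerSlaves.size()}"
dockerSlaves.each { println " - ${it.nodeName}" }

If this count is lower than the number of matching containers on the Docker host, the difference is exactly what the cap check is attributing to this Jenkins even though another client started those containers.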

[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-30 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36919

Re: Docker Instance Capacity counted across templates
Here's a variant of the same issue mentioned above, which should be simpler to reproduce:

Configure a Docker cloud host with a Container Cap high enough that it will not be hit (e.g. 10 or so).
Configure a template with an image that Jenkins can start, instance limit 1, and assign a label to it (for example "docker").
Create a job with some dummy build step and associate it with the label.
Build the job and observe that it builds fine.
Go to your command line and start a container of the same image. Keep the container running.
Build the same job again. Observe that no container is provisioned, but the following message can be read in the system log:

Jul 30, 2016 9:10:30 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud provision
Will provision 'jenkins-1', for label: 'docker', in cloud: 'dummy'
Jul 30, 2016 9:10:30 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud addProvisionedSlave
Not Provisioning 'jenkins-1'. Instance limit of '1' reached on server 'dummy'

(In this case the image in the configuration was "jenkins-1", the label was "docker" and the name of the cloud I configured was "dummy".)

Apparently, the already running container is considered to have been started by this Jenkins, which is not the case. Thus, every container running from a certain image counts towards the "instance limit". Why is this a problem in my view? If two Jenkins servers share the same Docker host, they may not use the same image, as execution is then no longer predictable. Thus, "image sharing" is not possible (we may now start to argue whether this is a bug or a feature – in my case the effect is devastating, as I will be using a Docker Swarm cluster with up to 10 Jenkins servers attached...).

[JIRA] (JENKINS-36920) Docker: Standard slaves also counted into container cap

2016-07-30 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36920

Re: Docker: Standard slaves also counted into container cap
The error description is imprecise - the correct statement is:

The Container Cap simply counts the number of instances running on the Docker host.
Other (non-Docker) slaves are not counted.

However, this still makes running a "shared Docker host" scenario complicated, as it needs to be ensured that the Container Cap is in sync on all attached Jenkins servers (otherwise any Jenkins installation whose Container Cap is lower than the number of containers currently running on the Docker host will not provision any new slave anymore). Thus, closing this ticket.
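
For keeping the caps in sync, a small script-console audit of the configured value on each master can help. A sketch only: the cloud class name and the containerCap property are assumptions taken from the log output and the UI label of docker-plugin 0.16, not verified against the source:

import jenkins.model.Jenkins

// Print the configured container cap of every Docker cloud on this master, so the
// values can be compared across all Jenkins servers sharing one Docker host.
Jenkins.instance.clouds
        .findAll { it.class.name == 'com.nirima.jenkins.plugins.docker.DockerCloud' }
        .each { cloud ->
            println "${cloud.name}: containerCap=${cloud.containerCap}"   // property name assumed
        }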

[JIRA] (JENKINS-36920) Docker: Standard slaves also counted into container cap

2016-07-30 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl closed an issue as Cannot Reproduce

Jenkins / JENKINS-36920
Docker: Standard slaves also counted into container cap

Change By: Nico Schmoigl
Status: Open → Closed
Assignee: magnayn → Nico Schmoigl
Resolution: Cannot Reproduce


[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36919

Re: Docker Instance Capacity counted across templates
Here's an attempt at a workaround:

Create a second tag (with another name) for the same image (i.e. two tags sharing the same image hash).
Use that "second name" for the image in the template of the docker-plugin configuration.

The instance capacity is then - at least - considered per template (but it could still be cannibalized by other clients running containers from the same images).
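
For illustration (the image names here are made up): the second name can be created with plain Docker tagging on the host, e.g. docker tag build-slave:latest build-slave-b:latest, and the second template then references build-slave-b:latest. Both tags point at the same image ID, so nothing needs to be rebuilt.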


[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl updated an issue

Jenkins / JENKINS-36919
Docker Instance Capacity counted across templates

Change By: Nico Schmoigl
Hi folks,

after some detailed investigations I am quite sure that we have an issue with counting the running slaves on the docker-plugin with multiple templates. Here's how you may reproduce it:

Setup a Jenkins with docker-plugin.
Configure a cloud of type 'docker', set the capacity limit to a high value (let's say 50 or so).
Configure two templates: A and B.
Set template A to accept label A.
Set template B to accept label B.
Use the same image for both (some simple image).
Set the instance limit of template A to 5.
Set the instance limit of template B to 2.
Create 10 jobs assigned to label A, implementation "sleep 60".
Create 5 jobs assigned to label B, implementation "sleep 60".
Start all the jobs of label A at once (you may run a small groovy script for that).
Wait 10s.
Start all the jobs of label B at once (you may run a small groovy script for that).

What you will observe is the following:
The jobs of label A will request new slaves up to the instance limit.
Jobs with label B will remain in the queue. No new slaves are created.
The jobs with label B will be executed once all the jobs of label A have completed (and the slaves are taken offline again).

If you look into the system log you will read the following messages:

Asked to provision 23 slave(s) for: labelB
Jul 25, 2016 4:46:52 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud provision
Will provision 'image', for label: 'labelB', in cloud: 'docker'
Jul 25, 2016 4:46:52 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud addProvisionedSlave
Not Provisioning 'labelB'. Instance limit of '2' reached on server 'docker'

Please note that at the time of that error message not a single slave of the second template was up and running (however, 5 of the first template were up).

Repeat the same activity with the instance limit of template B set to 6 and the same kind of load. You will observe that exactly one slave of template B will be created.

Alas: the instance limits of two different templates are not counted separately (which is what the configuration UI suggests).

Impact: although capacity is available on the Docker server, the different loads are not executed in parallel.

PS: I also tried to configure a second cloud provider (of type docker), thus separating the templates into two sections. However, this did not change the situation either: apparently, the "instances used" are counted per URL and not per template...

Thanks for checking!
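
The description mentions starting all jobs of a label at once with a small Groovy script. One way to do that from the script console could look like this (a sketch using only core Jenkins API; the label string and the use of freestyle jobs are assumptions for the reproduction setup):

import jenkins.model.Jenkins
import hudson.model.FreeStyleProject
import hudson.model.Cause

def label = 'labelA'   // assumed label name from the reproduction steps

// Schedule every freestyle job whose assigned label matches, all in one go.
Jenkins.instance.getAllItems(FreeStyleProject)
        .findAll { it.assignedLabelString == label }
        .each { job ->
            job.scheduleBuild2(0, new Cause.UserIdCause())
            println "Scheduled ${job.fullName}"
        }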

[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-30 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36919

Re: Docker Instance Capacity counted across templates
The root cause of it all seems to be in com.nirima.jenkins.plugins.docker.DockerCloud.countCurrentDockerSlaves().
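
To make the reported behaviour concrete: the counting described in this ticket behaves as if the cap check matched containers on the Docker host purely by image name, regardless of template or of which Jenkins started them. A hedged illustration of that counting idea in plain Groovy (not the plugin's actual code):

// Containers as the Docker host would report them; image names are hypothetical.
def containersOnHost = [
        [image: 'jenkins-1', id: 'aaa'],   // started by this Jenkins
        [image: 'jenkins-1', id: 'bbb'],   // started manually / by another Jenkins
        [image: 'other',     id: 'ccc']
]

// Image-name-based counting, as observed: every container from the template's image counts.
int countedForTemplate = containersOnHost.count { it.image == 'jenkins-1' }
assert countedForTemplate == 2   // the manually started container is counted as well

// A per-template (and per-master) count would instead have to match on something the
// plugin itself attaches at container creation time, e.g. a dedicated label or marker.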


[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-07-30 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-36919

Re: Docker Instance Capacity counted across templates
For a proposal on how to fix this issue, see also https://github.com/jenkinsci/docker-plugin/pull/409


[JIRA] (JENKINS-37526) Docker Container Cap blanking stops execution

2017-02-16 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-37526

Re: Docker Container Cap blanking stops execution
Yep, +1 and closed.


[JIRA] (JENKINS-37526) Docker Container Cap blanking stops execution

2017-02-16 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl closed an issue as Fixed

Jenkins / JENKINS-37526
Docker Container Cap blanking stops execution

Change By: Nico Schmoigl
Status: Open → Closed
Resolution: Fixed


[JIRA] (JENKINS-37526) Docker Container Cap blanking stops execution

2016-08-18 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-37526

Re: Docker Container Cap blanking stops execution
For a possible solution, see also https://github.com/jenkinsci/docker-plugin/pull/425


[JIRA] (JENKINS-36919) Docker Instance Capacity counted across templates

2016-08-30 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl edited a comment on JENKINS-36919

Re: Docker Instance Capacity counted across templates
Here's a variant of the same issue mentioned above, which should be simpler to reproduce:

* Configure a Cloud Docker host, Container cap high enough that it will not be considered (i.e. 10 or so).
* Configure a template with an image that Jenkins can start, instance limit: 1, assign a label to it (for example: "docker")
* Create a job with some dummy build step, associate it to the label.
* Build the job and observe that it builds fine.
* Go to your command line and start a container of the *same* image. Keep the container running.
* Build the same job again. Observe that no container will be provisioned, but the following message can be read in the system log:

Jul 30, 2016 9:10:30 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud provision
Will provision 'jenkins-1', for label: 'docker', in cloud: 'dummy'
Jul 30, 2016 9:10:30 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud addProvisionedSlave
Not Provisioning 'jenkins-1'. Instance limit of '1' reached on server 'dummy'

(here in this case, the image of the configuration was "jenkins-1", the label was "docker" and the name of the cloud I configured was "dummy").

Apparently, the already running container is considered to be executed by the Jenkins, which is not the case. Thus, all containers running based on a certain image are counted into the "instance limit".

Why is this a problem to my mind? If two Jenkins servers share the same Docker host, they may not make use of the same image, as execution then is no longer predictable. Thus, "image sharing" is not possible (we may now start to argue whether this is a bug or a feature -- in my case it's a devastating effect, as I will be using a Docker Swarm cluster with up to 10 Jenkins servers attached...)

[JIRA] (JENKINS-37526) Docker Container Cap blanking stops execution

2016-08-18 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl created an issue

Jenkins / JENKINS-37526
Docker Container Cap blanking stops execution

Issue Type: Bug
Assignee: Nico Schmoigl
Components: docker-plugin
Created: 2016/Aug/18 9:10 PM
Priority: Minor
Reporter: Nico Schmoigl
Setting the Container Cap for a Docker cloud to blank defaults it back to the value zero. However, the code then plays hardball and does not provision any container to the Docker server anymore. On the other hand, the documentation says:

Defaults to blank which disables this limit.
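
The expected semantics, per the documentation quoted above, would be that a blank (zero) cap disables the limit rather than forbidding all provisioning. A minimal Groovy sketch of that intended check (an illustration only, not the plugin's actual code):

// Blank / 0 should mean "cap disabled"; anything else is an upper bound.
boolean belowContainerCap(int containerCap, int runningContainers) {
    if (containerCap <= 0) {
        return true            // blank or 0: no limit, always allow provisioning
    }
    return runningContainers < containerCap
}

assert belowContainerCap(0, 7)     // blank cap never blocks
assert belowContainerCap(2, 1)     // below the cap: allowed
assert !belowContainerCap(1, 1)    // cap reached: blocked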

[JIRA] (JENKINS-37526) Docker Container Cap blanking stops execution

2016-08-18 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-37526

Re: Docker Container Cap blanking stops execution
The following message is being logged:

Not Provisioning 'jenkins-1'; Server 'dummy' full with '0' container(s)


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-10-13 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl updated an issue

Jenkins / JENKINS-52362
Jenkins hangs due to "Running CpsFlowExecution unresponsive"

Change By: Nico Schmoigl
Attachment: 20181012-statebefore.txt


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-10-13 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
Short update from our case:

Yesterday we had another case where we almost stumbled into this situation again.
The situation was that one user had caused many requests to the main page of a job (called "JenkinsJob102" below) where the complex graphs (JaCoCo, test coverage, ...) are rendered. We again ran into the situation that the HTTP request threads were hanging for a long time.
CPU load was up to 500% (i.e. 5 cores were busy). Note that some CPU capacity was still free.
I/O load apparently was not the biggest problem (otherwise we would have seen a different picture in "top").
Jenkins stopped job processing. HTTP response times were in the area of 30 s and longer.
We did not see the error log message "INFO: Running CpsFlowExecutionOwner[...] unresponsive for 5 sec" (or similar) yet, but the execution of Pipeline jobs ceased, so I would have expected that message to appear very soon.

I attached a thread dump to this ticket (20181012-statebefore.txt). We detected two culprits:

Lock 0xd7102070 was the reason the GET requests started to queue up again (search for "#3231" in the thread dump file). All hanging GET HTTP request threads were against JenkinsJob102. We first killed the thread which has "#3231" in its name, as it was the current owner of the lock. CPU load dropped briefly, but then all the remaining threads kicked in. We then manually killed the rest of those threads as well, as we were very confident that these requests were leftovers which no user would ever need anymore. That took roughly 15 minutes, as the performance of Jenkins was bad. Once they were gone, CPU load was back at around our usual 20%. Yet, the job queue was still not being processed.
Taking another thread dump snapshot (which unfortunately I lost shortly thereafter), we then detected that Yet Another Docker Plugin (YAD) was waiting for a response from our Docker server again. It held a lock in the method "getClient()", and thus other threads provisioning new slaves could not obtain the lock (nearly all our jobs in the queue require a Docker-based slave in one way or another). Having cross-checked with the Docker server (which was not about to send anything anymore), we also killed the thread that was waiting for the answer from the Docker server, which would never come. With that, provisioning of slaves resumed and the job queue started to shrink.

Having the Jenkins server in good shape again, we dared to try reproducing the situation: one user logged on and opened four browser tabs pointing to the main page of job "JenkinsJob102". He then did a single browser refresh (F5) for each of these tabs. CPU load was almost immediately up at 500% again and we had roughly a dozen hanging GET request threads (note though that the number of "hanging threads" was much lower than I

[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-11-12 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
After a longer period of not having experienced the problem, we have just encountered it again. I was able to take a jstack snapshot, and I found the following interesting thread stack trace:

 
"Running CpsFlowExecution[Owner[apppname1/appname2/appname3/appname4/134675:appname1/appname2/appname3/appname4 #134675]]" #164285 daemon prio=5 os_prio=0 tid=0x7f668d046000 nid=0x2b16 waiting on condition [0x7f65db018000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xd6edcc30> (a org.codehaus.groovy.reflection.GroovyClassValuePreJava7$GroovyClassValuePreJava7Segment)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at org.codehaus.groovy.util.LockableObject.lock(LockableObject.java:37)
at org.codehaus.groovy.util.AbstractConcurrentMapBase$Segment.removeEntry(AbstractConcurrentMapBase.java:173)
at org.codehaus.groovy.util.ManagedConcurrentMap$Entry.finalizeReference(ManagedConcurrentMap.java:81)
at org.codehaus.groovy.util.ManagedConcurrentMap$EntryWithValue.finalizeReference(ManagedConcurrentMap.java:115)
at org.codehaus.groovy.reflection.GroovyClassValuePreJava7$EntryWithValue.finalizeReference(GroovyClassValuePreJava7.java:51)
at org.codehaus.groovy.util.ReferenceManager$CallBackedManager.removeStallEntries0(ReferenceManager.java:108)
at org.codehaus.groovy.util.ReferenceManager$CallBackedManager.removeStallEntries(ReferenceManager.java:93)
at org.codehaus.groovy.util.ReferenceManager$CallBackedManager.afterReferenceCreation(ReferenceManager.java:117)
at org.codehaus.groovy.util.ReferenceManager$1.afterReferenceCreation(ReferenceManager.java:135)
at org.codehaus.groovy.util.ManagedReference.<init>(ManagedReference.java:36)
at org.codehaus.groovy.util.ManagedReference.<init>(ManagedReference.java:40)
at org.codehaus.groovy.util.ManagedLinkedList$Element.<init>(ManagedLinkedList.java:40)
at org.codehaus.groovy.util.ManagedLinkedList.add(ManagedLinkedList.java:102)
at org.codehaus.groovy.reflection.ClassInfo$GlobalClassSet.add(ClassInfo.java:478)
- locked <0xd6e6aa68> (a org.codehaus.groovy.util.ManagedLinkedList)
at org.codehaus.groovy.reflection.ClassInfo$1.computeValue(ClassInfo.java:83)
at org.codehaus.groovy.reflection.ClassInfo$1.computeValue(ClassInfo.java:79)
at org.codehaus.groovy.reflection.GroovyClassValuePreJava7$EntryWithValue.<init>(GroovyClassValuePreJava7.java:37)
at org.codehaus.groovy.reflection.GroovyClassValuePreJava7$GroovyClassValuePreJava7Segment.createEntry(GroovyClassValuePreJava7.java:64)
at org.codehaus.groovy.reflection.GroovyClassValuePreJava7$GroovyClassValuePreJava7Segment.createEntry(GroovyClassValuePreJava7.java:55)
at org.codehaus.groovy.util.AbstractConcurrentMap$Segment.put(AbstractConcurrentMap.java:157)
at 

[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-19 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
Just for the record: on the same server, we experienced a deadlock situation today which could be related to this issue. Today we had a very sluggish server (long response latency). Checking, we found several hanging inbound GET requests which took ages (>1,100,000 ms) to complete. A thread dump showed that several threads were blocked by a lock in jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber(AbstractLazyLoadRunMap.java:369), which was indirectly triggered by hudson.plugins.performance.actions.PerformanceProjectAction.doRespondingTimeGraph. Note that it was not just the performance plugin: other GET requests, such as /job/.../test/trend (also locked in jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber) or /job/.../jacoco/graph (likewise), were affected as well. After a little analysis, we found one of the /job/.../performance/throughputGraph requests running (state "running"), which apparently was in an endless loop. It also held the lock of the critical monitor, which blocked all the other requests. The interesting (triggering) block within this thread, to me, was:

 
...
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1189)
at hudson.util.XStream2.unmarshal(XStream2.java:114)
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1173)
at hudson.XmlFile.unmarshal(XmlFile.java:160)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.reload(WorkflowRun.java:603)
at hudson.model.Run.<init>(Run.java:325)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.<init>(WorkflowRun.java:209)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at jenkins.model.lazy.LazyBuildMixIn.loadBuild(LazyBuildMixIn.java:165)
at jenkins.model.lazy.LazyBuildMixIn$1.create(LazyBuildMixIn.java:142)
 

Killing the thread did the trick - and the rest started to work again. Afterwards we had to restart the server, but that was due to another problem which is unrelated to this one here. However, the situation today was different than before: today we had a significant load average / CPU load during the event, whereas in the previous situation load average / CPU load was completely normal - also for hours before the blocking event. Given that the symptoms differ, I am currently not sure whether we just saw the "early stage" of yet another occurrence of this issue, which we could cure with a courageous thread kill, or whether this was something totally different. In any case, it makes sense to look closely at the list of pending locks if the issue reappears.

[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-19 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl edited a comment on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
Just for the record: At the same server, we experienced a deadlock situation today, which may be related to this issue:

Today, we had a very sluggish server (long latency in response). Checking, we found several hanging inbound GET requests which took ages (>1,100,000 ms) to complete. A thread dump showed that several threads were blocked by a lock in jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber(AbstractLazyLoadRunMap.java:369), which was indirectly triggered by hudson.plugins.performance.actions.PerformanceProjectAction.doRespondingTimeGraph. Note that there was not just the performance plugin, but we also saw other GET requests, such as /job/.../test/trend (there also locked in jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber) or /job/.../jacoco/graph (also in jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber) were affected.

After a little of analysis, we found one of the /job/.../performance/throughputGraph jobs running (state "running"), which apparently was in an endless loop. It also held the lock of the critical monitor, which blocked all the other requests. The interesting (triggering) block within this thread to me was:

...
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1189)
at hudson.util.XStream2.unmarshal(XStream2.java:114)
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1173)
at hudson.XmlFile.unmarshal(XmlFile.java:160)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.reload(WorkflowRun.java:603)
at hudson.model.Run.<init>(Run.java:325)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.<init>(WorkflowRun.java:209)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at jenkins.model.lazy.LazyBuildMixIn.loadBuild(LazyBuildMixIn.java:165)
at jenkins.model.lazy.LazyBuildMixIn$1.create(LazyBuildMixIn.java:142)

Killing the thread did the trick - and the rest started to work again. Afterwards, we had to restart the server - but that was due to another problem, which is unrelated to this one here.

However, the situation today was different than before: Today, we had a significant load average / CPU load during that situation. In the previous situation, load average / CPU load was very normal - also for hours before the blocking event. Given that the symptoms are different, I am currently not sure whether we just saw the "early stage" of yet another occurrence of this issue, which we could cure with a courageous thread kill, or whether this was something totally different. For sure, it makes sense to closely look at the list of locks pending, if the issue reappears.

[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
Intermediate feedback: so far, the problem has not reappeared on our server. The only thing we have changed (after the issue I documented in https://issues.jenkins-ci.org/browse/JENKINS-52362?focusedCommentId=349554=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-349554 ) was the "read timeout" setting in the YAD configuration section, from empty to 120. I will update this ticket in case we experience yet another situation where the server stops responding.


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl updated an issue

Jenkins / JENKINS-52362
Jenkins hangs due to "Running CpsFlowExecution unresponsive"

Change By: Nico Schmoigl
Attachment: 20180919-hangingjenkinsthreads-logs.txt


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-25 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
> We did some improvements over the last few releases of workflow-cps and workflow-job

That sounds promising. We'll have a look at this some time later.

> Now, what I'd love to see is the full stack trace for that "endless loop".

Before going into detail here, please let me reiterate one thing: it is not proven to me that the "endless loop" case really was an "early version" of the original bug report here. It just happened on the same server at roughly the same time. Having said this, let's have a look at the thread trace. I have attached an anonymized version of the thread trace I created when the system was in that "endless loop" state (20180919-hangingjenkinsthreads-logs.txt). I suggest starting your analysis by searching for the term "main-voter", which is one of our jobs - and, based on my analysis, the job which caused the situation. Although we enabled quite strict retention on that job, we still have ~250 builds of it. Moreover, expect each (successful) build to have around 1600 (mostly very small) log files in the build's folder (BTW, they also give us a hard time with our backup strategy).

> Also, what is "YAD plugin" short for? "Yet Another Docker plugin"?

Yes, correct.

> If so, I'm very curious how that could be related because the timeout there applies to communications with the Docker server.

Well, this is yet another guess of ours - here's the story. Remember that I had written:

> Afterwards, we had to restart the server

The reason for that was the thread "jenkins.util.Timer 6". If you look at its stack, you'll see that it is blocked in a ListContainersCmdExec request. That is a call via HTTP REST to the Docker server, asking for the list of all containers running on the host (mainly - it's a little more complicated than this). With an additional tool, we found out that it must have been hanging there for hours (so much for "read timeout" - an empty setting there means "infinity"). It was in blocked I/O state, waiting for the result to come back from the Docker host. We don't know exactly what the Docker host did (replied?), but usually such calls take only 1-2 seconds to answer - on very busy hosts it may be up to half a minute or so. You may expect that our Docker host should respond in less than a second. Apparently, the missing response had blocked the YAD plugin and no further containers could be created (which mainly meant that the build queue was blocked, as nearly all our jobs require a new container). We could observe that this also had a negative effect on the management of already running containers/nodes for currently running jobs (containers were hanging strangely). It wouldn't surprise me if that also had indirect bad

[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2019-01-02 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
Damien Roche, we had similar issues before - and I fear that this is only loosely related to this issue here. In my case we are not using the EC2 cloud but the Docker cloud plugin (YAD). The root cause for us was that the plugin did not have a connection timeout configured. If a connection to the cloud manager then fails, the thread waits eternally for an answer. For the sake of consistency, however, the thread had acquired a lock, which is then never released... and all the blues started... That is why I would suggest you have a look at your timeout values (I don't know whether they are configurable in the case of EC2) - and, if applicable, post them here for further cross-checking. If they are too high, you should fix that first.
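
To illustrate why a missing read timeout leads to exactly this kind of eternal wait, here is a minimal Groovy sketch using plain java.net (not the YAD plugin's HTTP client; the host name and values are made up): with no read timeout, the read blocks until the peer answers; with one set, it fails after the given period.

// Hypothetical Docker remote API call; host and port are placeholders.
def conn = new URL('http://docker-host.example:2375/containers/json').openConnection()
conn.connectTimeout = 5_000    // ms: fail fast if the host is unreachable
conn.readTimeout    = 120_000  // ms: 0 (the default) means "wait forever" for the response
println conn.inputStream.text

With a read timeout in place, the hanging thread described above would have failed after a bounded period instead of holding its lock for hours.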
For more options, visit https://groups.google.com/d/optout.


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-18 Thread n...@schmoigl-online.de (JIRA)
Title: Message Title


 
 
 
 

 
 
 

 
   
 Nico Schmoigl commented on  JENKINS-52362  
 

  
 
 
 
 

 
 
  
 
 
 
 

 
  Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"   
 

  
 
 
 
 

 
A similar issue is also reproducible here on an older machine. Setup (amongst others):

Jenkins core 2.60.3
Yet Another Docker Plugin 0.1.0-rc47
Jenkins running in a Docker container, connecting to another Docker server for running jobs on the slave.

The system locks up, but apparently continues running internally. The HTTP server can't be down entirely, as sending a GET to the endpoint /api/json (which we use for "availability pinging") kept responding at usual response times. Jenkins runs jobs which are executed every 5 minutes, so we can pinpoint the time of the hang quite well. I could cross-check: there was more than 120 MB of free heap memory for the Java process plus a further 3 GB of RAM. Operating system and Docker logs around that time look very unsuspicious.


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-18 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl assigned an issue to Nico Schmoigl

Jenkins / JENKINS-52362
Jenkins hangs due to "Running CpsFlowExecution unresponsive"

Change By: Nico Schmoigl
Assignee: Nico Schmoigl


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-18 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl assigned an issue to Unassigned

Jenkins / JENKINS-52362
Jenkins hangs due to "Running CpsFlowExecution unresponsive"

Change By: Nico Schmoigl
Assignee: Nico Schmoigl → Unassigned


[JIRA] (JENKINS-52362) Jenkins hangs due to "Running CpsFlowExecution unresponsive"

2018-09-18 Thread n...@schmoigl-online.de (JIRA)
Nico Schmoigl commented on JENKINS-52362

Re: Jenkins hangs due to "Running CpsFlowExecution unresponsive"
> Please could you attach a thread dump from Jenkins?

...will try to when the server goes down the next time. It might become a little tricky, as this is a production instance and, when it is down, the pressure is usually high to get it back up and running again as soon as possible.