[JIRA] (JENKINS-56491) overeager provisioning of k8s nodes

2019-04-24 Thread jenkins...@carlossanchez.eu (JIRA)
Carlos Sanchez closed an issue as Duplicate
Change By: Carlos Sanchez
Status: Open -> Closed
Resolution: Duplicate
 This message was sent by Atlassian Jira (v7.11.2#711002-sha1:fdc329d)  
 

  
 

   





-- 
You received this message because you are subscribed to the Google Groups "Jenkins Issues" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-issues+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[JIRA] (JENKINS-56491) overeager provisioning of k8s nodes

2019-03-22 Thread denny.ak...@gmail.com (JIRA)
Denny Ho updated an issue
Change By: Denny Ho

Description:
-Recently upgraded from kubernetes plugin 1.14.3 -> 1.14.8, so not sure which version the change was introduced.-
Tested and found that this behavior was introduced with 1.14.6.

Seeing an overeagerness with node provisioning of k8s nodes. Happens most often when there are already existing k8s node jobs running.

To repro:
# Create 3-4 jobs running with a k8s node label. I just used sleep 600 in my build.
# Start one of the jobs.
# While that build is running, start the other jobs.
# Observe that more nodes will spin up than necessary, so some will sit idle before being terminated.
# Observe messages like this in /var/logs/jenkins/jenkins.log, with a negative excess workload:

{code:java}
// INFO: Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: -0.183
{code}


[JIRA] (JENKINS-56491) overeager provisioning of k8s nodes

2019-03-22 Thread denny.ak...@gmail.com (JIRA)
Denny Ho updated an issue
Change By: Denny Ho

Environment: Behavior seen with kubernetes plugin 1.14.6 (was 1.14.8), tested on Jenkins versions 2.150.2 and 2.150.3; the service is running on an Ubuntu 16.04 machine.


[JIRA] (JENKINS-56491) overeager provisioning of k8s nodes

2019-03-08 Thread denny.ak...@gmail.com (JIRA)
Denny Ho created an issue
 
Issue Type: Bug
Assignee: Carlos Sanchez
Components: kubernetes-plugin
Created: 2019-03-09 00:26
Environment: Behavior seen with kubernetes plugin 1.14.8, tested on Jenkins versions 2.150.2 and 2.150.3; the service is running on an Ubuntu 16.04 machine.
Priority: Minor
Reporter: Denny Ho
 
Recently upgraded from kubernetes plugin 1.14.3 -> 1.14.8, so not sure which version the change was introduced.

Seeing an overeagerness with node provisioning of k8s nodes. Happens most often when there are already existing k8s node jobs running.

To repro:
# Create 3-4 jobs running with a k8s node label. I just used sleep 600 in my build.
# Start one of the jobs.
# While that build is running, start the other jobs.
# Observe that more nodes will spin up than necessary, so some will sit idle before being terminated.
# Observe messages like this in /var/logs/jenkins/jenkins.log, with a negative excess workload:

{code:java}
// INFO: Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: -0.183
{code}
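The symptom in step 5 can be spotted mechanically in the log. A minimal sketch: the sample line is copied from this report, but the regex is my own assumption about the message format, not something the kubernetes plugin documents; in practice you would point grep at your jenkins.log instead of the sample variable.

```shell
# Match provisioning messages whose "Remaining excess workload" is negative,
# the symptom described in this report. Sample line copied from the report;
# replace the echo with e.g. `grep -E ... /var/logs/jenkins/jenkins.log`.
sample='INFO: Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: -0.183'
printf '%s\n' "$sample" | grep -E 'Remaining excess workload: -[0-9]+(\.[0-9]+)?'
```

A negative value means Jenkins believed more capacity was pending than the queued workload required, which matches the idle-then-terminated nodes observed above.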