[
https://issues.jenkins-ci.org/browse/JENKINS-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=163033#comment-163033
]
Jason Swager commented on JENKINS-13735:
----------------------------------------
I believe that I have a fix for this, but being new to git and even newer to
Jenkins core programming, I'll just submit the patch (hopefully I did that
right) as part of this comment. The patch addresses a flaw in the code logic
where a slave that cannot handle a build request is started anyway. The very
minor change adds one additional check to make sure that the slave CAN handle
the request before flagging it to be started.
core/src/main/java/hudson/slaves/RetentionStrategy.java | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/core/src/main/java/hudson/slaves/RetentionStrategy.java b/core/src/main/java/hudson/slaves/RetentionStrategy.java
index 02611e5..f007ac6 100644
--- a/core/src/main/java/hudson/slaves/RetentionStrategy.java
+++ b/core/src/main/java/hudson/slaves/RetentionStrategy.java
@@ -218,7 +218,7 @@ public abstract class RetentionStrategy<T extends Computer> extends AbstractDesc
                         }
                     }
-                    if (needExecutor) {
+                    if (needExecutor && (c.getNode().canTake(item) == null)) {
                         demandMilliseconds = System.currentTimeMillis() - item.buildableStartMilliseconds;
                         needComputer = demandMilliseconds > inDemandDelay * 1000 * 60 /*MINS->MILLIS*/;
                         break;
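For context on the one-line change: in Jenkins, `Node.canTake(item)` returns a `CauseOfBlockage` explaining why the node cannot run the item, or `null` when it can, so `canTake(item) == null` means "this slave is allowed to run the job". The stand-in classes below are hypothetical, simplified stubs for illustration only, not the real Jenkins types; they just mirror that null-means-allowed convention used by the patched check.

```java
// Hypothetical, simplified stand-ins for Jenkins' Node/CauseOfBlockage,
// illustrating the "null means the node CAN take the item" convention.
class CauseOfBlockage {
    final String reason;
    CauseOfBlockage(String reason) { this.reason = reason; }
}

class StubNode {
    private final String label;
    StubNode(String label) { this.label = label; }

    // Returns null when this node can take the item (Jenkins' convention);
    // otherwise returns an object describing why it is blocked.
    CauseOfBlockage canTake(String requiredLabel) {
        if (requiredLabel == null || requiredLabel.equals(label)) {
            return null; // no blockage: this node may run the job
        }
        return new CauseOfBlockage(
            "job is restricted to label '" + requiredLabel + "'");
    }
}

public class CanTakeDemo {
    // Mirrors the patched condition: only flag the slave as startable
    // if an executor is needed AND the node can actually take the item.
    static boolean shouldStart(boolean needExecutor, StubNode node,
                               String requiredLabel) {
        return needExecutor && node.canTake(requiredLabel) == null;
    }

    public static void main(String[] args) {
        StubNode slaveA = new StubNode("A");
        // Before the patch, demand alone would start slave A even for a
        // job tied to slave C; with the extra check it stays off.
        System.out.println(shouldStart(true, slaveA, "C")); // wrong slave
        System.out.println(shouldStart(true, slaveA, "A")); // matching slave
    }
}
```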
> Jenkins starts wrong slave for job restricted to specific one
> -------------------------------------------------------------
>
> Key: JENKINS-13735
> URL: https://issues.jenkins-ci.org/browse/JENKINS-13735
> Project: Jenkins
> Issue Type: Bug
> Components: master-slave, slave-setup, vsphere-cloud
> Affects Versions: current
> Environment: Jenkins 1.463 under Tomcat6 on Linux (SLES 11), Windows
> XP slave VMs controlled via vSphere Cloud plugin
> Reporter: Marco Lehnort
> Assignee: Kohsuke Kawaguchi
> Labels: slave
>
> I'm using the following setup:
> - WinXP slaves A,B,C
> - jobs jA, jB, jC, tied to slaves A,B,C respectively using "Restrict where
> this job can run"
> Assume all slaves are disconnected and powered off, no builds are queued.
> When starting a build manually, say jC, the following will happen:
> - job jC will be scheduled and also displayed accordingly in the build queue
> - tooltip will say it's waiting because slave C is offline
> - next, slave A is powered on by Jenkins and connection is established
> - jC will not be started, Jenkins seems to honor the restriction correctly
> - after some idle time, Jenkins notices the slave is idle and shuts it down
> - then, same procedure happens with slave B
> - on occasion, next one is slave A again
> - finally (on good luck?) slave C happens to be started
> - jC is executed
> It is possible that jC waits for hours (indefinitely?), because the required
> slave is never powered on. I also observed this behaviour using a time
> trigger instead of a manual trigger, so I assume it is independent of the
> type of trigger.
> Occasionally it also happens that the correct slave is powered up right away,
> but that seems to happen by chance. The concrete pattern is not obvious to me.
> Note that the component selection above is just my best guess.
> Cheers, Marco
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.jenkins-ci.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira