On Sunday, June 22, 2014 3:58:24 PM UTC-7, Mark Waite wrote:
>
>
> In the "*Configuration Matrix*" section of the job definition, click "*Add 
> Axis*" and "*Elastic Axis*".  I use the axis name "*label*" and then 
> assign the value "*windows, linux*". 
>
> I've already applied the labels "windows" and "linux" to my jenkins 
> slaves.  With that axis definition, that job will now run on all nodes with 
> the label "*windows*" and all nodes with the label "*linux*".
>
> In my specific case, rather than applying the label "linux" interactively 
> to each of my slaves, I use the "*Hudson platformlabeler plugin*" to 
> automatically assign labels based on the slave operating system.  That 
> applies detailed labels to slaves automatically (like amd64-Debian-7.5 or 
> amd64-Debian-testing), then I use the "*Implied Labels Plugin*" to assign 
> the label "linux' to all slaves which are variants of linux.
>

Thank you for the suggestion, Mark. We'll look into combining those three 
plugins (*Elastic Axis*, *PlatformLabeler Plugin*, and *Implied Labels 
Plugin*) to get Jenkins to run a single job on all nodes simultaneously.

 

> I'm not sure I understand your model of creating a queue of 281 items from 
> which jobs "pull".  Are the 281 items to build complete when each of the 
> 281 items has been built on at least one of the slaves, or do you require 
> that all 281 items must be built on each slave?
>
> I've never created a series of dependencies like you describe, so I 
> probably won't be much help defining those types of jobs.
>
> Mark Waite
>

My apologies - that may be because I wasn't precise enough in my original 
message. Allow me to clarify: each of the 281 libraries that comprise our 
22 binaries needs to be built only once. A library can be built by any of 
the 60 to 70 builder nodes, and once it's built, it should not be built 
again by any other node. By the end of *Stage 2*, a network directory 
accessible to all nodes will contain 281 .lib files. Only once *Stage 2* is 
complete will the *Stage 3* jobs start. Each of the 22 Stage 3 jobs knows 
which .lib files are required to make its particular binary, so the 22 
nodes (out of the 60 to 70 in our pool) that end up receiving a job will 
copy down the relevant .lib files from the network path and link them 
together to form our 22 resulting binaries.
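
To make the intended flow concrete, here is a rough sketch of that model, 
with Python threads standing in for our builder nodes and an in-memory 
queue standing in for whatever mechanism we end up using; the library names 
and the library-to-binary mapping below are made up purely for illustration:

    import queue
    import threading

    NUM_NODES = 60                                   # size of our builder pool (60-70 nodes)
    LIBRARIES = ["lib%03d" % i for i in range(281)]  # the 281 libraries (names are made up)
    BINARIES = {"bin%02d" % i: LIBRARIES[i::22]      # made-up mapping of libs to 22 binaries
                for i in range(22)}

    work_queue = queue.Queue()
    for lib in LIBRARIES:
        work_queue.put(lib)          # Stage 2 queue: each library appears exactly once

    built = set()
    built_lock = threading.Lock()

    def builder_node():
        """Pull libraries off the shared queue until it is empty (Stage 2)."""
        while True:
            try:
                lib = work_queue.get_nowait()
            except queue.Empty:
                return
            # ... compile lib and copy the resulting .lib to the network directory ...
            with built_lock:
                built.add(lib)
            work_queue.task_done()

    # Stage 2: any free node takes the next library, so no library is built twice.
    workers = [threading.Thread(target=builder_node) for _ in range(NUM_NODES)]
    for w in workers:
        w.start()
    work_queue.join()                # Stage 2 is done once every library has been built

    # Stage 3: each of the 22 link jobs copies down the .lib files it needs and links them.
    for binary, needed in BINARIES.items():
        assert all(lib in built for lib in needed)
        # ... link the needed .lib files into the binary ...

In the real system the queue would be shared across machines rather than 
threads, but the key property is the same: a library is taken off the queue 
by exactly one node, and Stage 3 only begins once the queue has drained.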

To give you some background, our industry is heavily focused on 
high-performance computing. Having job queues that hundreds of worker nodes 
pull from is a natural way to handle embarrassingly parallel tasks like 
building binaries for an organization like ours, and it is usually the most 
efficient way to process large computational workloads. When we began 
re-evaluating our build system, we approached it the same way we approach 
the problems our tools are designed to solve: by trying to make the best 
use of large pools or clusters of hardware in a way that is efficient, 
distributed, and tolerant of hardware failures.

You mention that you've never created a series of dependencies like the one 
above. Would you mind describing the kinds of builds you use Jenkins for? I 
can't imagine there are very many different ways to build software, so it 
would be helpful to hear how you and others use Jenkins day to day.

Thank you,
-JJ

 

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
