Hi Guy,

I think this depends on how you use Jenkins. It is mostly a matter of
experience and some empirical measurements.
You would have to benchmark a single slave with jobs that reflect your
average daily workload, then extrapolate to estimate the total
possible number of slaves.
Some jobs hardly touch memory or CPU, while others exhaust your
resources.
In my case, obfuscating/encrypting the bundles kills my resources,
while building normal software bundles is not that demanding.
This is still just an estimate, but it brings you closer.
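As a back-of-envelope sketch of that extrapolation (all numbers are placeholders; measure your own peaks with top or /usr/bin/time -v, and assume here that memory is the bottleneck):

```shell
# Estimate concurrent builds from one measured peak (placeholder numbers)
total_ram_mb=6144       # RAM on the slave
peak_per_build_mb=1500  # measured peak RSS of one representative build
echo $(( total_ram_mb / peak_per_build_mb ))  # prints 4: builds before swapping
```

You would repeat the same division for CPU cores and disk/network bandwidth and take the smallest result, then subtract some headroom for the OS.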

You can even increase the number of slaves (and the overall
performance) with well-planned time management/slicing.
Your calculation reflects the average utilization of one job, and you
scale it to the limit, but that assumes ALL jobs run at once, which
isn't likely.
Jobs generally stress certain resources in different build stages
while others idle, and vice versa.
E.g. queueing jobs so that phases with heavy hard-drive utilization
run sequentially instead of all at once can gain you a great deal.
The same goes for network I/O, CPU, and memory.
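One simple way to keep the disk-heavy phase sequential on a node is a lock file in the build script (just a sketch, not a Jenkins feature; /tmp/checkout.lock is an arbitrary name I picked for the example):

```shell
#!/bin/sh
# Serialize the I/O-heavy checkout step across concurrent builds on one node.
# /tmp/checkout.lock is an arbitrary lock file chosen for this sketch.
(
  flock 9                    # block until no other build holds the lock
  echo "checkout runs here"  # e.g. your svn checkout / git clone
) 9>/tmp/checkout.lock
# the rest of the build continues unserialized
```

flock(1) comes with util-linux, so it should be on any Linux slave; the compile and packaging stages after the subshell still run in parallel.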

I have set up my nodes so that jobs on a machine always start with a
delay of 20 minutes after one is already running. That way the
checkout, which stresses the hard disk and the network connection, is
back to normal by the time the second job spawns.
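If your jobs are driven by "Build periodically" triggers, the simplest way I know to get such a fixed offset is staggered cron schedules (the exact offsets are up to you, of course):

```
# Job A: top of every hour
0 * * * *
# Job B: 20 minutes later, once A's checkout I/O has settled
20 * * * *
```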

Hope that helped
Jan

On 19 Apr., 12:44, Guy <[email protected]> wrote:
> I am setting up a new jenkins master
> It will just be a master with NO jobs running on it (even as its own slave).
>
> It has the following specs
> It is a VM running QEMU Virtual CPU version 0.9.1
> Running Red Hat Enterprise Linux Server release 5.6 (Tikanga)
>
> 4 core 3ghz
> 6G ram
> 250G HD (But we archive artifacts to a nexus server)
>
> How many slave jobs could i run concurrently from this master?
> I have a feeling lots but I would like a ball park figure as my boss is
> asking.
>
> Thanks in advance.
