Alex,

If your builds do a lot of disk reading/writing and you're running
multiple builds in parallel, build performance may be limited by how
fast your hardware can read/write to disk.  If multiple builds are all
doing heavy disk activity at the same time, you may be maxing out the
bandwidth of your disk subsystem, slowing everything down.  However,
until you know where your bottlenecks actually are, you can't target
your changes effectively.
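One quick way to sanity-check this (a sketch only — assumes a Linux
slave, and iostat comes from the sysstat package, which may not be
installed):

```shell
# Watch per-device utilization while a build runs; a %util column
# pinned near 100 means the device is saturated (needs sysstat,
# so the command is guarded in case it isn't installed).
command -v iostat >/dev/null && iostat -x 1 2

# Rough sequential-write throughput check on the workspace disk.
# conv=fdatasync forces the data to disk so the reported rate
# isn't just the page cache.
dd if=/dev/zero of=ddtest.bin bs=1M count=256 conv=fdatasync
rm -f ddtest.bin
```

If %util sits near 100 during builds, faster disks (or spreading
workspaces across spindles) will help; if it doesn't, they won't.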

To use a car analogy: you can put nice big fat tires on a little car
with a little engine, but unless you know the tires were the limiting
factor in the car's performance before the upgrade, you may be
throwing money/time/resources at improvements that won't make any
real difference in the results.

What I suggest (and others have alluded to) is that you really sit
down with your build engineers, work through your build metrics,
performance stats, and the resource usage of each piece of the build
from start to finish, and analyze where the bottlenecks are.  Then
put your time/money/resources where the biggest improvements can
realistically be made.  Sometimes throwing better hardware at a build
performance problem helps, but more often it buys only a few percent.
To do it right will take some time: analyze what's being done now,
then figure out how to streamline it.  Also make sure your Jenkins
VMs aren't being throttled by other VMs running on the same host
that are consuming disk bandwidth, memory, or CPU cycles.
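On that last point, one thing worth glancing at on each slave (a
Linux-only sketch; it reads the kernel's cumulative counters from
/proc/stat):

```shell
# 'Steal' time is CPU the hypervisor gave to other VMs while this
# one had work to do.  Field 9 of the 'cpu' line in /proc/stat is
# the cumulative steal ticks; if it climbs quickly during builds,
# a neighbor VM is eating your host.  (vmstat's 'st' column shows
# the same thing as a live percentage.)
steal=$(awk '/^cpu /{print $9+0}' /proc/stat)
echo "cumulative steal ticks: $steal"
```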

Out of curiosity, in your existing Jenkins system when a build is running,
is it utilizing all 3 of your slaves, or are 2 idle and 1 is doing all the
work?  Unless you specifically structure your build to do multiple
sub-build pieces in parallel, Jenkins won't magically partition the work
across all 3 slaves by itself.
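As a toy illustration of what structuring for parallelism buys you
(plain shell, with sleeps standing in for hypothetical independent
sub-builds):

```shell
# Two independent "sub-builds" (sleeps here) launched concurrently:
# wall-clock time is roughly that of the slowest piece, not the sum.
# In a real setup you'd dispatch independent pieces as separate
# Jenkins jobs so the idle slaves actually get work.
start=$(date +%s)
( sleep 1; echo "sub-build A done" ) & pid_a=$!
( sleep 1; echo "sub-build B done" ) & pid_b=$!
wait "$pid_a" && wait "$pid_b"
end=$(date +%s)
echo "elapsed: $((end - start))s"
```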

Just my opinion, but I think your search for a magical VM tweak to
double your build performance is going to be mostly futile; if your
goal is to get the build done in an hour, significant changes to the
build itself will be needed to reach it.

Scott

On Thu, Sep 18, 2014 at 9:55 AM, Alex Demitri <[email protected]>
wrote:

> Also, i received suggestions to move to SSD and contention of disk
> resources.. How are disk resources affecting the time of the build?
>
> Alex
>
>
> On Thursday, September 18, 2014 6:48:35 AM UTC-7, Alex Demitri wrote:
>>
>> Hosts are dedicated to these VMs. Hosts are configured as 20 vCPUs / 384
>> gb ram per host. There are three hosts in the cluster. Only VMs living on
>> the cluster are the jenkins slaves (3 vms) - jenkins master (1 vm).
>>
>> Alex
>>
>>
>> On Wednesday, September 17, 2014 10:36:13 PM UTC-7, LesMikesell wrote:
>>>
>>> On Wed, Sep 17, 2014 at 3:24 PM, Alex Demitri <[email protected]>
>>> wrote:
>>> >
>>> > I have been trying to tweak more and more the Jenkins slave setup. Now
>>> they
>>> > are configured as 16vCPUs + 48GB of RAM per slave.
>>>
>>> How does this relate to the physical host's available resources?   Are
>>> you overcommitted with the number of active VMs sharing the host?
>>> Likewise, what else is contending for any disk resources that might be
>>> shared?
>>>
>>> --
>>>   Les Mikesell
>>>      [email protected]
>>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "Jenkins Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
