On Wed, May 11, 2016 at 12:34 PM, Eyal Edri <[email protected]> wrote:

> From what I saw, it was mostly ovirt-engine and vdsm jobs pending in the
> queue while other slaves were idle.
> We have over 40 slaves and we're about to add more, so I don't think that
> will be an issue, and IMO 3 per job is not enough, especially if you get
> idle slaves.
>
>
+1 on raising then.
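For reference, a minimal sketch of what the raised limit might look like in the jobs' shared YAML template, assuming the limit is enforced via Jenkins Job Builder's `throttle` property (the template name and field values here are illustrative, not the real ones):

```yaml
# Hypothetical job-template snippet; names are made up for illustration.
- job-template:
    name: '{project}_{version}_check-patch'
    properties:
      - throttle:
          enabled: true
          option: project
          max-per-node: 1   # still avoid two runs sharing a slave
          max-total: 6      # raised from 3, as discussed in this thread
```

With `option: project` the cap applies per job rather than across a shared category, so one busy project can't starve the rest beyond its own 6 slots.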



> We are considering a more dynamic approach of on-demand VM allocation,
> so in the long run we'll have more control over it.
> For now I'm monitoring the queue size and slaves on a regular basis [1],
> so if anything gets blocked for too long we'll act and adjust
> accordingly.
>
>
> [1] http://graphite.phx.ovirt.org/dashboard/db/jenkins-monitoring
>
> On Wed, May 11, 2016 at 1:10 PM, Sandro Bonazzola <[email protected]>
> wrote:
>
>>
>>
>> On Tue, May 10, 2016 at 1:01 PM, Eyal Edri <[email protected]> wrote:
>>
>>> Shlomi,
>>> Can you submit a patch to increase the limit to 6 (I think all jobs
>>> are using the same yaml template)? We'll continue to monitor the queue
>>> and see if there is an improvement in the utilization of slaves.
>>>
>>
>> The issue was that long-lasting jobs caused the queue to grow too much.
>> Example: a patch set rebased on master and merged will trigger
>> check-merged jobs, upgrade jobs, ...; running 6 instances of each of
>> them will cause all other projects to be queued for a long time.
>>
>>
>>
>>>
>>> E.
>>>
>>> On Tue, May 10, 2016 at 1:58 PM, David Caro <[email protected]> wrote:
>>>
>>>> On 05/10 13:54, Eyal Edri wrote:
>>>> > Is there any reason we're limiting the number of check-patch &
>>>> > check-merged jobs to run only 3 in parallel?
>>>> >
>>>>
>>>> We had some mess in the past where enabling parallel runs did not
>>>> really prevent two runs from using the same slave at the same time;
>>>> I guess we never re-enabled them.
>>>>
>>>> > Each job runs in mock and on its own VM; is anything preventing us
>>>> > from removing this limitation so we won't have idle slaves while
>>>> > other jobs are in the queue?
>>>> >
>>>> > We can at least raise it to a higher level if we don't want one
>>>> > specific job to take over all the slaves and starve other jobs,
>>>> > but I think ovirt-engine jobs are probably the biggest consumers
>>>> > of CI, so the threshold should be updated.
>>>>
>>>> +1
>>>>
>>>> >
>>>> > --
>>>> > Eyal Edri
>>>> > Associate Manager
>>>> > RHEV DevOps
>>>> > EMEA ENG Virtualization R&D
>>>> > Red Hat Israel
>>>> >
>>>> > phone: +972-9-7692018
>>>> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>
>>>> > _______________________________________________
>>>> > Infra mailing list
>>>> > [email protected]
>>>> > http://lists.ovirt.org/mailman/listinfo/infra
>>>>
>>>>
>>>> --
>>>> David Caro
>>>>
>>>> Red Hat S.L.
>>>> Continuous Integration Engineer - EMEA ENG Virtualization R&D
>>>>
>>>> Tel.: +420 532 294 605
>>>> Email: [email protected]
>>>> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
>>>> Web: www.redhat.com
>>>> RHT Global #: 82-62605
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>
>
>
>



