[ 
https://issues.jenkins-ci.org/browse/JENKINS-12763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on JENKINS-12763 started by David Santiago.

> Excessive lock contention when using mercurial cache with multiple repos and 
> slaves
> -----------------------------------------------------------------------------------
>
>                 Key: JENKINS-12763
>                 URL: https://issues.jenkins-ci.org/browse/JENKINS-12763
>             Project: Jenkins
>          Issue Type: Improvement
>          Components: mercurial
>    Affects Versions: current
>            Reporter: David Santiago
>            Assignee: David Santiago
>              Labels: lock, mercurial, threads
>
> The current implementation of the Mercurial plugin uses an overly aggressive 
> locking approach to manage its cache across the master and build slaves.
> By aggressive I mean that whenever a build starts for a given repository on 
> any build slave, it blocks every subsequent build of any other job that 
> shares the same repository while it updates the cache on both the master 
> and the slave used for the build, and creates its working directory. 
> With multiple jobs sharing the same repository (through different branches, 
> for instance), lock contention ramps up.
> This is unnecessary: a different locking mechanism can allow concurrent 
> builds of jobs that share the same repository but run on different slaves. 
> This can be achieved by using separate locks to control the cache updates 
> on the master and on each slave node.
> This way, we reduce lock contention and improve cache update performance.
> A pull request has been created for this: 
> https://github.com/jenkinsci/mercurial-plugin/pull/21
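The idea described above (one lock per repository-and-node pair instead of a single lock per repository) could be sketched as follows. This is a minimal illustration, not the plugin's actual code; the class and method names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: key the cache lock on (repository, node) rather than
// on the repository alone, so cache updates for the same repository on
// different nodes proceed concurrently while updates on the same node
// still serialize.
class PerNodeCacheLocks {
    private final Map<String, Lock> locks = new ConcurrentHashMap<>();

    // Return the lock guarding the cache of `repo` on `node`, creating it
    // on first use. computeIfAbsent guarantees a single lock per key even
    // under concurrent access.
    Lock lockFor(String repo, String node) {
        return locks.computeIfAbsent(repo + "@" + node, k -> new ReentrantLock());
    }
}
```

With this scheme, two builds of the same repository dispatched to different slaves obtain distinct locks and do not block one another, which is the reduction in contention the issue describes.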

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.jenkins-ci.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
