[ 
https://issues.apache.org/jira/browse/MRESOLVER-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173363#comment-17173363
 ] 

Michael Osipov commented on MRESOLVER-123:
------------------------------------------

Alright, it seems that the global lock is absolutely not suited to your case, 
but you never had concurrency issues anyway; you simply don't need the locks. Please 
note that the Takari Local Repository then does not keep its promises either.

> Provide a global locking sync context by default
> ------------------------------------------------
>
>                 Key: MRESOLVER-123
>                 URL: https://issues.apache.org/jira/browse/MRESOLVER-123
>             Project: Maven Resolver
>          Issue Type: New Feature
>          Components: resolver
>    Affects Versions: 1.4.2
>            Reporter: Michael Osipov
>            Assignee: Michael Osipov
>            Priority: Critical
>             Fix For: 1.5.0
>
>         Attachments: checksum-error-debug.log, mvn-debug-1.4.3-123.txt.gz
>
>
> This is an umbrella ticket for a long-standing issue with Maven Resolver: our 
> concurrency support is mediocre in that two or more threads may try to 
> download the same file and fail to queue those write actions nicely. The 
> problem is that the {{SyncContext}} and its factory provided by Maven 
> Resolver do not employ any locking at all. As laid out in detail in 
> MRESOLVER-114, we need striped read-write locks on artifacts and their metadata. 
> This issue shall track progress on that. Even the Takari Concurrent 
> Repository extension does not help, because it is only intended to synchronize 
> concurrent access by multiple JVMs, not by threads.
> This improvement will solely provide a global locking sync context which will 
> work within a *single* JVM. It is a non-goal to make it work across multiple JVMs. A 
> downside of this solution is that it is coarse: a possible degradation of 
> performance for the sake of stability.
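
For illustration, a minimal sketch of such a global locking sync context could look as
follows, assuming the org.eclipse.aether.SyncContext interface; the class name, the
constructor wiring and the fields are hypothetical, and the actual implementation
shipped in 1.5.0 may differ:

  import java.util.Collection;
  import java.util.concurrent.locks.ReentrantReadWriteLock;

  import org.eclipse.aether.SyncContext;
  import org.eclipse.aether.artifact.Artifact;
  import org.eclipse.aether.metadata.Metadata;

  // Hypothetical sketch: one JVM-wide read-write lock guards all artifacts and metadata.
  class GlobalSyncContext implements SyncContext {

      // Single, coarse lock shared by every context in this JVM; has no effect across JVMs.
      private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

      private final boolean shared;
      private int acquisitions;

      GlobalSyncContext(boolean shared) {
          this.shared = shared;
      }

      @Override
      public void acquire(Collection<? extends Artifact> artifacts,
                          Collection<? extends Metadata> metadatas) {
          // Shared contexts take the read lock, exclusive ones the write lock,
          // regardless of which artifacts or metadata are passed in (hence "coarse").
          if (shared) {
              LOCK.readLock().lock();
          } else {
              LOCK.writeLock().lock();
          }
          acquisitions++;
      }

      @Override
      public void close() {
          // Release the lock as many times as acquire() was called on this context.
          while (acquisitions > 0) {
              if (shared) {
                  LOCK.readLock().unlock();
              } else {
                  LOCK.writeLock().unlock();
              }
              acquisitions--;
          }
      }
  }

A corresponding factory would hand out a new instance per request, with the shared
flag deciding between read and write locking. Because the lock is static, all threads
in the JVM serialize their exclusive downloads, which is the coarse trade-off
mentioned above.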



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
