[jira] [Updated] (MRESOLVER-397) Deprecate Guice modules

2023-09-05 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-397:
--
Priority: Minor  (was: Major)

> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (Guice, ServiceLocator), as we provided a 
> replacement in the form of the resolver provider module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MNG-7870) Undeprecate wrongly deprecated repository metadata

2023-09-05 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MNG-7870:


 Summary: Undeprecate wrongly deprecated repository metadata
 Key: MNG-7870
 URL: https://issues.apache.org/jira/browse/MNG-7870
 Project: Maven
  Issue Type: Task
  Components: Artifacts and Repositories
Reporter: Tamas Cservenak
 Fix For: 3.9.5


In commit 
https://github.com/apache/maven/commit/1af8513fa7512cf25022b249cae0f84062c5085b 
related to MNG-7385 the modello G level metadata was deprecated (by mistake I 
assume).
Undo this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MRESOLVER-397) Deprecate Guice modules

2023-09-05 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-397:
-

 Summary: Deprecate Guice modules
 Key: MRESOLVER-397
 URL: https://issues.apache.org/jira/browse/MRESOLVER-397
 Project: Maven Resolver
  Issue Type: Task
Reporter: Tamas Cservenak


So far resolver supported instantiation via:
* sisu components (JSR330) -- as used in Maven
* Guice module
* ServiceLocator

We should drop all non-major ones (Guice, ServiceLocator), as we provided a 
replacement in the form of the resolver provider module.
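As an illustrative sketch of what the replacement path means for integrators, the supplier-style bootstrap that supersedes ServiceLocator wires components in plain Java code, checked at compile time, instead of string/class keyed runtime lookup. The class names below are stand-ins, not the actual resolver API:

```java
import java.util.function.Supplier;

// Illustrative sketch only -- stand-in names, not the actual resolver API.
public class SupplierBootstrap {
    // Hypothetical stand-in for the resolver's entry component.
    interface RepositorySystem { String id(); }

    static class RepositorySystemSupplier implements Supplier<RepositorySystem> {
        @Override
        public RepositorySystem get() {
            // All wiring is explicit; a real supplier would construct the
            // whole component graph here.
            return () -> "repository-system";
        }
    }

    public static RepositorySystem create() {
        return new RepositorySystemSupplier().get();
    }
}
```

The point of the pattern is that a missing component becomes a compile error rather than a runtime lookup failure.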



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MRESOLVER-397) Deprecate Guice modules

2023-09-05 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-397:
--
Fix Version/s: 1.9.16

> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (Guice, ServiceLocator), as we provided a 
> replacement in the form of the resolver provider module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7677) Maven 3.9.0 is ~10% slower than 3.8.7 in large multi-module builds

2023-09-03 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761568#comment-17761568
 ] 

Tamas Cservenak commented on MNG-7677:
--

Do you plan a PR?

> Maven 3.9.0 is ~10% slower than 3.8.7 in large multi-module builds
> --
>
> Key: MNG-7677
> URL: https://issues.apache.org/jira/browse/MNG-7677
> Project: Maven
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 3.9.0
>Reporter: Petr Široký
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 3.9.1
>
> Attachments: profile-alloc-mvn-validate-3.8.7.html, 
> profile-alloc-mvn-validate-3.9.0.html, profile-cpu-mvn-validate-3.8.7.html, 
> profile-cpu-mvn-validate-3.9.0.html, 
> profile-method-DefaultArtifact._init_-3.8.7.html, 
> profile-method-DefaultArtifact._init_-3.9.0.html, 
> profile-method-DefaultArtifact._init_-3.9.1-SNAPSHOT-reverted-PR166.html
>
>
> When testing the upcoming [Maven release 
> 3.9.0|https://repository.apache.org/content/repositories/maven-1862/org/apache/maven/apache-maven/3.9.0/apache-maven-3.9.0-bin.tar.gz]
>  I noticed the builds for large multi-module projects are somewhat slower.
> I was mainly using the [Quarkus 
> projects|https://github.com/quarkusio/quarkus], which has more than 1k 
> modules. I was specifically building the {{2.16.1.Final}} tag. Below are the 
> numbers I got.
> *First batch:*
> {code:java}
> MAVEN_OPTS="-XX:+UseParallelGC -Xms2g -Xmx2g"{code}
> ||Maven cmd||Build time - Maven 3.8.7||Build time - Maven 3.9.0||
> |clean|2.9s|3.2s|
> |validate|02:17min|02:34min|
> |validate -T8|29s|33s|
> |validate -Dversion.enforcer.plugin=3.2.1|29s|40s|
> |validate -Dversion.enforcer.plugin=3.2.1 -T8|9s|14s|
> *Second batch (bigger heap):*
> {code:java}
> MAVEN_OPTS="-XX:+UseParallelGC -Xms5g -Xmx5g"{code}
> ||Maven cmd||Build time - Maven 3.8.7||Build time - Maven 3.9.0||
> |clean|2.9s|3.2s|
> |validate|02:11min|02:22min|
> |validate -T8|25s|28s|
> |validate -Dversion.enforcer.plugin=3.2.1|29s|35s|
> |validate -Dversion.enforcer.plugin=3.2.1 -T8|9s|11s|
> |install -DskipTests -Dversion.enforcer.plugin=3.2.1 -T8|01:49min|01:55min|
> *Notes:*
>  * The numbers are taken from the Maven output (the Wall Clock time). When 
> testing with the time utility, the "real" execution time is about 0.6s longer 
> (the time JVM needs to start-up). The values in the tables are averages over 
> several runs (3 to 10 depending on the time the specific run actually took).
>  * All builds were run on Linux x64 (kernel 6.1.8) with JDK 19.0.1
>  * HW used during testing: AMD Ryzen 5800x (8 physical cores), 32GB RAM and 
> SSD Samsung 980 Pro 1TB (these should not really matter for the comparison 
> itself, but I am including it for completeness)
>  * Using {{ParallelGC}} since it should be the one with highest throughput 
> (at the cost of longer pause times, which I don't really care about for Maven 
> builds). Quick comparison with {{G1GC}} shows the builds are slightly faster, 
> but it is almost negligible.
> There has already been some discussion regarding this in this older merged PR 
> [https://github.com/apache/maven-resolver/pull/166] (which may be the cause 
> behind the slowdown).
> See the attachments for CPU / allocation profiles (flame graphs), running 
> {{mvn validate}} with both 3.8.7 and 3.9.0.
> *My analysis of the attached CPU / allocation profiles:*
> I was mainly looking at the differences in those two profiles (3.8.7 vs 
> 3.9.0), not at the absolute numbers. There are possibly other places that 
> could use some optimization, but that is outside of scope for this one.
> Both CPU and allocation profiles show that the CPU time / allocations 
> increased the most in code which seems to be doing some sort of dependency 
> resolution, ultimately boiling down to a call to 
> {{org.eclipse.aether.internal.impl.DefaultRepositorySystem.collectDependencies}}.
> The flame graphs show somewhat different method calls inside Aether 
> (maven-resolver), mainly because there are some structural changes (e.g. 
> different class names) in the Aether dependency resolution between 
> {{maven-resolver-1.6.3}} (used in {{3.8.7}}) and {{maven-resolver-1.9.4}} 
> (used in {{3.9.0}}). The dependency resolution algorithm should be the same 
> though (or at least as far as I can tell).
> Digging further down, the biggest difference seems to be in the number of 
> calls to {{org.eclipse.aether.artifact.DefaultArtifact.<init>}}. The 
> constructor includes merging two property maps together, which results in 
> quite some work (CPU) and lots of garbage (HashMap$Node, etc).
> With the above assumption, I went on to profile the number of method 
> (constructor) calls to {{org.eclipse.aether.artifact.DefaultArtifact.<init>}} 
> (see the attachments 
> 
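The map-merge cost described in the analysis can be illustrated with a minimal, hypothetical sketch (this is not the actual DefaultArtifact code): every construction that merges a defaults map with per-artifact properties allocates a fresh HashMap plus one node per entry.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the allocation pattern described above (not the
// actual DefaultArtifact code).
public class PropertyMerge {
    static Map<String, String> merge(Map<String, String> defaults,
                                     Map<String, String> overrides) {
        // One new map plus one HashMap$Node per entry per call -- cheap once,
        // costly when repeated millions of times during dependency collection.
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(overrides);
        return merged;
    }
}
```

When such a merge sits inside a constructor called once per resolved artifact in a 1000+ module build, both the CPU cost and the short-lived garbage add up, which matches the profiles.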

[jira] [Commented] (MNG-7856) Maven Resolver Provider classes ctor change

2023-09-05 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762205#comment-17762205
 ] 

Tamas Cservenak commented on MNG-7856:
--

maven-3.9.x 
https://github.com/apache/maven/commit/1ac8be50c82aa5f0335d5d8fe1da132fbf6573ac
master 
https://github.com/apache/maven/commit/9dd7b01a8953ee31905f0687c1b3fc8713d2e409

> Maven Resolver Provider classes ctor change
> ---
>
> Key: MNG-7856
> URL: https://issues.apache.org/jira/browse/MNG-7856
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> These classes in maven-resolver-provider should get similar change as done in 
> MRESOLVER-386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MNG-7856) Maven Resolver Provider classes ctor change

2023-09-05 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7856.

  Assignee: Tamas Cservenak
Resolution: Fixed

> Maven Resolver Provider classes ctor change
> ---
>
> Key: MNG-7856
> URL: https://issues.apache.org/jira/browse/MNG-7856
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> These classes in maven-resolver-provider should get similar change as done in 
> MRESOLVER-386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MNG-7856) Maven Resolver Provider classes ctor change

2023-09-05 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7856:
-
Fix Version/s: 4.0.0-alpha-8

> Maven Resolver Provider classes ctor change
> ---
>
> Key: MNG-7856
> URL: https://issues.apache.org/jira/browse/MNG-7856
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> These classes in maven-resolver-provider should get similar change as done in 
> MRESOLVER-386



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MNG-7870) Undeprecate wrongly deprecated repository metadata

2023-09-05 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7870:
-
Fix Version/s: 4.0.0-alpha-8

> Undeprecate wrongly deprecated repository metadata
> --
>
> Key: MNG-7870
> URL: https://issues.apache.org/jira/browse/MNG-7870
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> In commit 
> https://github.com/apache/maven/commit/1af8513fa7512cf25022b249cae0f84062c5085b
>  related to MNG-7385 the modello G level metadata was deprecated (by mistake 
> I assume).
> Undo this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761782#comment-17761782
 ] 

Tamas Cservenak commented on MNG-7868:
--

And one more strange thing: your stack trace does not align properly with 
Resolver 1.9.14, which is used in Maven 3.9.4. Or am I missing something here?

For ref: 1.9.14 DefaultArtifactResolver
https://github.com/apache/maven-resolver/blob/maven-resolver-1.9.14/maven-resolver-impl/src/main/java/org/eclipse/aether/internal/impl/DefaultArtifactResolver.java#L259



> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be windows related (at least it mainly happens on windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377
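Conceptually, the named-lock adapter in the stack trace above performs a timed write-lock acquisition over a bounded number of attempts. A minimal stdlib-only sketch (not the actual NamedLockFactoryAdapter code) of that behaviour:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Conceptual sketch (not the actual resolver code): try to take a write lock
// within a time budget, over a bounded number of attempts, and fail with
// IllegalStateException once the attempts are used up.
public class TimedWriteLock {
    public static boolean acquire(ReentrantReadWriteLock lock, int attempts, long waitMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= attempts; attempt++) {
            if (lock.writeLock().tryLock(waitMillis, TimeUnit.MILLISECONDS)) {
                return true; // caller is responsible for unlocking
            }
        }
        throw new IllegalStateException(
                "Could not acquire write lock after " + attempts + " attempts");
    }
}
```

The reported error corresponds to the exhausted-attempts branch: another build thread (or process, with file locks) held the lock for the whole time budget.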



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761795#comment-17761795
 ] 

Tamas Cservenak commented on MNG-7869:
--

No, here are some examples of what I mean:
* hashing all JARs that make up the distro (w/o lib/ext) => this would 
immediately show if some JARs were replaced in the distro
* hashing lib/ext, where an "empty hash" would mean it is empty and any other 
hash would mean "non-empty"
Basically, by having these two we could tell whether the user replaced 
something in the distro or installed some extension into lib/ext.

Or, simplest: hashing ALL files from the Maven home (using some 
platform-independent handling of text file line endings), shown just like the 
current git commit is.

The purpose is really just to tell immediately whether the user runs a 
"tampered" or a "vanilla" distro.
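A minimal sketch of the "hash the whole Maven home" variant, assuming SHA-256 over a sorted, separator-normalized file walk so the digest is reproducible across platforms (line-ending normalization for text files is deliberately left out):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch only: one SHA-256 digest over a directory tree, fed file paths and
// contents in a deterministic order.
public class DistroHash {
    public static String hash(Path root) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        List<Path> files = new ArrayList<>();
        try (var stream = Files.walk(root)) {
            stream.filter(Files::isRegularFile).forEach(files::add);
        }
        // Sort by normalized relative path so neither walk order nor the
        // platform path separator can change the hash.
        files.sort(Comparator.comparing(p -> root.relativize(p).toString().replace('\\', '/')));
        for (Path f : files) {
            String rel = root.relativize(f).toString().replace('\\', '/');
            md.update(rel.getBytes(StandardCharsets.UTF_8));
            md.update(Files.readAllBytes(f));
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

Any replaced JAR or added extension changes the digest, which is exactly the "tampered vs vanilla" signal described above.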

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761813#comment-17761813
 ] 

Tamas Cservenak commented on MNG-7869:
--

Hashes do not need to be calculated every time, only when executing -v (as it 
is a completely different exec branch and, unlike -V, it exits).
Also, our users almost always provide mvn -v output, while this plugin would 
need an extra roundtrip (and internet access, and whatever else, something not 
always available, e.g. in confined/air-gapped environments).

But yes, plugin could do it too.

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761813#comment-17761813
 ] 

Tamas Cservenak edited comment on MNG-7869 at 9/4/23 12:37 PM:
---

Hashes do not need to be calculated every time, only when executing -v (as it 
is a completely different exec branch and, unlike -V, it exits).
Also, our users almost always provide mvn -v output, while this plugin would 
need an extra roundtrip (and internet access, and whatever else, something not 
always available, e.g. in confined/air-gapped environments).

But yes, plugin could do it too.


was (Author: cstamas):
Hashes does not need to be calculated every time, only when executing -v (as it 
is completely different exec branch, and unlike -V it exists).
Also, our users almost always provide mvn -v output, while this plugin would 
need an extra rountrip (and internet access, and whatever, something not always 
available ie. in confined/air gapped environment)

But yes, plugin could do it too.

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-6763) Restrict repositories to specific groupIds

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761817#comment-17761817
 ] 

Tamas Cservenak commented on MNG-6763:
--

Also, since 3.9.0 you CAN have a profile that is explicitly activated, and 
group all the related resolver properties there.

> Restrict repositories to specific groupIds
> --
>
> Key: MNG-6763
> URL: https://issues.apache.org/jira/browse/MNG-6763
> Project: Maven
>  Issue Type: New Feature
>Reporter: dennis lucero
>Priority: Major
>  Labels: intern
>
> It should be possible to restrict the repositories specified in settings.xml 
> to specific groupIds. Looking at 
> [https://maven.apache.org/ref/3.6.2/maven-settings/settings.html#class_repository],
>  it seems this is currently not the case.
> Background: We use Nexus to host our own artifacts. The settings.xml contains 
> our Nexus repository with always because 
> sometimes a project is built while a dependency is not yet in our Nexus repo 
> – without updatePolicy, it would take 24 hours or manual deletion of metadata 
> to make Maven re-check for the missing dependency.
> Additionally, we use versions-maven-plugin:2.7:display-dependency-updates in 
> our build process.
> This results in lots of queries (more than 300 in a simple Dropwizard 
> project) to our repo which will never succeed. If we could specify that our 
> repo only supplies groupIds beginning with org.example, Maven could skip 
> update checks for groupIds starting with com.fasterxml.jackson and so on, 
> speeding up the build process.
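The requested check boils down to a groupId-prefix match per repository. A hedged sketch (names are illustrative, not an existing Maven API):

```java
import java.util.List;

// Sketch of the requested feature (illustrative names, not an existing Maven
// API): a repository restricted to groupId prefixes is consulted only for
// artifacts whose groupId starts with one of them.
public class RepoGroupIdFilter {
    public static boolean repoServes(List<String> groupIdPrefixes, String groupId) {
        if (groupIdPrefixes.isEmpty()) {
            return true; // no restriction declared: repository serves everything
        }
        return groupIdPrefixes.stream().anyMatch(groupId::startsWith);
    }
}
```

With a repo declared for the prefix org.example, update checks for com.fasterxml.jackson.* artifacts would simply be skipped for that repo.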



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MRESOLVER-396) Native resolver should retry on http 429

2023-09-04 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-396.
-
Resolution: Fixed

> Native resolver should retry on http 429
> 
>
> Key: MRESOLVER-396
> URL: https://issues.apache.org/jira/browse/MRESOLVER-396
> Project: Maven Resolver
>  Issue Type: Improvement
>Reporter: Chris Eldredge
>Priority: Minor
> Fix For: 1.9.16
>
>
> The Wagon http transport provider has custom logic to retry with exponential 
> backoff when putting an artifact to an http endpoint and getting a 429 Too 
> Many Requests response code from the server:
> [https://github.com/apache/maven-wagon/blob/wagon-3.5.3/wagon-providers/wagon-http-shared/src/main/java/org/apache/maven/wagon/shared/http/AbstractHttpClientWagon.java#L828]
> The newer "native" http transporter should provide similar retry logic. One 
> place this could go would be into 
> [HttpTransporter.implPut|https://github.com/apache/maven-resolver/blob/master/maven-resolver-transport-http/src/main/java/org/eclipse/aether/transport/http/HttpTransporter.java#L427].
> Ideally the transport could be configured to retry on specific error codes, 
> perhaps with 429 and 503 being defaults.
> The lack of retry support on 429s is exacerbated in Maven 3.9 because it 
> enables parallel put by default, which increases requests per second making 
> it more likely that a client would encounter rate limiting or other 
> throttling and overload scenarios.
>  
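A hedged sketch of the proposed retry behaviour (not the actual transporter code), assuming exponential backoff and treating 429 and 503 as the retryable defaults; constants and names are illustrative:

```java
import java.util.function.IntSupplier;

// Hedged sketch of the proposal (not the actual transporter code): retry a
// request that answers 429 or 503, sleeping with exponential backoff.
public class RetryOnTooManyRequests {
    static final int MAX_ATTEMPTS = 5;

    /** Runs the request, retrying on 429/503; returns the attempt that succeeded. */
    public static int put(IntSupplier request) throws InterruptedException {
        long delayMillis = 100; // doubled after every retryable response
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            int status = request.getAsInt();
            if (status != 429 && status != 503) {
                return attempt;
            }
            Thread.sleep(delayMillis);
            delayMillis *= 2;
        }
        throw new IllegalStateException("Still rate limited after " + MAX_ATTEMPTS + " attempts");
    }
}
```

A production version would also honor a server-supplied Retry-After header instead of the fixed doubling shown here.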



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (MRESOLVER-396) Native resolver should retry on http 429

2023-09-04 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-396:
-

Assignee: Tamas Cservenak

> Native resolver should retry on http 429
> 
>
> Key: MRESOLVER-396
> URL: https://issues.apache.org/jira/browse/MRESOLVER-396
> Project: Maven Resolver
>  Issue Type: Improvement
>Reporter: Chris Eldredge
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> The Wagon http transport provider has custom logic to retry with exponential 
> backoff when putting an artifact to an http endpoint and getting a 429 Too 
> Many Requests response code from the server:
> [https://github.com/apache/maven-wagon/blob/wagon-3.5.3/wagon-providers/wagon-http-shared/src/main/java/org/apache/maven/wagon/shared/http/AbstractHttpClientWagon.java#L828]
> The newer "native" http transporter should provide similar retry logic. One 
> place this could go would be into 
> [HttpTransporter.implPut|https://github.com/apache/maven-resolver/blob/master/maven-resolver-transport-http/src/main/java/org/eclipse/aether/transport/http/HttpTransporter.java#L427].
> Ideally the transport could be configured to retry on specific error codes, 
> perhaps with 429 and 503 being defaults.
> The lack of retry support on 429s is exacerbated in Maven 3.9 because it 
> enables parallel put by default, which increases requests per second making 
> it more likely that a client would encounter rate limiting or other 
> throttling and overload scenarios.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761829#comment-17761829
 ] 

Tamas Cservenak commented on MNG-7868:
--

And one more thing: 
https://maven.apache.org/resolver/maven-resolver-named-locks/index.html#diagnostic-collection-in-case-of-failures
On failure, please post the diag output here (be aware that, especially with 
big builds, diag will increase the heap requirement, so you may need to 
experiment).

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be windows related (at least it mainly happens on windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MRESOLVER-399) Update Redisson to 3.23.4

2023-09-06 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-399:
-

 Summary: Update Redisson to 3.23.4
 Key: MRESOLVER-399
 URL: https://issues.apache.org/jira/browse/MRESOLVER-399
 Project: Maven Resolver
  Issue Type: Dependency upgrade
  Components: Resolver
Reporter: Tamas Cservenak
 Fix For: 1.9.16






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (MRESOLVER-399) Update Redisson to 3.23.4

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-399:
-

Assignee: Tamas Cservenak

> Update Redisson to 3.23.4
> -
>
> Key: MRESOLVER-399
> URL: https://issues.apache.org/jira/browse/MRESOLVER-399
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.16
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MRESOLVER-398) Update Hazelcast to 5.3.2

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-398.
-
Resolution: Fixed

> Update Hazelcast to 5.3.2
> -
>
> Key: MRESOLVER-398
> URL: https://issues.apache.org/jira/browse/MRESOLVER-398
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.16
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MRESOLVER-399) Update Redisson to 3.23.4

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-399.
-
Resolution: Fixed

> Update Redisson to 3.23.4
> -
>
> Key: MRESOLVER-399
> URL: https://issues.apache.org/jira/browse/MRESOLVER-399
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.16
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MRESOLVER-397) Deprecate Guice modules

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-397:
--
Component/s: Resolver

> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (Guice, ServiceLocator), as we provided a 
> replacement in the form of the resolver provider module, so we provide:
> * sisu components (JSR330)
> * maven-resolver-provider MRESOLVER-387



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MRESOLVER-396) Native transport should retry on HTTP 429 (Retry-After)

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-396:
--
Component/s: Resolver

> Native transport should retry on HTTP 429 (Retry-After)
> ---
>
> Key: MRESOLVER-396
> URL: https://issues.apache.org/jira/browse/MRESOLVER-396
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Chris Eldredge
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> The Wagon http transport provider has custom logic to retry with exponential 
> backoff when putting an artifact to an http endpoint and getting a 429 Too 
> Many Requests response code from the server:
> [https://github.com/apache/maven-wagon/blob/wagon-3.5.3/wagon-providers/wagon-http-shared/src/main/java/org/apache/maven/wagon/shared/http/AbstractHttpClientWagon.java#L828]
> The newer "native" http transporter should provide similar retry logic. One 
> place this could go would be into 
> [HttpTransporter.implPut|https://github.com/apache/maven-resolver/blob/master/maven-resolver-transport-http/src/main/java/org/eclipse/aether/transport/http/HttpTransporter.java#L427].
> Ideally the transport could be configured to retry on specific error codes, 
> perhaps with 429 and 503 being defaults.
> The lack of retry support on 429s is exacerbated in Maven 3.9 because it 
> enables parallel put by default, which increases requests per second making 
> it more likely that a client would encounter rate limiting or other 
> throttling and overload scenarios.
>  
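The behaviour proposed above (retry on a configurable set of status codes, honoring a Retry-After header when the server sends one, exponential backoff otherwise) could be sketched as a small policy class. This is only an illustration of the proposal, not Wagon's or resolver's actual code; the class and method names are hypothetical.

```java
import java.util.Set;

// Sketch of the proposed retry policy: retryable status codes are
// configurable (429 and 503 as defaults), Retry-After wins over backoff.
class RetryPolicy {
    private static final Set<Integer> RETRYABLE = Set.of(429, 503);
    private final int maxRetries;
    private final long baseDelayMillis;

    RetryPolicy(int maxRetries, long baseDelayMillis) {
        this.maxRetries = maxRetries;
        this.baseDelayMillis = baseDelayMillis;
    }

    /** Whether a failed PUT should be retried at all. */
    boolean shouldRetry(int statusCode, int attempt) {
        return attempt < maxRetries && RETRYABLE.contains(statusCode);
    }

    /**
     * Delay before the next attempt: the server's Retry-After value
     * (in seconds) wins if present and numeric, otherwise exponential
     * backoff (base * 2^attempt).
     */
    long delayMillis(int attempt, String retryAfterHeader) {
        if (retryAfterHeader != null) {
            try {
                return Long.parseLong(retryAfterHeader.trim()) * 1000L;
            } catch (NumberFormatException ignored) {
                // Retry-After may also be an HTTP date; fall through to backoff
            }
        }
        return baseDelayMillis * (1L << attempt);
    }
}
```

A real implementation would additionally sleep for the computed delay between attempts and re-issue the PUT, as the Wagon code linked above does.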





[jira] [Updated] (MRESOLVER-344) Upgrade Maven to 3.9.4

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-344:
--
Summary: Upgrade Maven to 3.9.4  (was: Upgrade Maven to 3.9.1)

> Upgrade Maven to 3.9.4
> --
>
> Key: MRESOLVER-344
> URL: https://issues.apache.org/jira/browse/MRESOLVER-344
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Ant Tasks
>Reporter: Sylwester Lachiewicz
>Assignee: Sylwester Lachiewicz
>Priority: Major
> Fix For: ant-tasks-next
>
>






[jira] [Updated] (MRESOLVER-344) Upgrade Maven to 3.9.4

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-344:
--
Description: Upgrade to Maven 3.9.4 and Resolver 1.9.15

> Upgrade Maven to 3.9.4
> --
>
> Key: MRESOLVER-344
> URL: https://issues.apache.org/jira/browse/MRESOLVER-344
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Ant Tasks
>Reporter: Sylwester Lachiewicz
>Assignee: Sylwester Lachiewicz
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Upgrade to Maven 3.9.4 and Resolver 1.9.15





[jira] [Updated] (MRESOLVER-397) Deprecate Guice modules

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-397:
--
Description: 
So far resolver supported instantiation via:
* sisu components (JSR330) -- as used in Maven
* Guice module
* ServiceLocator

We should drop all non-major ones (guice, sl), as we provided a replacement in 
the form of the resolver provider module, so we provide:
* sisu components (JSR330)
* maven-resolver-provider

  was:
So far resolver supported instantiation via:
* sisu components (JSR330) -- as used in Maven
* Guice module
* ServiceLocator

We should drop all non-major ones (guice, sl), as we provided a replacement in 
the form of the resolver provider module.


> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (guice, sl), as we provided a replacement in 
> the form of the resolver provider module, so we provide:
> * sisu components (JSR330)
> * maven-resolver-provider





[jira] [Updated] (MNG-7859) Update to Resolver 1.9.16

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7859:
-
Fix Version/s: 4.0.0-alpha-8

> Update to Resolver 1.9.16
> -
>
> Key: MNG-7859
> URL: https://issues.apache.org/jira/browse/MNG-7859
> Project: Maven
>  Issue Type: Dependency upgrade
>  Components: Dependencies
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> When 1.9.15 is released, update to it.





[jira] [Created] (MNG-7872) Deprecate org.apache.maven.repository.internal.MavenResolverModule

2023-09-06 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MNG-7872:


 Summary: Deprecate 
org.apache.maven.repository.internal.MavenResolverModule
 Key: MNG-7872
 URL: https://issues.apache.org/jira/browse/MNG-7872
 Project: Maven
  Issue Type: Task
Reporter: Tamas Cservenak
 Fix For: 4.0.0-alpha-8, 3.9.5








[jira] [Closed] (MNG-7872) Deprecate org.apache.maven.repository.internal.MavenResolverModule

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7872.

Resolution: Fixed

Done as part of MNG-7856

> Deprecate org.apache.maven.repository.internal.MavenResolverModule
> --
>
> Key: MNG-7872
> URL: https://issues.apache.org/jira/browse/MNG-7872
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>






[jira] [Assigned] (MNG-7872) Deprecate org.apache.maven.repository.internal.MavenResolverModule

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MNG-7872:


Assignee: Tamas Cservenak

> Deprecate org.apache.maven.repository.internal.MavenResolverModule
> --
>
> Key: MNG-7872
> URL: https://issues.apache.org/jira/browse/MNG-7872
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>






[jira] [Updated] (MNG-7872) Deprecate org.apache.maven.repository.internal.MavenResolverModule

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7872:
-
Component/s: General

> Deprecate org.apache.maven.repository.internal.MavenResolverModule
> --
>
> Key: MNG-7872
> URL: https://issues.apache.org/jira/browse/MNG-7872
> Project: Maven
>  Issue Type: Task
>  Components: General
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>






[jira] [Closed] (MRESOLVER-397) Deprecate Guice modules

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-397.
-
Resolution: Fixed

> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (guice, sl), as we provided a replacement in 
> the form of the resolver provider module, so we provide:
> * sisu components (JSR330)
> * maven-resolver-provider MRESOLVER-387





[jira] [Assigned] (MRESOLVER-397) Deprecate Guice modules

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-397:
-

Assignee: Tamas Cservenak

> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (guice, sl), as we provided a replacement in 
> the form of the resolver provider module, so we provide:
> * sisu components (JSR330)
> * maven-resolver-provider MRESOLVER-387





[jira] [Created] (MRESOLVER-398) Update Hazelcast to 5.3.2

2023-09-06 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-398:
-

 Summary: Update Hazelcast to 5.3.2
 Key: MRESOLVER-398
 URL: https://issues.apache.org/jira/browse/MRESOLVER-398
 Project: Maven Resolver
  Issue Type: Dependency upgrade
  Components: Resolver
Reporter: Tamas Cservenak
 Fix For: 1.9.16








[jira] [Assigned] (MRESOLVER-398) Update Hazelcast to 5.3.2

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-398:
-

Assignee: Tamas Cservenak

> Update Hazelcast to 5.3.2
> -
>
> Key: MRESOLVER-398
> URL: https://issues.apache.org/jira/browse/MRESOLVER-398
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.16
>
>






[jira] [Assigned] (MNG-7870) Undeprecate wrongly deprecated repository metadata

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MNG-7870:


Assignee: Tamas Cservenak

> Undeprecate wrongly deprecated repository metadata
> --
>
> Key: MNG-7870
> URL: https://issues.apache.org/jira/browse/MNG-7870
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> In commit 
> https://github.com/apache/maven/commit/1af8513fa7512cf25022b249cae0f84062c5085b
>  related to MNG-7385 the modello G level metadata was deprecated (by mistake 
> I assume).
> Undo this.





[jira] [Updated] (MRESOLVER-396) Native transport should retry on HTTP 429 (Retry-After)

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-396:
--
Summary: Native transport should retry on HTTP 429 (Retry-After)  (was: 
Native resolver should retry on http 429)

> Native transport should retry on HTTP 429 (Retry-After)
> ---
>
> Key: MRESOLVER-396
> URL: https://issues.apache.org/jira/browse/MRESOLVER-396
> Project: Maven Resolver
>  Issue Type: Improvement
>Reporter: Chris Eldredge
>Assignee: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> The Wagon http transport provider has custom logic to retry with exponential 
> backoff when putting an artifact to an http endpoint and getting a 429 Too 
> Many Requests response code from the server:
> [https://github.com/apache/maven-wagon/blob/wagon-3.5.3/wagon-providers/wagon-http-shared/src/main/java/org/apache/maven/wagon/shared/http/AbstractHttpClientWagon.java#L828]
> The newer "native" http transporter should provide similar retry logic. One 
> place this could go would be into 
> [HttpTransporter.implPut|https://github.com/apache/maven-resolver/blob/master/maven-resolver-transport-http/src/main/java/org/eclipse/aether/transport/http/HttpTransporter.java#L427].
> Ideally the transport could be configured to retry on specific error codes, 
> perhaps with 429 and 503 being defaults.
> The lack of retry support on 429s is exacerbated in Maven 3.9 because it 
> enables parallel put by default, which increases requests per second making 
> it more likely that a client would encounter rate limiting or other 
> throttling and overload scenarios.
>  





[jira] [Updated] (MRESOLVER-397) Deprecate Guice modules

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-397:
--
Description: 
So far resolver supported instantiation via:
* sisu components (JSR330) -- as used in Maven
* Guice module
* ServiceLocator

We should drop all non-major ones (guice, sl), as we provided a replacement in 
the form of the resolver provider module, so we provide:
* sisu components (JSR330)
* maven-resolver-provider MRESOLVER-387

  was:
So far resolver supported instantiation via:
* sisu components (JSR330) -- as used in Maven
* Guice module
* ServiceLocator

We should drop all non-major ones (guice, sl), as we provided a replacement in 
the form of the resolver provider module, so we provide:
* sisu components (JSR330)
* maven-resolver-provider


> Deprecate Guice modules
> ---
>
> Key: MRESOLVER-397
> URL: https://issues.apache.org/jira/browse/MRESOLVER-397
> Project: Maven Resolver
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Minor
> Fix For: 1.9.16
>
>
> So far resolver supported instantiation via:
> * sisu components (JSR330) -- as used in Maven
> * Guice module
> * ServiceLocator
> We should drop all non-major ones (guice, sl), as we provided a replacement in 
> the form of the resolver provider module, so we provide:
> * sisu components (JSR330)
> * maven-resolver-provider MRESOLVER-387





[jira] [Commented] (MRESOLVER-161) Resolve circular dependency resolver -> maven -> resolver

2023-09-10 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763422#comment-17763422
 ] 

Tamas Cservenak commented on MRESOLVER-161:
---

Yup, close. Solution for this would be to lump resolver into maven itself...

> Resolve circular dependency resolver -> maven -> resolver
> -
>
> Key: MRESOLVER-161
> URL: https://issues.apache.org/jira/browse/MRESOLVER-161
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Resolver
>Reporter: Tamas Cservenak
>Priority: Major
>
> Apache Maven Resolver has a module {{maven-resolver-demo-snippets}} that in 
> turn depends on Maven's {{maven-resolver-provider}}, which naturally depends on 
> Maven Resolver.
> Since MRESOLVER-154 is implemented, the "demo snippets" do not work: 
> maven-resolver has a class moved (introducing binary incompatibility), while 
> maven-resolver-provider expects it in its old place. All in all, this 
> "cycle" is actually bad, as the same issue will hit us with the removal of the 
> service locator as well (MRESOLVER-157).
> Proposal: move the "resolver demos" out of the resolver project completely, as 
> they need Maven to make resolver usable with Maven metadata/POMs, but due to 
> this cycle everything is set in concrete and harder to change. Or, any 
> other idea?
> Rationale:
> - "resolver demos" are just that: a showcase of how to use resolver (with 
> Maven), but alas, maven-resolver is incomplete in this respect (as it lacks 
> the Maven models, model builder, etc.), so the cycle is here due to those bits. 
> So, imo a separate project/repo is most probably justified for it, as they are 
> NOT executed/run during the maven-resolver build anyway (the fact they are 
> broken, see MRESOLVER-162, was discovered by manually running them).





[jira] [Commented] (MRESOLVER-146) method to get artifact publication date

2023-09-10 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763423#comment-17763423
 ] 

Tamas Cservenak commented on MRESOLVER-146:
---

Close it please

> method to get artifact publication date
> ---
>
> Key: MRESOLVER-146
> URL: https://issues.apache.org/jira/browse/MRESOLVER-146
> Project: Maven Resolver
>  Issue Type: New Feature
>  Components: Resolver
>Reporter: Th Stock
>Priority: Major
>
> I'm looking for a method to get the date of artifact publication in 
> org.eclipse.aether.RepositorySystem
> e.g. 
> "org.apache.maven.resolver" % "maven-resolver-api" % "1.6.1" => 2020-10-02 
> 18:05
> https://repo1.maven.org/maven2/org/apache/maven/resolver/maven-resolver-api/1.6.1/





[jira] [Created] (MRESOLVER-402) Properly expose resolver configuration

2023-09-07 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-402:
-

 Summary: Properly expose resolver configuration
 Key: MRESOLVER-402
 URL: https://issues.apache.org/jira/browse/MRESOLVER-402
 Project: Maven Resolver
  Issue Type: Improvement
  Components: Ant Tasks
Reporter: Tamas Cservenak
 Fix For: ant-tasks-next


Make resolver fully configurable via Ant user properties or Java System 
Properties (with precedence correctly applied).
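The precedence rule described above could be sketched roughly like this. This is NOT the Ant Tasks code; the helper class and method are hypothetical, and only illustrate "Ant user property wins over Java system property, which wins over the default" (the actual keys would be resolver's {{aether.*}} configuration properties).

```java
import java.util.Map;

// Hypothetical helper illustrating the intended lookup precedence.
class ResolverConfig {
    static String get(Map<String, String> antUserProperties, String key, String defaultValue) {
        String value = antUserProperties.get(key);   // highest precedence: Ant user properties
        if (value == null) {
            value = System.getProperty(key);         // then JVM system properties
        }
        return value != null ? value : defaultValue; // finally the built-in default
    }
}
```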





[jira] [Assigned] (MRESOLVER-402) Properly expose resolver configuration

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-402:
-

Assignee: Tamas Cservenak

> Properly expose resolver configuration
> --
>
> Key: MRESOLVER-402
> URL: https://issues.apache.org/jira/browse/MRESOLVER-402
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Make resolver fully configurable via Ant user properties or Java System 
> Properties (with precedence correctly applied).





[jira] [Closed] (MRESOLVER-403) Support depMgt for transitive dependencies

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-403.
-
Resolution: Fixed

> Support depMgt for transitive dependencies
> --
>
> Key: MRESOLVER-403
> URL: https://issues.apache.org/jira/browse/MRESOLVER-403
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> The depMgt section was completely ignored; make it work.





[jira] [Closed] (MRESOLVER-402) Properly expose resolver configuration

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-402.
-
Resolution: Fixed

> Properly expose resolver configuration
> --
>
> Key: MRESOLVER-402
> URL: https://issues.apache.org/jira/browse/MRESOLVER-402
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Make resolver fully configurable via Ant user properties or Java System 
> Properties (with precedence correctly applied).





[jira] [Assigned] (MRESOLVER-403) Support depMgt for transitive dependencies

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-403:
-

Assignee: Tamas Cservenak

> Support depMgt for transitive dependencies
> --
>
> Key: MRESOLVER-403
> URL: https://issues.apache.org/jira/browse/MRESOLVER-403
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> The depMgt section was completely ignored; make it work.





[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 8:53 AM:
--

Ideas to try IF "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the parent 
POM, directly referenced + depMgt, etc.; all that matters is that the effective 
POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea; the goal is really to "prime" the local repo and let all the modules enter 
the "happy path"): 
* introduce a new module in the reactor, like "common-libs", make that module 
depend on slf4j-api, and make all the existing modules depend on common-libs. 
This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs is 
built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (a shared lock, as the local repo 
is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion for each resolution, i.e. both contexts may resolve 10 
artifacts with only one overlapping -- the "hot" one), but this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
 


was (Author: cstamas):
Ideas to try IF "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the parent 
POM, directly referenced + depMgt, etc.; all that matters is that the effective 
POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea; the goal is really to "prime" the local repo and let all the modules enter 
the "happy path"): 
* introduce a new module in the reactor, like "common-libs", make that module 
depend on slf4j-api, and make all the existing modules depend on common-libs 
instead of slf4j-api directly. This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs is 
built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (a shared lock, as the local repo 
is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion for each resolution, i.e. both contexts may resolve 10 
artifacts with only one overlapping -- the "hot" one), but this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin; it also happens from other 
> spots in Maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> 
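The "happy path" in idea 1 above hinges on shared vs exclusive locking per artifact: once common-libs has primed the local repository, the remaining modules only need shared access, and only a download/install needs exclusivity. A simplified, in-process sketch of that semantics follows; the real implementations live in resolver's named-locks module, and the class and method names here are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// In-process analogue of per-artifact named locks: shared (read) access for
// an already-primed artifact, exclusive (write) access to install/download.
class ArtifactLocks {
    private final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String artifactKey) {
        return locks.computeIfAbsent(artifactKey, k -> new ReentrantReadWriteLock());
    }

    /** Shared access: many threads may read an already-primed artifact. */
    boolean acquireShared(String artifactKey, long timeout, TimeUnit unit) {
        try {
            return lockFor(artifactKey).readLock().tryLock(timeout, unit);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    /** Exclusive access: only one thread may download/install an artifact. */
    boolean acquireExclusive(String artifactKey, long timeout, TimeUnit unit) {
        try {
            return lockFor(artifactKey).writeLock().tryLock(timeout, unit);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    void releaseShared(String artifactKey) {
        lockFor(artifactKey).readLock().unlock();
    }

    void releaseExclusive(String artifactKey) {
        lockFor(artifactKey).writeLock().unlock();
    }
}
```

The timeout-and-fail behaviour of `tryLock` is also what surfaces as the "Could not acquire lock(s)" error when an exclusive holder keeps a "hot" artifact locked longer than the waiters' timeout.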

[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 8:53 AM:
--

Ideas to try IF "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the parent 
POM, directly referenced + depMgt, etc.; all that matters is that the effective 
POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea; the goal is really to "prime" the local repo and let all the modules enter 
the "happy path"): 
* introduce a new module in the reactor, like "common-libs", make that module 
depend on slf4j-api, and make all the existing modules depend on common-libs. 
This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER common-libs 
is built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (a shared lock, as the local repo 
is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion for each resolution, i.e. both contexts may resolve 10 
artifacts with only one overlapping -- the "hot" one), but this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
 


was (Author: cstamas):
Ideas to try IF "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the parent 
POM, directly referenced + depMgt, etc.; all that matters is that the effective 
POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea; the goal is really to "prime" the local repo and let all the modules enter 
the "happy path"): 
* introduce a new module in the reactor, like "common-libs", make that module 
depend on slf4j-api, and make all the existing modules depend on common-libs. 
This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs is 
built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (a shared lock, as the local repo 
is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion for each resolution, i.e. both contexts may resolve 10 
artifacts with only one overlapping -- the "hot" one), but this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin; it also happens from other 
> spots in Maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> 

[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (for today's state, see below) the Hazelcast NamedLock implementation 
worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just got, 
if it exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore was destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and does not destroy the DO

The reason for this is historical: originally, the named locks precursor code 
was done for Hazelcast 2/3, which used "unreliable" distributed operations, and 
recreating a previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x updated to the RAFT consensus algorithm and made things 
reliable, it came at the cost that DOs, once created and then destroyed, could 
not be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster 
accumulates more and more ISemaphore DOs (basically one per artifact met by all 
the builds that use this cluster to coordinate). The number of artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow beyond any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and to use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that runs 
as a cluster node, so it puts the burden onto the JVM process hosting it, hence 
onto Maven as well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and map 
them onto the lock names in use at the moment (reusing unused semaphores). The 
problem is that this mapping would need to be distributed as well (so all clients 
pick mappings up, or perform new ones), and that may carry a performance penalty. 
Only exhaustive perf testing could prove this either way.
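The proposed indirection could look roughly like the following single-JVM sketch (hypothetical, not resolver code; in the real case the name-to-slot mapping itself would have to live in a distributed structure shared by all clients, which is exactly where the suspected performance penalty comes from):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical indirection sketch: lock names are mapped onto a reusable pool
// of semaphore "slot" names, so the number of distributed objects is bounded by
// the number of names in use concurrently, not by the number of names ever seen.
class IndirectSlotMapper {
    private final Map<String, String> nameToSlot = new HashMap<>();
    private final Map<String, Integer> slotRefCount = new HashMap<>();
    private final Deque<String> freeSlots = new ArrayDeque<>();
    private int nextSlotId;

    synchronized String slotFor(String lockName) {
        String slot = nameToSlot.get(lockName);
        if (slot == null) {
            // Reuse a freed slot if one exists; otherwise mint a new one.
            slot = freeSlots.isEmpty() ? "slot-" + nextSlotId++ : freeSlots.pop();
            nameToSlot.put(lockName, slot);
        }
        slotRefCount.merge(slot, 1, Integer::sum);
        return slot;
    }

    synchronized void releaseSlot(String lockName) {
        String slot = nameToSlot.get(lockName);
        if (slot != null && slotRefCount.merge(slot, -1, Integer::sum) == 0) {
            slotRefCount.remove(slot);
            nameToSlot.remove(lockName);
            freeSlots.push(slot); // the slot (and its ISemaphore) becomes reusable
        }
    }

    synchronized int slotsEverCreated() {
        return nextSlotId;
    }
}
```

With this scheme, a slot freed by one artifact is handed to the next one, so the DO count tracks peak concurrent use rather than total history.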

The benefit would be obvious: today the cluster holds as many ISemaphores as 
there are artifacts met by all the builds using the cluster since its boot. With 
indirection, the number of DOs would be lowered to the "maximum concurrently 
used": if you have a large build farm able to juggle 1000 artifacts at any one 
moment, your cluster would hold 1000 ISemaphores.

Still, with proper "segmenting" of the clusters -- for example splitting them by 
"related" job groups, so that the artifacts coming through each remain within 
somewhat limited boundaries -- or with some automation for regular cluster 
reboots, or simply by creating "huge enough" clusters, users may never hit these 
issues (cluster OOM).

Also, the current code is most probably the fastest solution, so I created this 
issue only to have the topic documented; I plan no further work on it.

  was:
Originally (for today, see below) Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO with lock name is created (or just get, if 
exists), is refCounted
* on lock release, if refCount shows 0 = uses, ISemaphore was destroyed 
(releasing HZ cluster resources)
* if after some time, a new lock acquire happened for same name, ISemaphore DO 
would get re-created.

Today, HZ NamedLocks implementation works in following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
Distributed Object (DO) name and does not destroys the DO

Reason for this is historical: originally, named locks precursor code was done 
for Hazelcast 2/3, that used "unreliable" distributed operations, and 
recreating previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
reliable, it was at the cost that DOs once created, then destroyed, could not 
be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (release semaphore is no-op method).

But, this has an important consequence: a long running Hazelcast cluster will 
have more and more ISemaphore DOs (basically as many as many Artifacts all the 
builds met, that use this cluster to coordinate). 

[jira] [Updated] (MRESOLVER-404) New strategy may be needed for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Summary: New strategy may be needed for Hazelcast named locks  (was: New 
strategy for Hazelcast named locks)

> New strategy may be needed for Hazelcast named locks
> 
>
> Key: MRESOLVER-404
> URL: https://issues.apache.org/jira/browse/MRESOLVER-404
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Priority: Major
>
> Originally (for today, see below) Hazelcast NamedLock implementation worked 
> like this:
> * on lock acquire, an ISemaphore DO with lock name is created (or just get, 
> if exists), is refCounted
> * on lock release, if refCount shows 0 = uses, ISemaphore was destroyed 
> (releasing HZ cluster resources)
> * if after some time, a new lock acquire happened for same name, ISemaphore 
> DO would get re-created.
> Today, HZ NamedLocks implementation works in following way:
> * there is only one Semaphore provider implementation, the 
> {{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
> Distributed Object (DO) name and does not destroys the DO
> Reason for this is historical: originally, named locks precursor code was 
> done for Hazelcast 2/3, that used "unreliable" distributed operations, and 
> recreating previously destroyed DO was possible (at the cost of 
> "unreliability").
> Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
> reliable, it was at the cost that DOs once created, then destroyed, could not 
> be recreated anymore. This change was applied to 
> {{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
> ISemaphores (release semaphore is no-op method).
> But, this has an important consequence: a long running Hazelcast cluster will 
> have more and more ISemaphore DOs (basically as many as many Artifacts all 
> the builds met, that use this cluster to coordinate). Artifacts count 
> existing out there is not infinite, but is large enough -- especially if 
> cluster shared across many different/unrelated builds -- to grow over sane 
> limit.
> So, current recommendation is to have "large enough" dedicated Hazelcast 
> cluster and  use {{semaphore-hazelcast-client}} (that is a "thin client" that 
> connects to cluster), instead of {{semaphore-hazelcast}} (that is "thick 
> client", so puts burden onto JVM process running it as node, hence Maven as 
> well). But even then, regular reboot of cluster may be needed.
> A proper but somewhat complicated solution would be to introduce some sort of 
> indirection: create as many ISemaphore as needed at the moment, and map those 
> onto locks names in use at the moment (and reuse unused semaphores). Problem 
> is, that mapping would need to be distributed as well (so all clients pick 
> them up, or perform new mapping), and this may cause performance penalty. But 
> this could be proved by exhaustive perf testing only.
> The benefit would be obvious: today cluster holds as many ISemaphores as many 
> Artifacts were met by all the builds, that use given cluster since cluster 
> boot. With indirection, the number of DOs would lowered to "maximum 
> concurrently used", so if you have a large build farm, that is able to juggle 
> with 1000 artifacts at given one moment, your cluster would have 1000 
> ISemaphores.
> Still, with proper "segmenting" of the clusters, for example to have them 
> split for "related" job groups, hence, the Artifacts coming thru them would 
> remain somewhat within limited boundaries, or some automation for "cluster 
> regular reboot", or simply just create "huge enough" clusters, may make users 
> benefit of never hitting these issues (cluster OOM). 
> And current code is most probably the fastest solution, hence, I just created 
> this issue to have it documented, but i plan no meritorious work on this 
> topic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762630#comment-17762630
 ] 

Tamas Cservenak commented on MNG-7868:
--

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module depends 
on slf4j-api (HOW it depends does not matter: inherited from the parent POM, 
directly referenced plus depMgt, etc.; all that matters is that the effective POM 
of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the modules 
enter the "happy path"): 
* introduce a new module in the reactor, say "common-libs", make that one module 
depend on slf4j-api, and make all the existing modules depend on common-libs 
instead of slf4j-api directly. This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER common-libs is 
built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (shared lock, as the local repo is 
primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion on each resolution, i.e. two contexts may each resolve 10 
artifacts with only one overlapping -- the "hot" one). But this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
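The coarseness problem in idea No 2 can be illustrated with a tiny model (not resolver code): because a sync context acquires locks for ALL artifacts it resolves, a single shared "hot" artifact is enough to serialize two otherwise disjoint resolutions:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Illustrative model of SyncContext coarseness: two contexts contend as soon as
// their lock-name sets overlap at all, even on a single "hot" artifact.
class CoarsenessDemo {
    static boolean contend(List<String> contextA, List<String> contextB) {
        Set<String> shared = new TreeSet<>(contextA);
        shared.retainAll(new TreeSet<>(contextB));
        return !shared.isEmpty(); // any shared lock name serializes both contexts
    }
}
```

A finer-grained scheme would let the nine non-overlapping artifacts of each context proceed in parallel and contend only on the hot one.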
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377
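As a stop-gap while the contention itself is investigated, the named-lock wait time and retry count are configurable; a sketch of raising them (property names per the Maven Resolver 1.9.x named locks documentation; the values shown are illustrative, the defaults being 30 SECONDS and 1 attempt, matching the "Attempt 1 ... in 30 SECONDS" message above):

```shell
# Wait up to 90 seconds per lock and retry 3 times before failing
# with "Could not acquire lock(s)".
mvn -T 1C install \
  -Daether.syncContext.named.time=90 \
  -Daether.syncContext.named.time.unit=SECONDS \
  -Daether.syncContext.named.retry=3
```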



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MRESOLVER-344) Upgrade Maven to 3.9.4, Resolver 1.9.15

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-344:
--
Summary: Upgrade Maven to 3.9.4, Resolver 1.9.15  (was: Upgrade Maven to 
3.9.4)

> Upgrade Maven to 3.9.4, Resolver 1.9.15
> ---
>
> Key: MRESOLVER-344
> URL: https://issues.apache.org/jira/browse/MRESOLVER-344
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Ant Tasks
>Reporter: Sylwester Lachiewicz
>Assignee: Sylwester Lachiewicz
>Priority: Major
> Fix For: ant-tasks-1.5.0
>
>
> Upgrade to Maven 3.9.4 and Resolver 1.9.15



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MRESOLVER-401) Drop use of SL and deprecated stuff, up version to 1.5.0

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-401:
--
Summary: Drop use of SL and deprecated stuff, up version to 1.5.0  (was: 
Drop use of SL, up version to 1.5.0)

> Drop use of SL and deprecated stuff, up version to 1.5.0
> 
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-1.5.0
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MRESOLVER-403) Support depMgt for transitive dependencies

2023-09-07 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-403:
-

 Summary: Support depMgt for transitive dependencies
 Key: MRESOLVER-403
 URL: https://issues.apache.org/jira/browse/MRESOLVER-403
 Project: Maven Resolver
  Issue Type: Improvement
  Components: Ant Tasks
Reporter: Tamas Cservenak
 Fix For: ant-tasks-next


The depMgt section was completely ignored, make it work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762653#comment-17762653
 ] 

Tamas Cservenak commented on MNG-7868:
--

And finally one more remark: IF my theory of "hot artifacts" is right, a 
"properly big" project (or one having some attribute, like layout, that I am not 
yet aware of) should always manifest this, regardless of the lock implementation 
used (so it should happen with JVM-local locking and file locking, but also with 
Hazelcast/Redisson). This is somewhat confirmed on the mvnd issue.

I have to add, as was also stated on the mvnd issue, that Windows FS locking most 
probably just adds another layer of "uncertainty": IMHO the Windows FS is NOT a 
prerequisite for this bug, but it may just exacerbate it.

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MNG-7870) Undeprecate wrongly deprecated repository metadata

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7870.

Resolution: Fixed

maven-3.9.x: 
https://github.com/apache/maven/commit/84ee422e65e86fc866c94dec162311b46a27f187
master: 
https://github.com/apache/maven/commit/1c050eee7bc21b5b6ea3d774ace255eba85e20e2

> Undeprecate wrongly deprecated repository metadata
> --
>
> Key: MNG-7870
> URL: https://issues.apache.org/jira/browse/MNG-7870
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> In commit 
> https://github.com/apache/maven/commit/1af8513fa7512cf25022b249cae0f84062c5085b
>  related to MNG-7385 the modello G level metadata was deprecated (by mistake 
> I assume).
> Undo this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 11:10 AM:
---

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module depends 
on slf4j-api (HOW it depends does not matter: inherited from the parent POM, 
directly referenced plus depMgt, etc.; all that matters is that the effective POM 
of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the modules 
enter the "happy path"): 
* introduce a new module in the reactor, say "common-libs", make that one module 
depend on slf4j-api, and make all the existing modules depend on common-libs. 
This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER common-libs is 
built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (shared lock, as the local repo is 
primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion on each resolution, i.e. two contexts may each resolve 10 
artifacts with only one overlapping -- the "hot" one). But this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
 
EDIT: Rethinking the No 1 "build refactor", it will not help... so forget it. Due 
to the nature of SyncContext, if you originally had M1(slf4j-api, lib-a) and 
M2(slf4j-api, lib-b), then even with the refactoring above making slf4j-api 
"primed", the sync contexts will still become mutually exclusive as they need the 
lib-a and lib-b downloads, and the sync context applies to ALL artifacts...


was (Author: cstamas):
Ideas to try IF "hot artifacts" theory proves right:
For example assume that there is a big multi module build where each module 
depend on slf4j-api (HOW it depends does not matter, inherited from parent POM, 
directly refd + depMgt, etc, all that matters that effective POM of each module 
contains direct dependency on slf4j-api).

1. Refactor the build in following way (very simplified, but only to get the 
idea, the goal is really to "prime" local repo and let all the modules enter 
the "happy path"): 
* introduce new module in reactor like "common-libs", make that one module 
depend on slf4j-api, and all the existing module depend on common-libs. This 
will cause following:
* all existing modules will be scheduled by smart builder AFTER common-libs is 
built
* common-libs will possibly use exclusive lock to get slf4j-api, while all the 
others modules will end up on "happy path" (shared lock, as local repo is 
primed)

2. OR Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion for each resolution, ie. both context may resolve 10 
artifacts with only one overlapping -- the "hot" one), but this is not gonna 
happen in Maven 3.x/Resolver 1.x timeframe, but most probably in Resolver 2.x 
that is to be picked up by Maven 4.x.
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the 

[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 11:10 AM:
---

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module depends 
on slf4j-api (HOW it depends does not matter: inherited from the parent POM, 
directly referenced plus depMgt, etc.; all that matters is that the effective POM 
of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the modules 
enter the "happy path"): 
* introduce a new module in the reactor, say "common-libs", make that one module 
depend on slf4j-api, and make all the existing modules depend on common-libs. 
This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER common-libs is 
built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all the 
other modules will end up on the "happy path" (shared lock, as the local repo is 
primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion on each resolution, i.e. two contexts may each resolve 10 
artifacts with only one overlapping -- the "hot" one). But this is not going to 
happen in the Maven 3.x/Resolver 1.x timeframe; most probably in Resolver 2.x, 
which is to be picked up by Maven 4.x.
 
EDIT: Rethinking the No 1 "build refactor", it will not help... so forget it. Due 
to the nature of SyncContext, if you originally had M1(slf4j-api, lib-a) and 
M2(slf4j-api, lib-b), then even with the refactoring above making slf4j-api 
"primed", the sync contexts will still become mutually exclusive as they need the 
lib-a and lib-b downloads, and the sync context applies to ALL artifacts...


was (Author: cstamas):
Ideas to try IF "hot artifacts" theory proves right:
For example assume that there is a big multi module build where each module 
depend on slf4j-api (HOW it depends does not matter, inherited from parent POM, 
directly refd + depMgt, etc, all that matters that effective POM of each module 
contains direct dependency on slf4j-api).

1. Refactor the build in following way (very simplified, but only to get the 
idea, the goal is really to "prime" local repo and let all the modules enter 
the "happy path"): 
* introduce new module in reactor like "common-libs", make that one module 
depend on slf4j-api, and all the existing module depend on common-libs. This 
will cause following:
* all existing modules will be scheduled by smart builder AFTER common-libs is 
built
* common-libs will possibly use exclusive lock to get slf4j-api, while all the 
others modules will end up on "happy path" (shared lock, as local repo is 
primed)

2. OR Resolver code change to reduce the "coarseness" of SyncContext (currently 
it causes exclusion for each resolution, ie. both context may resolve 10 
artifacts with only one overlapping -- the "hot" one), but this is not gonna 
happen in Maven 3.x/Resolver 1.x timeframe, but most probably in Resolver 2.x 
that is to be picked up by Maven 4.x.
 
EDIT: Rethinking No 1 "build refactor", it will not help... so forget it. Due 
to nature of SyncContext, if you had original M1(slf4j-api, lib-a) and 
M2(slf4j-apim lib-b), and with refactoring above slf4j-api will become 
"primed", the sync context will still become mutually exclusive assuming then 
need lib-a and lib-b download, and sync context applies to ALL artifacts...

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with 

[jira] [Created] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-404:
-

 Summary: New strategy for Hazelcast named locks
 Key: MRESOLVER-404
 URL: https://issues.apache.org/jira/browse/MRESOLVER-404
 Project: Maven Resolver
  Issue Type: Improvement
  Components: Resolver
Reporter: Tamas Cservenak


Originally (and even today, but see below) the Hazelcast NamedLock implementation 
worked like this:
* on lock acquire, an ISemaphore DO is created (or just looked up, if it exists) 
and is refCounted
* on lock release, if the refCount drops to 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happens for the same name, the ISemaphore 
gets re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps a lock name 1:1 onto an ISemaphore 
Distributed Object (DO) name and never destroys the DO

The reason for this is historical: originally, the named locks precursor code was 
written for Hazelcast 2/3, which used "unreliable" distributed operations, so 
recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Hazelcast 4.x moved to the Raft consensus algorithm and made things reliable, but 
at the cost that a DO, once created and then destroyed, can no longer be 
recreated. This change was applied to {{DirectHazelcastSemaphoreProvider}} as 
well, by simply never dropping unused ISemaphores (the release-semaphore method 
is a no-op).

But this has an important consequence: a long-running Hazelcast cluster 
accumulates more and more ISemaphore DOs (basically one per artifact met by all 
the builds that use this cluster to coordinate). The number of artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow beyond any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and to use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that runs 
as a cluster node, so it puts the burden onto the JVM process hosting it, hence 
onto Maven as well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and map 
them onto the lock names in use at the moment (reusing unused semaphores). The 
problem is that this mapping would need to be distributed as well (so all clients 
pick mappings up, or perform new ones), and that may carry a performance penalty. 
Only exhaustive perf testing could prove this either way.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MRESOLVER-401) Drop use of SL, up version to 1.5.0

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-401.
-
Resolution: Fixed

> Drop use of SL, up version to 1.5.0
> ---
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (MRESOLVER-401) Drop use of SL, up version to 1.5.0

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-401:
-

Assignee: Tamas Cservenak

> Drop use of SL, up version to 1.5.0
> ---
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).





[jira] [Assigned] (MRESOLVER-400) Update to parent POM 40, reformat

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-400:
-

Assignee: Tamas Cservenak

> Update to parent POM 40, reformat
> -
>
> Key: MRESOLVER-400
> URL: https://issues.apache.org/jira/browse/MRESOLVER-400
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Update parent to POM 40, reformat sources.





[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (not today; see below) the Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO named after the lock is created (or just 
fetched, if it already exists) and refCounted
* on lock release, once the refCount drops to 0 uses, the ISemaphore is 
destroyed (releasing HZ cluster resources)
* if, after some time, a new lock acquire happens for the same name, the 
ISemaphore DO gets re-created.

Today, the HZ NamedLocks implementation works as follows:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps a lock name 1:1 onto an 
ISemaphore Distributed Object (DO) name and never destroys the DO

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, but at the cost that a DO, once created and then destroyed, can no 
longer be recreated. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, simply by never dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow past any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that makes 
the JVM process running it a cluster node, putting that burden onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat more complicated solution would be to introduce some sort 
of indirection: create only as many ISemaphores as are needed at any given 
moment, and map them onto the lock names currently in use (reusing semaphores 
that become free). The problem is that this mapping would need to be 
distributed as well (so all clients pick it up, or perform a new mapping), and 
this may cause a performance penalty. Only exhaustive performance testing could 
prove this.

  was:
Originally (and even today, but see below) Hazelcast NamedLock implementation 
worked like this:
* on lock acquire, an ISemaphore DO is created (or just get if exists), is 
refCounted
* on lock release, if refCount shows = uses, ISemaphore was destroyed 
(releasing HZ cluster resources)
* if after some time, a new lock acquire happened for same name, ISemaphore 
would get re-created.

Today, HZ NamedLocks implementation works in following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
Distribute Object (DO) name and does not destroys the DO

Reason for this is historical: originally, named locks precursor code was done 
for Hazelcast 2/3, that used "unreliable" distributed operations, and 
recreating previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
reliable, it was at the cost that DOs once created, then destroyed, could not 
be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (release semaphore is no-op method).

But, this has an important consequence: a long running Hazelcast cluster will 
have more and more ISemaphore DOs (basically as many as many Artifacts all the 
builds that use this cluster to coordinate). Artifacts count existing out there 
is not infinite, but is large enough -- especially if cluster shared across 
many different/unrelated builds -- to grow over sane limit.

So, current recommendation is to have "large enough" dedicated Hazelcast 
cluster and  use {{semaphore-hazelcast-client}} (that is a "thin client" that 
connects to cluster), instead of {{semaphore-hazelcast}} (that is "thick 
client", so puts burden onto JVM process running it as node, hence Maven as 
well). But even then, regular reboot of cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create as many ISemaphore as needed at the moment, and map those 
onto locks names in use at the moment (and reuse unused semaphores). Problem 
is, that mapping would need to be distributed as well (so all clients pick them 
up, or perform new mapping), and this may cause performance penalty. But 

[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (not today; see below) the Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO named after the lock is created (or just 
fetched, if it already exists) and refCounted
* on lock release, once the refCount drops to 0 uses, the ISemaphore is 
destroyed (releasing HZ cluster resources)
* if, after some time, a new lock acquire happens for the same name, the 
ISemaphore DO gets re-created.

Today, the HZ NamedLocks implementation works as follows:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps a lock name 1:1 onto an 
ISemaphore Distributed Object (DO) name and never destroys the DO

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, but at the cost that a DO, once created and then destroyed, can no 
longer be recreated. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, simply by never dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow past any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that makes 
the JVM process running it a cluster node, putting that burden onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat more complicated solution would be to introduce some sort 
of indirection: create only as many ISemaphores as are needed at any given 
moment, and map them onto the lock names currently in use (reusing semaphores 
that become free). The problem is that this mapping would need to be 
distributed as well (so all clients pick it up, or perform a new mapping), and 
this may cause a performance penalty. Only exhaustive performance testing could 
prove this.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
Artifacts met by all the builds that have used it since cluster boot. With the 
mapping, the number of DOs would be lowered to the "maximum at any one moment" 
(so if you have a large build farm that juggles 1000 artifacts at one moment, 
you'd have 1000).

  was:
Originally (for today, see below) Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO with lock name is created (or just get, if 
exists), is refCounted
* on lock release, if refCount shows 0 = uses, ISemaphore was destroyed 
(releasing HZ cluster resources)
* if after some time, a new lock acquire happened for same name, ISemaphore DO 
would get re-created.

Today, HZ NamedLocks implementation works in following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
Distribute Object (DO) name and does not destroys the DO

Reason for this is historical: originally, named locks precursor code was done 
for Hazelcast 2/3, that used "unreliable" distributed operations, and 
recreating previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
reliable, it was at the cost that DOs once created, then destroyed, could not 
be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (release semaphore is no-op method).

But, this has an important consequence: a long running Hazelcast cluster will 
have more and more ISemaphore DOs (basically as many as many Artifacts all the 
builds met, that use this cluster to coordinate). Artifacts count existing out 
there is not infinite, but is large enough -- especially if cluster shared 
across many different/unrelated builds -- to grow over sane limit.

So, current recommendation is to have "large enough" dedicated Hazelcast 
cluster and  use {{semaphore-hazelcast-client}} (that is a "thin client" that 
connects to cluster), instead of {{semaphore-hazelcast}} (that is "thick 
client", so puts burden onto JVM process running it as node, hence Maven as 
well). But even then, regular reboot of cluster may be needed.

A proper but somewhat complicated 

[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (not today; see below) the Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO named after the lock is created (or just 
fetched, if it already exists) and refCounted
* on lock release, once the refCount drops to 0 uses, the ISemaphore is 
destroyed (releasing HZ cluster resources)
* if, after some time, a new lock acquire happens for the same name, the 
ISemaphore DO gets re-created.

Today, the HZ NamedLocks implementation works as follows:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps a lock name 1:1 onto an 
ISemaphore Distributed Object (DO) name and never destroys the DO

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, but at the cost that a DO, once created and then destroyed, can no 
longer be recreated. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, simply by never dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow past any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that makes 
the JVM process running it a cluster node, putting that burden onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat more complicated solution would be to introduce some sort 
of indirection: create only as many ISemaphores as are needed at any given 
moment, and map them onto the lock names currently in use (reusing semaphores 
that become free). The problem is that this mapping would need to be 
distributed as well (so all clients pick it up, or perform a new mapping), and 
this may cause a performance penalty. Only exhaustive performance testing could 
prove this.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
Artifacts met by all the builds that have used it since cluster boot. With the 
indirection, the number of DOs would be lowered to the "maximum concurrently 
used": if you have a large build farm able to juggle 1000 artifacts at any one 
moment, your cluster would hold 1000 ISemaphores.

  was:
Originally (for today, see below) Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO with lock name is created (or just get, if 
exists), is refCounted
* on lock release, if refCount shows 0 = uses, ISemaphore was destroyed 
(releasing HZ cluster resources)
* if after some time, a new lock acquire happened for same name, ISemaphore DO 
would get re-created.

Today, HZ NamedLocks implementation works in following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
Distribute Object (DO) name and does not destroys the DO

Reason for this is historical: originally, named locks precursor code was done 
for Hazelcast 2/3, that used "unreliable" distributed operations, and 
recreating previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
reliable, it was at the cost that DOs once created, then destroyed, could not 
be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (release semaphore is no-op method).

But, this has an important consequence: a long running Hazelcast cluster will 
have more and more ISemaphore DOs (basically as many as many Artifacts all the 
builds met, that use this cluster to coordinate). Artifacts count existing out 
there is not infinite, but is large enough -- especially if cluster shared 
across many different/unrelated builds -- to grow over sane limit.

So, current recommendation is to have "large enough" dedicated Hazelcast 
cluster and  use {{semaphore-hazelcast-client}} (that is a "thin client" that 
connects to cluster), instead of {{semaphore-hazelcast}} (that is "thick 
client", so puts burden onto JVM process running it as node, hence Maven as 
well). But even then, regular reboot of 

[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762618#comment-17762618
 ] 

Tamas Cservenak commented on MNG-7868:
--

As mentioned on the linked mvnd GH issue:
* my suspicion is about "hot artifacts" (libraries commonly used across MANY 
modules; typical examples are slf4j-api and the like)
* a lock dump (emitted if lock diagnostics are enabled AND an error like the 
one you reported happens) will let us either prove my "hot artifacts" theory 
OR dismiss it entirely (and then look somewhere else).

Interpreting the lock diag dump requires project knowledge, as it is very low 
level and emits lock names with refs and acquired lock steps. The lock names 
will contain {{G~A~V}} strings (hopefully the file-gav mapper is used, not 
file-hgav, which obfuscates/SHA-1s the lock names), and it is the project owner 
(not me) who is able to identify "in-reactor" and "external dep" artifacts from 
them...
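For orientation, a file-gav style lock name has the "artifact~G~A~V" shape visible under {{.m2\repository\.locks}} in the stack trace quoted in this issue. A trivial sketch of composing such a name (the helper name is made up; the real mapper lives in resolver's named-locks code):

```java
public class GavLockNames {
    // Hypothetical helper mirroring the "artifact~G~A~V.lock" file names
    // seen under ~/.m2/repository/.locks in the quoted stack trace.
    static String artifactLockName(String groupId, String artifactId, String version) {
        return "artifact~" + groupId + "~" + artifactId + "~" + version + ".lock";
    }
}
```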

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * is seems to be windows related (at least mainly happens on windows)
> * it is in-deterministic and is not so easy to create an isolated and simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377





[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (not today; see below) the Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO named after the lock is created (or just 
fetched, if it already exists) and refCounted
* on lock release, once the refCount drops to 0 uses, the ISemaphore is 
destroyed (releasing HZ cluster resources)
* if, after some time, a new lock acquire happens for the same name, the 
ISemaphore DO gets re-created.

Today, the HZ NamedLocks implementation works as follows:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps a lock name 1:1 onto an 
ISemaphore Distributed Object (DO) name and never destroys the DO

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, but at the cost that a DO, once created and then destroyed, can no 
longer be recreated. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, simply by never dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow past any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that makes 
the JVM process running it a cluster node, putting that burden onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat more complicated solution would be to introduce some sort 
of indirection: create only as many ISemaphores as are needed at any given 
moment, and map them onto the lock names currently in use (reusing semaphores 
that become free). The problem is that this mapping would need to be 
distributed as well (so all clients pick it up, or perform a new mapping), and 
this may cause a performance penalty. Only exhaustive performance testing could 
prove this.

  was:
Originally (for today, see below) Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO with lock name is created (or just get, if 
exists), is refCounted
* on lock release, if refCount shows 0 = uses, ISemaphore was destroyed 
(releasing HZ cluster resources)
* if after some time, a new lock acquire happened for same name, ISemaphore DO 
would get re-created.

Today, HZ NamedLocks implementation works in following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
Distribute Object (DO) name and does not destroys the DO

Reason for this is historical: originally, named locks precursor code was done 
for Hazelcast 2/3, that used "unreliable" distributed operations, and 
recreating previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
reliable, it was at the cost that DOs once created, then destroyed, could not 
be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (release semaphore is no-op method).

But, this has an important consequence: a long running Hazelcast cluster will 
have more and more ISemaphore DOs (basically as many as many Artifacts all the 
builds that use this cluster to coordinate). Artifacts count existing out there 
is not infinite, but is large enough -- especially if cluster shared across 
many different/unrelated builds -- to grow over sane limit.

So, current recommendation is to have "large enough" dedicated Hazelcast 
cluster and  use {{semaphore-hazelcast-client}} (that is a "thin client" that 
connects to cluster), instead of {{semaphore-hazelcast}} (that is "thick 
client", so puts burden onto JVM process running it as node, hence Maven as 
well). But even then, regular reboot of cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create as many ISemaphore as needed at the moment, and map those 
onto locks names in use at the moment (and reuse unused semaphores). Problem 
is, that mapping would need to be distributed as well (so all clients pick them 
up, or perform new mapping), and this may cause 

[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (not today; see below) the Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO named after the lock is created (or just 
fetched, if it already exists) and refCounted
* on lock release, once the refCount drops to 0 uses, the ISemaphore is 
destroyed (releasing HZ cluster resources)
* if, after some time, a new lock acquire happens for the same name, the 
ISemaphore DO gets re-created.

Today, the HZ NamedLocks implementation works as follows:
* there is only one semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps a lock name 1:1 onto an 
ISemaphore Distributed Object (DO) name and never destroys the DO

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, but at the cost that a DO, once created and then destroyed, can no 
longer be recreated. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, simply by never dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts out 
there is not infinite, but it is large enough -- especially if the cluster is 
shared across many different/unrelated builds -- to grow past any sane limit.

So the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that makes 
the JVM process running it a cluster node, putting that burden onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat more complicated solution would be to introduce some sort 
of indirection: create only as many ISemaphores as are needed at any given 
moment, and map them onto the lock names currently in use (reusing semaphores 
that become free). The problem is that this mapping would need to be 
distributed as well (so all clients pick it up, or perform a new mapping), and 
this may cause a performance penalty. Only exhaustive performance testing could 
prove this.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
Artifacts met by all the builds that have used it since cluster boot. With the 
indirection, the number of DOs would be lowered to the "maximum concurrently 
used": if you have a large build farm able to juggle 1000 artifacts at any one 
moment, your cluster would hold 1000 ISemaphores.

  was:
Originally (for today, see below) Hazelcast NamedLock implementation worked 
like this:
* on lock acquire, an ISemaphore DO with lock name is created (or just get, if 
exists), is refCounted
* on lock release, if refCount shows 0 = uses, ISemaphore was destroyed 
(releasing HZ cluster resources)
* if after some time, a new lock acquire happened for same name, ISemaphore DO 
would get re-created.

Today, HZ NamedLocks implementation works in following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}} that maps lock name 1:1 onto ISemaphore 
Distribute Object (DO) name and does not destroys the DO

Reason for this is historical: originally, named locks precursor code was done 
for Hazelcast 2/3, that used "unreliable" distributed operations, and 
recreating previously destroyed DO was possible (at the cost of 
"unreliability").

Since Hazelcast 4.x it updated to RAFT consensus algorithm and made things 
reliable, it was at the cost that DOs once created, then destroyed, could not 
be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (release semaphore is no-op method).

But, this has an important consequence: a long running Hazelcast cluster will 
have more and more ISemaphore DOs (basically as many as many Artifacts all the 
builds met, that use this cluster to coordinate). Artifacts count existing out 
there is not infinite, but is large enough -- especially if cluster shared 
across many different/unrelated builds -- to grow over sane limit.

So, current recommendation is to have "large enough" dedicated Hazelcast 
cluster and  use {{semaphore-hazelcast-client}} (that is a "thin client" that 
connects to cluster), instead of {{semaphore-hazelcast}} (that is "thick 
client", so puts burden onto JVM process running it as node, hence Maven as 
well). But even then, regular reboot of 

[jira] [Comment Edited] (MRESOLVER-387) Provide "static" supplier for RepositorySystem

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762778#comment-17762778
 ] 

Tamas Cservenak edited comment on MRESOLVER-387 at 9/7/23 3:13 PM:
---

Just FTR, here is an example of how maven-resolver-ant-tasks migrated from the 
deprecated SL to the new Supplier (introduced with this JIRA):
https://github.com/apache/maven-resolver-ant-tasks/pull/28


was (Author: cstamas):
Just FTR, here is an example how maven-resolver-ant-tasks migrated from 
deprecated SL to new Supplier (introduced with this JIRA):
https://github.com/apache/maven-resolver-ant-tasks/commit/95e85e6deac3217fa905f17f836c2716a10673e7

> Provide "static" supplier for RepositorySystem
> --
>
> Key: MRESOLVER-387
> URL: https://issues.apache.org/jira/browse/MRESOLVER-387
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.15
>
>
> To provide SL replacement.
> Something like this
> https://github.com/maveniverse/mima/blob/main/runtime/standalone-static/src/main/java/eu/maveniverse/maven/mima/runtime/standalonestatic/RepositorySystemSupplier.java





[jira] [Commented] (MRESOLVER-387) Provide "static" supplier for RepositorySystem

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762778#comment-17762778
 ] 

Tamas Cservenak commented on MRESOLVER-387:
---

Just FTR, here is an example of how maven-resolver-ant-tasks migrated from the 
deprecated SL to the new Supplier (introduced with this JIRA): 
https://github.com/apache/maven-resolver-ant-tasks/commit/95e85e6deac3217fa905f17f836c2716a10673e7

> Provide "static" supplier for RepositorySystem
> --
>
> Key: MRESOLVER-387
> URL: https://issues.apache.org/jira/browse/MRESOLVER-387
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.15
>
>
> To provide SL replacement.
> Something like this
> https://github.com/maveniverse/mima/blob/main/runtime/standalone-static/src/main/java/eu/maveniverse/maven/mima/runtime/standalonestatic/RepositorySystemSupplier.java





[jira] [Commented] (MNG-6763) Restrict repositories to specific groupIds

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761815#comment-17761815
 ] 

Tamas Cservenak commented on MNG-6763:
--

With Maven 3.9.3+ (3.9.4 recommended) you get the new expression, and you can 
check the prefix files in alongside your sources; check out the updated demo here: 
https://github.com/cstamas/rrf-demo/tree/master/.mvn
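The comment above refers to the resolver's remote repository filtering. A minimal sketch of such a setup follows; the property names and per-repository file layout are assumptions based on the resolver filtering documentation and the linked demo, so verify them against your version:

```shell
# Sketch: enable prefix-based remote repository filtering (Maven 3.9.3+).
# Typically these flags live in .mvn/maven.config, checked in with the sources;
# one prefix file per repository, e.g. .mvn/rrf/prefixes-central.txt.
mvn -Daether.remoteRepositoryFilter.prefixes=true \
    -Daether.remoteRepositoryFilter.prefixes.basedir='${session.rootDirectory}/.mvn/rrf/' \
    verify
```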

> Restrict repositories to specific groupIds
> --
>
> Key: MNG-6763
> URL: https://issues.apache.org/jira/browse/MNG-6763
> Project: Maven
>  Issue Type: New Feature
>Reporter: dennis lucero
>Priority: Major
>  Labels: intern
>
> It should be possible to restrict the repositories specified in settings.xml 
> to specific groupIds. Looking at 
> [https://maven.apache.org/ref/3.6.2/maven-settings/settings.html#class_repository],
>  it seems this is currently not the case.
> Background: We use Nexus to host our own artifacts. The settings.xml contains 
> our Nexus repository with always because 
> sometimes a project is built while a dependency is not yet in our Nexus repo 
> – without updatePolicy, it would take 24 hours or manual deletion of metadata 
> to make Maven re-check for the missing dependency.
> Additionally, we use versions-maven-plugin:2.7:display-dependency-updates in 
> our build process.
> This results in lots of queries (more than 300 in a simple Dropwizard 
> project) to our repo which will never succeed. If we could specify that our 
> repo only supplies groupIds beginning with org.example, Maven could skip 
> update checks for groupIds starting with com.fasterxml.jackson and so on, 
> speeding up the build process.





[jira] [Closed] (MNG-7845) Slow upload of artifacts to remote repository

2023-09-04 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7845.

Resolution: Cannot Reproduce

Will reopen once (and if) we have a response.

> Slow upload of artifacts to remote repository
> -
>
> Key: MNG-7845
> URL: https://issues.apache.org/jira/browse/MNG-7845
> Project: Maven
>  Issue Type: Bug
>  Components: Deployment
>Affects Versions: 3.9.1
>Reporter: Filip Pitak
>Priority: Critical
>
> After an upgrade of maven from v3.3.9 to v3.9.1 the deployment of artifacts 
> to a remote repository via _*'mvn deploy'*_ has been substantially slowed 
> down. Disregarding the time for installing and other steps, the deployment of 
> artifacts went from +-40seconds to 6minutes. As our projects consists of 
> multiple modules, the total time of deployment jumped from 20minutes to 
> 1h20min. For our remote repository we are using Sonatype Nexus RepositoryOSS 
> 3.56.0-01.
> In the newer version maven has switched from Wagon to using "native HTTP", 
> which caused this issue.
> I've solved the problem by calling _*"mvn deploy  
> -Dmaven.resolver.transport=wagon"*_ and also downgrading the version of 
> Wagon. Various versions of Wagon had a different result in upload speeds:
>  
> ||Wagon version||Deploy speed||
> |2.7|25 sec.|
> |2.10|35 sec|
> |3.x.x|4 min.|
>  
> This solution is not viable in the long-run as it requires using an ancient 
> transport resolver and downgrading the version, which is not future-proof. A 
> similar solution of speeding up the deployment should be available when using 
> the default {{maven-resolver-transport-http}} aka _native HTTP_ transport. Is 
> there an existing implementation which could be used here?
>  





[jira] [Created] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MNG-7869:


 Summary: Improve mvn -v output
 Key: MNG-7869
 URL: https://issues.apache.org/jira/browse/MNG-7869
 Project: Maven
  Issue Type: Task
Reporter: Tamas Cservenak


This is really just a dream or wish: it would be good if mvn -v (output we 
usually ask users for) showed us more than it does today, to be able to detect 
things like "tampering with resolver" (or replacing it), possible core 
extensions, etc...





[jira] [Comment Edited] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761795#comment-17761795
 ] 

Tamas Cservenak edited comment on MNG-7869 at 9/4/23 11:51 AM:
---

No, here are some examples of what I mean:
* hashing all JARs that make up the distro (w/o lib/ext) => this would 
immediately show if some JARs were replaced in the distro
* hashing lib/ext, where an "empty hash" would mean it is empty and any other 
hash would mean "non-empty"
Basically, by having these two we could know if the user replaced something in 
the distro, or installed some extension into lib/ext.

Or, simplest: hashing ALL of the Maven home (using some platform-independent 
handling of text file line endings) that would be shown just like the current 
git commit is. 

The purpose is really just to be able to immediately tell whether the user runs 
a "tampered" (altered, as with an extension or even a JAR replacement) or a 
"vanilla" distro.


was (Author: cstamas):
No, here are some examples what I mean:
* hashing all JARs that makes the distro (w/o lib/ext) => this would 
immediately show if some JARs are replaced in distro
* hashing lib/ext, as "empty hash" would mean is empty, other hash would mean 
"non empty"
Basically, by having these two we could know if user replaced something in 
distro, or installed some extension into lib/ext

Or, simplest: hashing ALL from maven home (by using some plaf independent way 
for text files line endings) that would be shown just like current git commit 
is. 

Purpose is really just to be able to immediately tell, is this "tampered" 
(altered as with extension or even JAR replacement) or "vanilla" distro user 
uses.

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...





[jira] [Comment Edited] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761795#comment-17761795
 ] 

Tamas Cservenak edited comment on MNG-7869 at 9/4/23 11:51 AM:
---

No, here are some examples of what I mean:
* hashing all JARs that make up the distro (w/o lib/ext) => this would 
immediately show if some JARs were replaced in the distro
* hashing lib/ext, where an "empty hash" would mean it is empty and any other 
hash would mean "non-empty"
Basically, by having these two we could know if the user replaced something in 
the distro, or installed some extension into lib/ext.

Or, simplest: hashing ALL of the Maven home (using some platform-independent 
handling of text file line endings) that would be shown just like the current 
git commit is. 

The purpose is really just to be able to immediately tell whether the user runs 
a "tampered" or a "vanilla" distro.


was (Author: cstamas):
No, here are some examples what I mean:
* hashing all JARs that makes the distro (w/o lib/ext) => this would 
immediately show if some JARs are replaced in distro
* hashing lib/ext, as "empty hash" would mean is empty, other hash would mean 
"non empty"
Basically, by having these two we could know if user replaced something in 
distro, or installed some extension into lib/ext

Or, simplest: hashing ALL from maven home (by using some plaf independent way 
for text files line endings) that would be shown just like current git commit 
is. 

Purpose is really just to be immediately tell, is this "tampered" or "vanilla" 
distro user uses.

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...





[jira] [Comment Edited] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761798#comment-17761798
 ] 

Tamas Cservenak edited comment on MNG-7869 at 9/4/23 11:57 AM:
---

Example (imagined) output with 2nd method:
{noformat}
$ mvn -v
Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
Maven home: /home/cstamas/.sdkman/candidates/maven/current 
(12345678901234567889012344566)   <- some platform-independent hash of all things 
under maven.home
Java version: 17.0.8.1, vendor: Eclipse Adoptium, runtime: 
/home/cstamas/.sdkman/candidates/java/17.0.8.1-tem
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "6.4.13-200.fc38.x86_64", arch: "amd64", family: 
"unix"
$ 
{noformat}


was (Author: cstamas):
Example (imagined) output with 2nd method:
{noformat}
$ mvn -v
Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
Maven home: /home/cstamas/.sdkman/candidates/maven/current 
(dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)   <- some platform-independent hash of 
all things under maven.home
Java version: 17.0.8.1, vendor: Eclipse Adoptium, runtime: 
/home/cstamas/.sdkman/candidates/java/17.0.8.1-tem
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "6.4.13-200.fc38.x86_64", arch: "amd64", family: 
"unix"
$ 
{noformat}

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...





[jira] [Commented] (MNG-7869) Improve mvn -v output

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761798#comment-17761798
 ] 

Tamas Cservenak commented on MNG-7869:
--

Example (imagined) output with 2nd method:
{noformat}
$ mvn -v
Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
Maven home: /home/cstamas/.sdkman/candidates/maven/current 
(dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)   <- some platform-independent hash of 
all things under maven.home
Java version: 17.0.8.1, vendor: Eclipse Adoptium, runtime: 
/home/cstamas/.sdkman/candidates/java/17.0.8.1-tem
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "6.4.13-200.fc38.x86_64", arch: "amd64", family: 
"unix"
$ 
{noformat}

> Improve mvn -v output
> -
>
> Key: MNG-7869
> URL: https://issues.apache.org/jira/browse/MNG-7869
> Project: Maven
>  Issue Type: Task
>Reporter: Tamas Cservenak
>Priority: Major
>
> This is really just a dream or wish: would be good if mvn -v (output we 
> usually ask from users) would show us more than today, to be able to detect 
> like "tampering with resolver" (or replacing it), possible core extensions, 
> etc...





[jira] [Closed] (MNG-7840) Maven pom relocation do not work with parent pom

2023-09-04 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7840.

Resolution: Won't Fix

Using relocation within the boundaries of the project being built is just wrong 
and is not what this feature was meant for.

> Maven pom relocation do not work with parent pom
> 
>
> Key: MNG-7840
> URL: https://issues.apache.org/jira/browse/MNG-7840
> Project: Maven
>  Issue Type: Bug
>  Components: Artifacts and Repositories, Dependencies
>Affects Versions: 3.9.2
> Environment: Apache Maven 3.9.2 
> (c9616018c7a021c1c39be70fb2843d6f5f9b8a1c)
> Java version: 17.0.5, vendor: GraalVM Community
> Default locale: fr_FR, platform encoding: UTF-8
> OS name: "mac os x", version: "13.4.1", arch: "aarch64", family: "mac"
>Reporter: jycr
>Priority: Major
>
> Relocating parent POM do not work.
> Reproduce steps:
>  # Create following POM files :
> {code:xml|title=parent-pom-a.pom}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>   <groupId>com.acme</groupId>
>   <artifactId>parent-pom-a</artifactId>
>   <version>1.0.0</version>
>   <packaging>pom</packaging>
>   <distributionManagement>
>     <relocation>
>       <groupId>com.acme</groupId>
>       <artifactId>parent-pom-b</artifactId>
>       <version>1.0.0</version>
>     </relocation>
>   </distributionManagement>
> </project>
> {code}
> {code:xml|title=parent-pom-b.pom}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>   <groupId>com.acme</groupId>
>   <artifactId>parent-pom-b</artifactId>
>   <version>1.0.0</version>
>   <packaging>pom</packaging>
>   <properties>
>     <properties-from-parent-pom>b</properties-from-parent-pom>
>   </properties>
> </project>
> {code}
> {code:xml|title=project-pom.pom}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
>   <modelVersion>4.0.0</modelVersion>
>   <parent>
>     <groupId>com.acme</groupId>
>     <artifactId>parent-pom-a</artifactId>
>     <version>1.0.0</version>
>   </parent>
>   <artifactId>project-pom</artifactId>
>   <packaging>pom</packaging>
>   <properties>
>     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
>     <!-- two further encoding properties, set to ${project.build.sourceEncoding},
>          whose names were lost in formatting -->
>   </properties>
>   <build>
>     <plugins>
>       <plugin>
>         <groupId>com.github.ekryd.echo-maven-plugin</groupId>
>         <artifactId>echo-maven-plugin</artifactId>
>         <version>2.0.0</version>
>         <executions>
>           <execution>
>             <goals>
>               <goal>echo</goal>
>             </goals>
>             <phase>initialize</phase>
>             <configuration>
>               <message>properties-from-parent-pom: ${properties-from-parent-pom}</message>
>             </configuration>
>           </execution>
>         </executions>
>       </plugin>
>     </plugins>
>   </build>
> </project>
> {code}
> # Install parent POM files into local cache:
> {code:sh}
> mvn install -f parent-pom-a.pom
> mvn install -f parent-pom-b.pom
> {code}
> # Execute Maven with {{project-pom.pom}}
> {code:sh}
> mvn initialize -f project-pom.pom
> {code}
> Actual result:
> {code}
> [INFO] Scanning for projects...
> [INFO] 
> [INFO] < com.acme:project-pom 
> >
> [INFO] Building project-pom 1.0.0
> [INFO]   from project-pom.pom
> [INFO] [ pom 
> ]-
> [INFO] 
> [INFO] --- echo:2.0.0:echo (default) @ project-pom ---
> [INFO] properties-from-parent-pom: ${properties-from-parent-pom}
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time:  0.145 s
> [INFO] Finished at: 2023-07-10T09:47:45+02:00
> [INFO] 
> 
> {code}
> Expected result:
> {code}
> [INFO] Scanning for projects...
> [INFO] 
> [INFO] < com.acme:project-pom 
> >
> [INFO] Building project-pom 1.0.0
> [INFO]   from project-pom.pom
> [INFO] [ pom 
> ]-
> [WARNING] The artifact com.acme:parent-pom-a:pom:1.0.0 has been relocated to 
> com.acme:parent-pom-b:pom:1.0.0
> [INFO] 
> [INFO] --- echo:2.0.0:echo (default) @ project-pom ---
> [INFO] properties-from-parent-pom: b
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 

[jira] [Updated] (MNG-7859) Update to Resolver 1.9.16

2023-09-04 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7859:
-
Summary: Update to Resolver 1.9.16  (was: Update to Resolver 1.9.15)

> Update to Resolver 1.9.16
> -
>
> Key: MNG-7859
> URL: https://issues.apache.org/jira/browse/MNG-7859
> Project: Maven
>  Issue Type: Dependency upgrade
>  Components: Dependencies
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 3.9.5
>
>
> When 1.9.15 is released, update to it.





[jira] [Closed] (MNG-7858) Deprecate class MavenRepositorySystemUtils in resolver provider

2023-09-04 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7858.

Fix Version/s: (was: 3.9.5)
   Resolution: Won't Fix

> Deprecate class MavenRepositorySystemUtils in resolver provider
> ---
>
> Key: MNG-7858
> URL: https://issues.apache.org/jira/browse/MNG-7858
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Priority: Major
>
> The new resolver module introduced in 1.9.15 provides a replacement (and 
> deprecated SL). Once SL is dropped (resolver 2.0.0), the class should be 
> altered to redirect (or fail in the case of SL?).
> This class: 
> https://github.com/apache/maven/blob/maven-3.9.4/maven-resolver-provider/src/main/java/org/apache/maven/repository/internal/MavenRepositorySystemUtils.java





[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761778#comment-17761778
 ] 

Tamas Cservenak commented on MNG-7868:
--

Could you do an experiment? 
- do a clean checkout somewhere
- use an empty (non-existent) local repository
- do the build on 1 thread to get the local repo populated
- repeat the build in MT mode

Of course, this is not needed if you see the same errors with a prepopulated 
local repository...
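The experiment above can be sketched as follows; the repository URL, local-repo path, and thread count are illustrative assumptions:

```shell
# Sketch: rerun the build against a fresh local repository (illustrative values).
git clone https://example.org/your/project.git && cd project
mvn -Dmaven.repo.local=/tmp/fresh-repo clean install         # 1 thread: populate repo
mvn -Dmaven.repo.local=/tmp/fresh-repo -T 1C clean install   # repeat multi-threaded
```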

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic and it is not so easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377





[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761777#comment-17761777
 ] 

Tamas Cservenak commented on MNG-7868:
--

Do you see any correlation between this bug and the state of the local 
repository (i.e. empty vs. fully populated, so that no download happens)?

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic and it is not so easy to create an isolated, simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377





[jira] [Commented] (MWRAPPER-116) Proxy authentification regression

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MWRAPPER-116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761878#comment-17761878
 ] 

Tamas Cservenak commented on MWRAPPER-116:
--

This is not a wrapper issue per se, but you also moved to Maven 3.9.x.
Please read the page 
https://maven.apache.org/guides/mini/guide-resolver-transport.html and the 
(hopefully 3.9.4) release notes as well: 
https://maven.apache.org/docs/3.9.4/release-notes.html

> Proxy authentification regression
> -
>
> Key: MWRAPPER-116
> URL: https://issues.apache.org/jira/browse/MWRAPPER-116
> Project: Maven Wrapper
>  Issue Type: Bug
>  Components: Maven Wrapper Scripts
>Affects Versions: 3.2.0
>Reporter: Mathieu CARBONNEAUX
>Priority: Major
>
> When updating the wrapper to Maven 3.9.x+ (3.8.2 works fine) I get "status 
> code: 407, reason phrase: authenticationrequired (407)" when it tries to 
> download some artifacts (not all), especially Maven plugin ones 
> (surefire/jar/compile).
>  
> {code:java}
> Failed to execute goal on project monprojet: Could not resolve dependencies 
> for monprojet:jar:1.0: The following artifacts could not be resolved: 
> com.thoughtworks.qdox:qdox:jar:2.0.3 (absent), 
> org.codehaus.plexus:plexus-compiler-manager:jar:2.13.0 (absent), 
> org.codehaus.plexus:plexus-compiler-javac:jar:2.13.0 (absent), 
> org.codehaus.plexus:plexus-interpolation:jar:1.26 (absent), 
> org.codehaus.plexus:plexus-utils:jar:3.5.1 (absent), 
> org.apache.maven.shared:maven-filtering:jar:3.3.1 (absent), 
> org.slf4j:slf4j-api:jar:1.7.36 (absent), 
> org.sonatype.plexus:plexus-build-api:jar:0.0.7 (absent), 
> org.apache.commons:commons-lang3:jar:3.12.0 (absent), 
> org.apache.maven.surefire:surefire-logger-api:jar:3.1.2 (absent), 
> org.apache.maven.surefire:surefire-booter:jar:3.1.2 (absent), 
> org.apache.maven.surefire:surefire-extensions-spi:jar:3.1.2 (absent), 
> org.eclipse.aether:aether-api:jar:1.0.0.v20140518 (absent), 
> commons-codec:commons-codec:jar:1.11 (absent), 
> org.jboss.resteasy:resteasy-jaxb-provider:jar:4.7.9.Final (absent), 
> com.google.errorprone:error_prone_annotations:jar:2.3.4 (absent), 
> com.webauthn4j:webauthn4j-util:jar:0.21.0.RELEASE (absent): Could not 
> transfer artifact com.thoughtworks.qdox:qdox:jar:2.0.3 from/to central 
> (https://repo.maven.apache.org/maven2): status code: 407, reason phrase: 
> authenticationrequired (407) {code}





[jira] [Commented] (MWRAPPER-116) Proxy authentification regression

2023-09-04 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MWRAPPER-116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761879#comment-17761879
 ] 

Tamas Cservenak commented on MWRAPPER-116:
--

For a start, try adding {{-Dmaven.resolver.transport=wagon}} to your Maven 
invocation to see whether Wagon is properly set up (if it is, you probably need 
to migrate, or stick with Wagon).
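The suggested check can be sketched as a one-off invocation; the goal is illustrative, the {{maven.resolver.transport}} property is the one documented in the resolver transport guide:

```shell
# Sketch: force the (legacy) Wagon transport for one run to compare proxy
# behavior with the default native HTTP transport (goal is illustrative).
mvn -Dmaven.resolver.transport=wagon verify
```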

> Proxy authentification regression
> -
>
> Key: MWRAPPER-116
> URL: https://issues.apache.org/jira/browse/MWRAPPER-116
> Project: Maven Wrapper
>  Issue Type: Bug
>  Components: Maven Wrapper Scripts
>Affects Versions: 3.2.0
>Reporter: Mathieu CARBONNEAUX
>Priority: Major
>
> When updating the wrapper to Maven 3.9.x+ (3.8.2 works fine) I get "status 
> code: 407, reason phrase: authenticationrequired (407)" when it tries to 
> download some artifacts (not all), especially Maven plugin ones 
> (surefire/jar/compile).
>  
> {code:java}
> Failed to execute goal on project monprojet: Could not resolve dependencies 
> for monprojet:jar:1.0: The following artifacts could not be resolved: 
> com.thoughtworks.qdox:qdox:jar:2.0.3 (absent), 
> org.codehaus.plexus:plexus-compiler-manager:jar:2.13.0 (absent), 
> org.codehaus.plexus:plexus-compiler-javac:jar:2.13.0 (absent), 
> org.codehaus.plexus:plexus-interpolation:jar:1.26 (absent), 
> org.codehaus.plexus:plexus-utils:jar:3.5.1 (absent), 
> org.apache.maven.shared:maven-filtering:jar:3.3.1 (absent), 
> org.slf4j:slf4j-api:jar:1.7.36 (absent), 
> org.sonatype.plexus:plexus-build-api:jar:0.0.7 (absent), 
> org.apache.commons:commons-lang3:jar:3.12.0 (absent), 
> org.apache.maven.surefire:surefire-logger-api:jar:3.1.2 (absent), 
> org.apache.maven.surefire:surefire-booter:jar:3.1.2 (absent), 
> org.apache.maven.surefire:surefire-extensions-spi:jar:3.1.2 (absent), 
> org.eclipse.aether:aether-api:jar:1.0.0.v20140518 (absent), 
> commons-codec:commons-codec:jar:1.11 (absent), 
> org.jboss.resteasy:resteasy-jaxb-provider:jar:4.7.9.Final (absent), 
> com.google.errorprone:error_prone_annotations:jar:2.3.4 (absent), 
> com.webauthn4j:webauthn4j-util:jar:0.21.0.RELEASE (absent): Could not 
> transfer artifact com.thoughtworks.qdox:qdox:jar:2.0.3 from/to central 
> (https://repo.maven.apache.org/maven2): status code: 407, reason phrase: 
> authenticationrequired (407) {code}





[jira] [Closed] (MRESOLVER-344) Upgrade Maven to 3.9.4

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-344.
-
Resolution: Fixed

> Upgrade Maven to 3.9.4
> --
>
> Key: MRESOLVER-344
> URL: https://issues.apache.org/jira/browse/MRESOLVER-344
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Ant Tasks
>Reporter: Sylwester Lachiewicz
>Assignee: Sylwester Lachiewicz
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Upgrade to Maven 3.9.4 and Resolver 1.9.15





[jira] [Updated] (MRESOLVER-401) Drop use of SL, up version to 1.5.0

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-401:
--
Summary: Drop use of SL, up version to 1.5.0  (was: Drop use of SL)

> Drop use of SL, up version to 1.5.0
> ---
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).





[jira] [Created] (MRESOLVER-400) Update to parent POM 40, reformat

2023-09-06 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-400:
-

 Summary: Update to parent POM 40, reformat
 Key: MRESOLVER-400
 URL: https://issues.apache.org/jira/browse/MRESOLVER-400
 Project: Maven Resolver
  Issue Type: Task
  Components: Ant Tasks
Reporter: Tamas Cservenak
 Fix For: ant-tasks-next


Update parent to POM 40, reformat sources.





[jira] [Closed] (MRESOLVER-400) Update to parent POM 40, reformat

2023-09-06 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-400.
-
Resolution: Fixed

> Update to parent POM 40, reformat
> -
>
> Key: MRESOLVER-400
> URL: https://issues.apache.org/jira/browse/MRESOLVER-400
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Update parent to POM 40, reformat sources.





[jira] [Created] (MRESOLVER-401) Drop use of SL

2023-09-06 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-401:
-

 Summary: Drop use of SL
 Key: MRESOLVER-401
 URL: https://issues.apache.org/jira/browse/MRESOLVER-401
 Project: Maven Resolver
  Issue Type: Task
  Components: Ant Tasks
Reporter: Tamas Cservenak
 Fix For: ant-tasks-next


Drop use of deprecated SL, switch to supplier (and drop other deprecated uses).
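The ServiceLocator-to-supplier move can be illustrated in miniature. This is a pure-JDK sketch under assumed names (`RepositorySystemStub`, `RepositorySystemSupplierSketch` are invented stand-ins); the real migration uses the resolver's supplier module rather than this code:

```java
import java.util.function.Supplier;

// Miniature of the SL-to-supplier migration: instead of a mutable
// ServiceLocator that resolves components by key at runtime, a Supplier
// wires the object graph eagerly and hands back a fully constructed
// instance. Class names here are illustrative stand-ins only.
class RepositorySystemStub {
    final String transport;

    RepositorySystemStub(String transport) {
        this.transport = transport;
    }
}

class RepositorySystemSupplierSketch implements Supplier<RepositorySystemStub> {
    @Override
    public RepositorySystemStub get() {
        // All wiring happens here, in plain constructor calls, so there is
        // no partially-initialized locator state to misuse.
        return new RepositorySystemStub("http");
    }
}
```

The design point: a supplier fails at construction time if wiring is wrong, whereas a service locator defers such failures to the first lookup.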





[jira] [Updated] (MRESOLVER-321) Resolver while collecting may end up in busy loop without any possibility to be stopped

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-321:
--
Fix Version/s: 2.0.0

> Resolver while collecting may end up in busy loop without any possibility to 
> be stopped
> ---
>
> Key: MRESOLVER-321
> URL: https://issues.apache.org/jira/browse/MRESOLVER-321
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 2.0.0
>
>
> As reported by users, under some conditions (MRESOLVER-316) the resolver can 
> spin itself into a busy loop, with no way to stop it (e.g. by interrupting 
> its thread).
> Example report: [https://github.com/apache/maven-resolver/pull/236]
> The reproducer (until MRESOLVER-316 is fixed) is present in the PR above (the 
> demo code and the diff for it).
> While the PR proposes a code change, it does not cover everything: in the 
> case of the MRESOLVER-316 bug, the endless loop happens in the visitor part 
> (a DependencyVisitor visiting DependencyNodes).
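The cooperative-cancellation idea behind the report can be sketched as follows. This is a minimal pure-JDK illustration, not the resolver's actual code: `Node` is a hypothetical stand-in for DependencyNode, and the point shown is only that a walk which honors the interrupt flag stays stoppable even if the graph makes it loop:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical stand-in for the resolver's DependencyNode.
class Node {
    final String id;
    final List<Node> children;

    Node(String id, List<Node> children) {
        this.id = id;
        this.children = children;
    }
}

class InterruptibleVisitor {
    // Iterative walk that checks the thread's interrupt flag on every
    // step, so the caller can always stop it by interrupting the thread.
    static int visit(Node root) {
        int visited = 0;
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            if (Thread.currentThread().isInterrupted()) {
                // Bail out instead of spinning forever.
                throw new IllegalStateException("visit interrupted");
            }
            Node node = stack.pop();
            visited++;
            node.children.forEach(stack::push);
        }
        return visited;
    }
}
```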





[jira] [Created] (MRESOLVER-405) Get rid of component name string literals, make them constants and reusable

2023-09-08 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-405:
-

 Summary: Get rid of component name string literals, make them 
constants and reusable
 Key: MRESOLVER-405
 URL: https://issues.apache.org/jira/browse/MRESOLVER-405
 Project: Maven Resolver
  Issue Type: Task
  Components: Resolver
Reporter: Tamas Cservenak
 Fix For: 1.9.16


All new components have {{NAME}} constants where appropriate, except for some 
old ones. Fix this: stop using free string literals and use constants instead.
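The pattern being asked for can be sketched like this (the class and the `"basic"` value below are illustrative, not the actual resolver component names):

```java
// Sketch of the NAME-constant pattern: a component exposes its wiring
// name as a reusable constant instead of callers repeating a free
// string literal. Names here are illustrative only.
class BasicConnectorFactorySketch {
    public static final String NAME = "basic";
}

class ComponentRegistrySketch {
    // Callers reference the constant, so a rename is a one-place change
    // and a typo fails at compile time rather than at lookup time.
    static String lookupKey() {
        return BasicConnectorFactorySketch.NAME;
    }
}
```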





[jira] [Created] (MNG-7874) maven-resolver-provider: introduce NAME constants.

2023-09-08 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MNG-7874:


 Summary: maven-resolver-provider: introduce NAME constants.
 Key: MNG-7874
 URL: https://issues.apache.org/jira/browse/MNG-7874
 Project: Maven
  Issue Type: Task
  Components: Core
Reporter: Tamas Cservenak
 Fix For: 3.9.5


Similar to MRESOLVER-405: stop using free string literals and instead introduce 
{{NAME}} constants.





[jira] [Updated] (MNG-7874) maven-resolver-provider: introduce NAME constants.

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7874:
-
Fix Version/s: 4.0.0-alpha-8

> maven-resolver-provider: introduce NAME constants.
> --
>
> Key: MNG-7874
> URL: https://issues.apache.org/jira/browse/MNG-7874
> Project: Maven
>  Issue Type: Task
>  Components: Core
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> Similar to MRESOLVER-405: stop using free string literals and instead introduce 
> {{NAME}} constants.





[jira] [Updated] (MNG-7873) Export missing Xpp3DomBuilder

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MNG-7873:
-
Component/s: Core

> Export missing Xpp3DomBuilder
> -
>
> Key: MNG-7873
> URL: https://issues.apache.org/jira/browse/MNG-7873
> Project: Maven
>  Issue Type: Bug
>  Components: Core
>Reporter: Guillaume Nodet
>Priority: Major
> Fix For: 3.9.5
>
>
> See https://lists.apache.org/thread/ltd1g1dbv0lqqdw5q941gmrkfyn6m87m





[jira] [Closed] (MRESOLVER-169) Drop use of ServiceLocator

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-169.
-
Resolution: Duplicate

Already happened as part of MRESOLVER-401

> Drop use of ServiceLocator
> --
>
> Key: MRESOLVER-169
> URL: https://issues.apache.org/jira/browse/MRESOLVER-169
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: 2.0.0
>
>
> ServiceLocator is a bad pattern anyway, and is deprecated (and planned to be 
> removed completely).





[jira] [Updated] (MRESOLVER-169) Drop use of ServiceLocator

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-169:
--
Fix Version/s: (was: 2.0.0)

> Drop use of ServiceLocator
> --
>
> Key: MRESOLVER-169
> URL: https://issues.apache.org/jira/browse/MRESOLVER-169
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
>
> ServiceLocator is a bad pattern anyway, and is deprecated (and planned to be 
> removed completely).





[jira] [Commented] (MRESOLVER-288) Improve Ant task support for dependency management

2023-09-08 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17763056#comment-17763056
 ] 

Tamas Cservenak commented on MRESOLVER-288:
---

Happened as part of MRESOLVER-402

> Improve Ant task support for dependency management
> --
>
> Key: MRESOLVER-288
> URL: https://issues.apache.org/jira/browse/MRESOLVER-288
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Affects Versions: ant-tasks-1.4.0
>Reporter: Piotr Karwasz
>Priority: Minor
>
> The Ant tasks support dependency management only for direct dependencies (via 
> the Maven Model Builder); there is no support for managed transitive 
> dependencies.
> I submitted a [Github 
> PR|https://github.com/apache/maven-resolver-ant-tasks/pull/15] that adds 
> support for dependency management whenever the dependency data comes from a 
> POM file.





[jira] [Closed] (MRESOLVER-288) Improve Ant task support for dependency management

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-288.
-
Resolution: Duplicate

> Improve Ant task support for dependency management
> --
>
> Key: MRESOLVER-288
> URL: https://issues.apache.org/jira/browse/MRESOLVER-288
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Affects Versions: ant-tasks-1.4.0
>Reporter: Piotr Karwasz
>Priority: Minor
>
> The Ant tasks support dependency management only for direct dependencies (via 
> the Maven Model Builder); there is no support for managed transitive 
> dependencies.
> I submitted a [Github 
> PR|https://github.com/apache/maven-resolver-ant-tasks/pull/15] that adds 
> support for dependency management whenever the dependency data comes from a 
> POM file.





[jira] [Closed] (MRESOLVER-405) Get rid of component name string literals, make them constants and reusable

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-405.
-
  Assignee: Tamas Cservenak
Resolution: Fixed

> Get rid of component name string literals, make them constants and reusable
> ---
>
> Key: MRESOLVER-405
> URL: https://issues.apache.org/jira/browse/MRESOLVER-405
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.16
>
>
> All new components have {{NAME}} constants where appropriate, except for some 
> old ones. Fix this: stop using free string literals and use constants 
> instead.





[jira] [Closed] (MNG-7874) maven-resolver-provider: introduce NAME constants.

2023-09-08 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7874.

  Assignee: Tamas Cservenak
Resolution: Fixed

master 
https://github.com/apache/maven/commit/0ea8879eeacbb21a06fd91eb01cb039d03634f38
maven-3.9.x 
https://github.com/apache/maven/commit/bbd84c6c87ff4812f87ad42c29ad4c06e56b4a49

> maven-resolver-provider: introduce NAME constants.
> --
>
> Key: MNG-7874
> URL: https://issues.apache.org/jira/browse/MNG-7874
> Project: Maven
>  Issue Type: Task
>  Components: Core
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> Similar to MRESOLVER-405: stop using free string literals and instead introduce 
> {{NAME}} constants.





[jira] [Commented] (MDEPLOY-312) [REGRESSION] deploy no longer updates project model

2023-10-13 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MDEPLOY-312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775036#comment-17775036
 ] 

Tamas Cservenak commented on MDEPLOY-312:
-

THIS is very cool! Thanks! Will dive into that a bit, as I was also tinkering 
with several things (one is mentioned in some other JIRA: basically, stop using 
"snapshot normalization" [renaming downloaded timestamped snapshot files into 
-SNAPSHOT files, effectively making all local repository content immutable]), 
and with this "warming up" of the local repository.

> [REGRESSION] deploy no longer updates project model
> ---
>
> Key: MDEPLOY-312
> URL: https://issues.apache.org/jira/browse/MDEPLOY-312
> Project: Maven Deploy Plugin
>  Issue Type: Bug
>  Components: deploy:deploy, deploy:deploy-file
>Affects Versions: 3.0.0
>Reporter: Jared Stehler
>Priority: Major
>
> Prior to 3.0.0, the maven-deploy-plugin would update artifacts on the Project 
> model:
>  * 
> [https://github.com/apache/maven/blob/master/maven-compat/src/main/java/org/apache/maven/artifact/deployer/DefaultArtifactDeployer.java#L147]
>  * 
> [https://github.com/apache/maven-deploy-plugin/blob/maven-deploy-plugin-2.8.2/src/main/java/org/apache/maven/plugin/deploy/DeployFileMojo.java#L276]
> This is no longer occurring after the migration to maven-resolver, which 
> breaks our downstream plugins that rely on the resolved SNAPSHOT version.
>  




