LuciferYang commented on code in PR #44343:
URL: https://github.com/apache/spark/pull/44343#discussion_r1426665614
##########
common/utils/src/main/scala/org/apache/spark/util/MavenUtils.scala:
##########
@@ -454,9 +465,23 @@ private[spark] object MavenUtils extends Logging {
          md.addExcludeRule(createExclusion(e + ":*", ivySettings, ivyConfName))
}
// resolve dependencies
- val rr: ResolveReport = ivy.resolve(md, resolveOptions)
+ var rr: ResolveReport = ivy.resolve(md, resolveOptions)
if (rr.hasError) {
- throw new RuntimeException(rr.getAllProblemMessages.toString)
+ // SPARK-46302: When there are some corrupted jars in the maven repo,
+ // we try to continue without the cache
+        val failedReports = rr.getArtifactsReports(DownloadStatus.FAILED, true)
Review Comment:
   My concern is that if the submission machine has a `.m2` dir, it will most
   likely enter the retry path without the local m2 cache, because the local
   m2 cache produced by building the Spark distribution also has this
   [issue](https://github.com/apache/spark/pull/44208#pullrequestreview-1777666633)
   now.
   Perhaps it's difficult to obtain a perfect local m2 cache at the moment.
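
   For context, the "corrupted jars" that SPARK-46302 works around are
   typically files in the local repository (e.g. `~/.m2`) that are not valid
   zip archives, so Ivy's resolve reports them as FAILED downloads. As a
   hedged illustration only (this is not Spark's actual check, and
   `isValidJar` is a hypothetical helper name), such a jar can be detected by
   attempting to open it as a zip archive:

   ```java
   import java.io.File;
   import java.io.FileOutputStream;
   import java.io.IOException;
   import java.util.zip.ZipEntry;
   import java.util.zip.ZipFile;
   import java.util.zip.ZipOutputStream;

   public class JarCheck {
       // A jar is a zip archive; if ZipFile cannot open it, the cached
       // artifact is corrupted (e.g. a truncated or partial download).
       static boolean isValidJar(File f) {
           try (ZipFile zf = new ZipFile(f)) {
               return true;
           } catch (IOException e) {
               return false;
           }
       }

       public static void main(String[] args) throws Exception {
           // A well-formed jar: a zip with one entry.
           File good = File.createTempFile("good", ".jar");
           try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(good))) {
               zos.putNextEntry(new ZipEntry("META-INF/MANIFEST.MF"));
               zos.write("Manifest-Version: 1.0\n".getBytes());
               zos.closeEntry();
           }
           // A corrupted "jar": arbitrary bytes that are not a zip archive.
           File bad = File.createTempFile("bad", ".jar");
           try (FileOutputStream fos = new FileOutputStream(bad)) {
               fos.write("not a jar".getBytes());
           }
           System.out.println(isValidJar(good));
           System.out.println(isValidJar(bad));
           good.delete();
           bad.delete();
       }
   }
   ```

   The reviewer's point is that this kind of corruption can already be
   present in a `.m2` produced by a Spark distribution build, so the
   no-local-m2-cache retry would be triggered on most such machines.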
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]