[
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756033#comment-16756033
]
Steve Loughran commented on HADOOP-15387:
-----------------------------------------
I actually want to keep out the AWS SDK as it is already shaded. Google GCS has
a shaded artifact too.
What I'm trying to avoid is having all those hadoop-common dependencies surface
as requirements, and things needed by hadoop-azure-datalake, hadoop-azure, etc.
Now, HADOOP-16080 highlights a variant problem: we use (and continue to use)
things in hadoop-common which aren't in hadoop-client API. That's an
interesting complication. As far as the object stores are concerned,
hadoop-common is where we put the common classes, because that's how they are
shared across otherwise isolated implementations. Not sure what to do there.
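As a sketch of what downstream consumption of the proposed artifact might look like — the coordinates below are assumptions based on the issue title; the final groupId, artifactId, and version would depend on how the Hadoop build publishes it — a downstream project's pom might declare:

```xml
<!-- Hypothetical dependency on the proposed shaded cloud-storage JAR.
     The single artifact would transitively provide the shaded
     hadoop-client JAR plus the (already-shaded) aws-sdk-bundle,
     so downstream builds would not inherit hadoop-common's
     unshaded dependency tree. Coordinates are illustrative. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-cloud-storage</artifactId>
  <version>3.x.y</version>
</dependency>
```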
> Produce a shaded hadoop-cloud-storage JAR for applications to use
> -----------------------------------------------------------------
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
> Affects Versions: 3.1.0
> Reporter: Steve Loughran
> Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
> * Hadoop's dependency choices don't constrain downstream projects' decisions
> * There is little/no risk of their JAR changes breaking the Hadoop bits they depend on
> This JAR would pull in the shaded hadoop-client JAR and the aws-sdk-bundle
> JAR, neither of which would be re-shaded (so yes, upgrading AWS SDKs would be
> a bit risky, but double-shading a pre-shaded 30 MB JAR is excessive on
> multiple levels).
> Metric of success: Spark, Tez, Flink, etc. can pick it up and use it, and
> all are happy
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)