[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503814#comment-14503814
 ] 

Sangjin Lee commented on HADOOP-11656:
--------------------------------------

Thanks [~busbey] for the proposal! I have some preliminary high-level comments, 
in no particular order:

If we're going the route of shading for clients, IMO there is less incentive to 
use a different mechanism on the framework side; what would be a reason not to 
consider shading on the framework side if we're shading for the client? I think 
it would be great to provide the same type of solutions for both the client 
side and the framework side, and that would simplify things a lot for users. 
Also, note that the build side of things would bring those two aspects together 
anyway (see below).

The name "hadoop-client-reactor" is rather awkward, as "reactor" already has a 
specific meaning (e.g. a Maven multi-module reactor build), and this is not that.

bq. Classes labeled InterfaceAudience.Public might inadvertently leak 
references to third-party libraries. Doing so will substantially complicate 
isolating things in the client-api.
IMO this is the most significant risk with shading; i.e. no shaded types can 
ever leak to users. Granted, these issues do exist in other solutions including 
OSGi, but there might be ways to solve them in those solutions. With shading it 
is a hard failure. There might be other known issues with shading. Cloudera 
presumably has experience with shaded libraries; could they provide more detail 
on the issues they have run into with shading?
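To make that leak failure mode concrete, here is a minimal, self-contained sketch (the class and method names are mine, not from the proposal). Downstream code compiled against a Public API that exposed an unrelocated third-party type still refers to that type by its original name; after shade-time relocation only the relocated name exists, so resolution fails at runtime rather than at build time. The sketch simulates that by probing a JVM that does not have plain Guava on its classpath:

```java
public class ShadedLeakDemo {
  /** Returns true iff the named class is loadable on the current classpath. */
  public static boolean isLoadable(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // A caller compiled against an API that leaked Guava's Optional refers to
    // it by its original name. In a shaded artifact the class exists only
    // under a relocated name (e.g. org.apache.hadoop.shaded.com.google...),
    // so lookup of the original name fails; here that is simulated by a JVM
    // with no Guava on the classpath at all:
    String original = "com.google.common.base.Optional";
    System.out.println(original + " loadable: " + isLoadable(original));
  }
}
```

The point being: relocation is invisible to already-compiled downstream code, so any Public type that mentions a third-party class turns a build-time dependency into a runtime linkage error.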

bq. Unfortunately, it doesn't provide much upgrade help for applications that 
rely on the classes found in the fallback case.
Could you please elaborate on this point? Do you mean things will break if user 
code relied on a Hadoop dependency implicitly (without bringing its own copy) 
and Hadoop upgraded it to an incompatible version? Note that this type of issue 
may exist with the OSGi approach as well. If OSGi exported that particular 
dependency, then users would also start relying on it implicitly unless they 
bring their own copy. And in that case, if Hadoop upgraded that dependency, the 
user code would break in the same manner.

If Hadoop does not intend to support that use case, OSGi does allow the 
possibility of not exporting these dependencies, in which case the user code 
would simply break right from the start until the user fixes it by bringing 
the dependency themselves.
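As a self-contained sketch of that implicit-dependency breakage (the class names below are simulated; the well-known real-world instance is Guava's Stopwatch.elapsedMillis(), which was removed in a later Guava release than the one Hadoop has long shipped):

```java
// Simulated "before" and "after" versions of a transitive library class.
class LibV1 { long elapsedMillis() { return 0L; } }
class LibV2 { long elapsed() { return 0L; } } // method renamed in the upgrade

public class ImplicitDepDemo {
  /** Returns true iff the class declares a no-arg method with the given name. */
  public static boolean hasMethod(Class<?> c, String name) {
    try {
      c.getDeclaredMethod(name);
      return true;
    } catch (NoSuchMethodException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // User code that never declared this dependency still compiled against
    // V1's method because it happened to be on Hadoop's classpath; once
    // Hadoop swaps in V2, that call site fails at runtime with
    // NoSuchMethodError.
    System.out.println("V1 has elapsedMillis: " + hasMethod(LibV1.class, "elapsedMillis"));
    System.out.println("V2 has elapsedMillis: " + hasMethod(LibV2.class, "elapsedMillis"));
  }
}
```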

bq. Downstream user-provided code must not be required to be an OSGi bundle.
+1. To me this is the only viable approach. The user code (and its 
dependencies) needs to be converted into a fat bundle dynamically at runtime. 
You might want to look at what Apache Geronimo did with regards to this.

The only caveat is what the underlying system bundles (Hadoop+system) should 
export. If we're going to use OSGi, I think we should only export the actual 
public APIs and types the user code can couple to. The implication of that 
decision is that things will fail miserably if any of the implicit dependencies 
is missing from the user code, and we'd spend a lot of time tracking down 
missing dependencies for users. Trust me, this is a non-trivial support cost.
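A hedged sketch of what "export only the public API" could look like with bnd-style OSGi packaging (bundle name, package names, and versions are all illustrative, not from the proposal):

```
# hypothetical bnd.bnd for a Hadoop framework bundle
Bundle-SymbolicName: org.apache.hadoop.client
Bundle-Version: 3.0.0
# export only InterfaceAudience.Public packages:
Export-Package: org.apache.hadoop.fs;version="3.0.0", \
 org.apache.hadoop.conf;version="3.0.0"
# everything else, including third-party dependencies, stays bundle-private,
# so user code cannot resolve against it even implicitly:
Private-Package: *
```

With a wildcard Private-Package, anything not exported is hidden from user bundles, which is exactly the fail-fast trade-off: implicit dependencies break immediately instead of breaking later on upgrade.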

I haven't thought through this completely, but we do need to think about the 
impact on user builds. To create their app (e.g. MR app), what maven artifacts 
would they need to depend on? Note that users usually have a single project for 
their client as well as the code that's executed on the cluster. Do we 
anticipate any changes users are required to make (e.g. cleaning up their 
third-party dependencies)? Although in theory everyone should have a clean pom, 
etc. etc., sadly the reality is very different, and we need to be able to tell 
users what is needed before they can start leveraging this.
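On the build-impact question, here is a hedged sketch of what a user pom might look like under the proposal (the artifact names and version are my assumptions, not settled names):

```xml
<!-- compile against a dependency-free client API artifact... -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.0.0-SNAPSHOT</version>
</dependency>
<!-- ...and pull the isolated implementation in only at runtime -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.0.0-SNAPSHOT</version>
  <scope>runtime</scope>
</dependency>
```

The open question is whether the same pair also serves the code that runs on the cluster, or whether users need a different artifact or scope for that half of their single project.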

> Classpath isolation for downstream clients
> ------------------------------------------
>
>                 Key: HADOOP-11656
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11656
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>              Labels: classloading, classpath, dependencies, scripts, shell
>         Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with e.g. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
