Hi Abe,
thank you very much for surfacing the question. I think there are several 
twists to it, so my apologies in advance as this will be a long answer :)

When we started working on Sqoop 2 a few years back, we intentionally 
pushed the Hadoop dependency as far from the shared libraries as possible. The 
intention was that no component in common or core should depend on or use 
any Hadoop APIs; those should be isolated in separate modules 
(execution/submission engine). The reason is that Hadoop doesn’t have a 
particularly good track record of keeping backward compatibility, and that has 
bitten a lot of projects in the past. For example, every single project that I 
know of that uses MR needs a shim layer to deal with the API differences 
(Pig [1], Hive [2], …). The only exception that I’m aware of is Sqoop 1, and 
the only reason we did not have to introduce shims there is that we 
(shamelessly) copied code from Hadoop into our own code base. Even so, we 
still have places where we had to do version detection [3]. I’m sure that 
Hadoop is getting better as the project matures, but I would still advise being 
careful with the various Hadoop APIs and limiting their usage to the extent 
needed. There will obviously be situations where we want to use a Hadoop API 
to make our lives simpler, such as reusing their security implementation, and 
that will hopefully be fine.
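To make the shim idea concrete for anyone who hasn't run into it: the pattern is roughly the sketch below. All class and method names here are made up for illustration; this is not code from Pig, Hive, or Sqoop. The factory probes the class-path for a class that only exists in one Hadoop line and hands back a matching wrapper, so nothing outside the shim module touches a version-specific API.

```java
// Minimal shim-layer sketch: callers program against one interface, and a
// factory picks the implementation matching whatever Hadoop is present on
// the class-path. All names are illustrative.
interface HadoopShim {
    String describe();
}

class Hadoop1Shim implements HadoopShim {
    // In a real shim this would wrap Hadoop 1.x API calls.
    public String describe() { return "hadoop1"; }
}

class Hadoop2Shim implements HadoopShim {
    // In a real shim this would wrap Hadoop 2.x API calls.
    public String describe() { return "hadoop2"; }
}

public class ShimFactory {
    // Version detection via Class.forName, so neither implementation has to
    // be resolvable at load time against the "wrong" Hadoop.
    public static HadoopShim get() {
        try {
            // This class only exists in the Hadoop 2.x line:
            Class.forName("org.apache.hadoop.mapreduce.counters.GenericCounter");
            return new Hadoop2Shim();
        } catch (ClassNotFoundException e) {
            return new Hadoop1Shim();
        }
    }

    public static void main(String[] args) {
        System.out.println(ShimFactory.get().describe());
    }
}
```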

We can be pretty sure that the Sqoop Server will have Hadoop libraries on 
the class-path, and the concern there was more about backward-incompatible 
changes, which is hopefully less of an issue nowadays. Not introducing a 
Hadoop dependency on the client side had a different reason. hadoop-common is 
quite a heavy jar with a huge number of dependencies; check out the list in 
its pom file [4]. This is a problem because the Sqoop client is meant to be 
small and easily reusable, whereas depending on Hadoop forces the application 
developer onto library versions dictated by Hadoop (like guava, 
commons-*). That in turn forces people to do various weird things, such as 
using custom class loaders to isolate those libraries from the main 
application, which in most cases makes the situation even worse, because the 
Hadoop libraries assume “ownership” of the underlying JVM and run a lot of 
eternal threads per class-loader. Hence I would advise being doubly careful 
about introducing a dependency on Hadoop (common) for our client.
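To make the class-loader point concrete, the trick people resort to looks roughly like the sketch below (the jar path is purely hypothetical). The isolated loader's parent is the bootstrap loader, so the application's class-path and Hadoop's dependency tree can't see each other.

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatedHadoop {
    // Build a class loader whose parent is the bootstrap loader (null), so
    // the application's class-path is invisible inside it and vice versa.
    public static URLClassLoader isolatedLoader(File... jars) throws Exception {
        URL[] urls = new URL[jars.length];
        for (int i = 0; i < jars.length; i++) {
            urls[i] = jars[i].toURI().toURL();
        }
        return new URLClassLoader(urls, null);
    }

    public static void main(String[] args) throws Exception {
        // In real use the URL list would be hadoop-common plus its entire
        // dependency tree; the path below is illustrative only.
        try (URLClassLoader loader =
                 isolatedLoader(new File("/opt/hadoop/hadoop-common-2.6.0.jar"))) {
            try {
                // Application classes are not visible through this loader:
                loader.loadClass(IsolatedHadoop.class.getName());
            } catch (ClassNotFoundException expected) {
                System.out.println("application classes are isolated");
            }
        }
        // The catch: if the isolated code spawns long-lived threads (as the
        // Hadoop libraries do), the loader and everything it loaded can
        // never be garbage collected, which is why this usually makes
        // matters worse rather than better.
    }
}
```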

I’m wondering what we’re trying to achieve by moving the dependency from 
“provided” to “compile”. Do we just want to ensure that it’s always there on 
the Server side, or is the intent to get it to the client?
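For context, my understanding of the practical difference between the two scopes (using the hadoop-common 2.6.0 coordinates from [4] as the example):

```xml
<!-- "provided": we compile against hadoop-common, but it is not bundled and
     does not become a transitive dependency; the runtime environment (the
     Sqoop server) is expected to supply it. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
  <scope>provided</scope>
</dependency>

<!-- "compile" (the default scope): hadoop-common and its whole dependency
     tree become transitive dependencies of every module that depends on
     ours, including the client jar. -->
```

It is that transitive pull-in that worries me for the client.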

Jarcec

Links:
1: https://github.com/apache/pig/tree/trunk/shims/src
2: https://github.com/apache/hive/tree/trunk/shims
3: 
https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/mapreduce/hcat/SqoopHCatUtilities.java#L962
4: 
http://search.maven.org/#artifactdetails%7Corg.apache.hadoop%7Chadoop-common%7C2.6.0%7Cjar

> On Dec 10, 2014, at 7:56 AM, Abraham Elmahrek <[email protected]> wrote:
> 
> Hey guys,
> 
> With the work being done in Sqoop2 involving authentication, there are a
> few classes that are being used from hadoop auth and eventually hadoop
> common.
> 
> I'd like to gauge how folks feel about including the hadoop libraries as a
> "compile" time dependency rather than "provided". The reasons being:
> 
>   1. Hadoop maintains wire compatibility within a major version:
>   
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Wire_compatibility
>   2. UserGroupInformation and other useful interfaces are marked as
>   "Evolving" or "Unstable":
>   
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/InterfaceClassification.html
>   .
> 
> I've been looking around and it seems most projects include Hadoop as a
> compile time dependency:
> 
>   1. Kite -
>   
> https://github.com/kite-sdk/kite/blob/master/kite-hadoop-dependencies/cdh5/pom.xml
>   2. Flume - https://github.com/apache/flume/blob/trunk/pom.xml
>   3. Oozie - https://github.com/apache/oozie/tree/master/hadooplibs
>   4. hive - https://github.com/apache/hive/blob/trunk/pom.xml#L1067
> 
> IMO wire compatibility is easier to maintain than Java API compatibility.
> There may be features in future Hadoop releases that we'll want to use on
> the security side as well.
> 
> -Abe
