Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7929#discussion_r36210157
  
    --- Diff: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala ---
    @@ -62,6 +63,39 @@ private[hive] class ClientWrapper(
       extends ClientInterface
       with Logging {
     
    +  overrideHadoopShims()
    +
    +  // !! HACK ALERT !!
    +  //
    +  // This method is used to workaround CDH Hadoop versions like 
2.0.0-mr1-cdh4.1.1.
    +  //
    +  // Internally, Hive `ShimLoader` tries to load different versions of 
Hadoop shims by checking
    +  // version information gathered from Hadoop jar files.  If the major 
version number is 1,
    +  // `Hadoop20SShims` will be loaded.  Otherwise, if the major version 
number is 2, `Hadoop23Shims`
    +  // will be chosen.  However, CDH Hadoop versions like 2.0.0-mr1-cdh4.1.1 
have 2 as major version
    +  // number, but contain Hadoop 1 code.  This confuses Hive `ShimLoader` 
and loads wrong version of
    --- End diff --
    
    I think it's best to think of Hadoop 2.2 as the first stable 2.x release, 
since in fact, 2.0.x was intended to be the "2.x alpha" line and 2.1 was the 
"2.x beta" line. So a number of APIs were in flux between 1.x and 2.2. (We 
won't get into the 0.x releases.)
    
    In Spark, the `hadoop-1` profile is really a "< 2.2" profile. At least, 
there's been no need to distinguish 2.0 and 2.1 separately. 
    
    `mr1` means "MapReduce v1". It's a release that has more in common with 
Hadoop 1 -- like using the `hadoop-core` module rather than `hadoop-common` -- 
including the classic pre-YARN stable MapReduce implementation. The non-`mr1` 
version has YARN-based MR and is more Hadoop 2-like. It was and is confusing; 
it's not so much a variant of Hadoop as a different packaging of old and new 
Hadoop modules during the transition period. Things are much simpler from 2.2 
onwards, when all that was done with.
    
    I'm sure there's a problem to solve here; I just want to make sure the 
version testing does what it means to, and that the version shims are labeled 
accurately.
    
    So... I don't know if this helps or not. I can tell you that what this 
method tests for is roughly "Hadoop 2.1".
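    To make the mismatch concrete, here's a rough sketch of the major-version 
check being discussed (in Java, since Hive's `ShimLoader` is Java; the shim 
class names `Hadoop20SShims`/`Hadoop23Shims` are the real ones, but the parsing 
logic here is a simplified illustration, not the actual Hive source):

```java
// Simplified sketch of how a major-version-only check misclassifies
// CDH's MR1 builds. Illustrative only -- not the actual Hive ShimLoader code.
public class ShimSelectionSketch {

    static String shimFor(String hadoopVersion) {
        // Only the leading major version number is consulted.
        String major = hadoopVersion.split("\\.")[0];
        switch (major) {
            case "1": return "Hadoop20SShims"; // Hadoop 1.x -> pre-YARN shims
            case "2": return "Hadoop23Shims";  // Hadoop 2.x -> YARN-era shims
            default:
                throw new IllegalArgumentException(
                    "Unrecognized Hadoop version: " + hadoopVersion);
        }
    }

    public static void main(String[] args) {
        System.out.println(shimFor("1.2.1"));              // Hadoop20SShims
        // CDH's MR1 build reports major version 2 but ships Hadoop 1 code,
        // so a check like this picks the YARN-era shims anyway:
        System.out.println(shimFor("2.0.0-mr1-cdh4.1.1")); // Hadoop23Shims
    }
}
```

    The point being: `2.0.0-mr1-cdh4.1.1` parses as major version 2, so any 
check keyed only on the major version will treat it as a Hadoop 2 / YARN 
release even though it packages the MR1 code path.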


