GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7929#discussion_r36174915
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala ---
    @@ -62,6 +63,39 @@ private[hive] class ClientWrapper(
       extends ClientInterface
       with Logging {
     
    +  overrideHadoopShims()
    +
    +  // !! HACK ALERT !!
    +  //
    +  // This method is used to workaround CDH Hadoop versions like 2.0.0-mr1-cdh4.1.1.
    +  //
    +  // Internally, Hive `ShimLoader` tries to load different versions of Hadoop shims by checking
    +  // version information gathered from Hadoop jar files.  If the major version number is 1,
    +  // `Hadoop20SShims` will be loaded.  Otherwise, if the major version number is 2, `Hadoop23Shims`
    +  // will be chosen.  However, CDH Hadoop versions like 2.0.0-mr1-cdh4.1.1 have 2 as major version
    +  // number, but contain Hadoop 1 code.  This confuses Hive `ShimLoader` and loads wrong version of
    --- End diff --
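
    For illustration, here is a minimal sketch of the major-version check the
    quoted comment describes. The object and method names below are hypothetical,
    not Hive's actual `ShimLoader` source:

        // Hypothetical sketch of the version check described in the quoted
        // comment. "2.0.0-mr1-cdh4.1.1" parses to major version 2, so a check
        // like this selects Hadoop23Shims even though the jar contains
        // Hadoop 1 (MR1) code.
        object ShimSelectionSketch {
          def shimsClassFor(hadoopVersion: String): String =
            hadoopVersion.takeWhile(_ != '.').toInt match {
              case 1 => "Hadoop20SShims"
              case 2 => "Hadoop23Shims"
              case v => sys.error(s"Unrecognized Hadoop major version: $v")
            }

          def main(args: Array[String]): Unit = {
            // Prints "Hadoop23Shims" for the CDH MR1 version string:
            println(shimsClassFor("2.0.0-mr1-cdh4.1.1"))
          }
        }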
    
    If you look at Hadoop releases, you'll find `getPathWithoutSchemeAndAuthority`
    in Hadoop 2.1, but not Hadoop 2.0. So, the CDH release is "correct", and this
    comment is incorrect. I think the real point is just that this isn't the
    correct condition to check to detect Hadoop 2.0, if that's really the intent.
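
    As a purely illustrative sketch (these names are mine, not from the PR), a
    reflection-based probe for that method would correctly report it as missing
    on a 2.0-based release:

        import org.apache.hadoop.fs.Path

        // Hypothetical probe: Path.getPathWithoutSchemeAndAuthority was added
        // in Hadoop 2.1, so on 2.0-based releases such as 2.0.0-mr1-cdh4.1.1
        // this lookup throws NoSuchMethodException, which is expected for that
        // release rather than a sign of a mislabeled jar.
        object HadoopMethodProbe {
          def hasHadoop21Api: Boolean =
            try {
              classOf[Path].getMethod("getPathWithoutSchemeAndAuthority", classOf[Path])
              true
            } catch {
              case _: NoSuchMethodException => false
            }

          def main(args: Array[String]): Unit = {
            println(s"Hadoop 2.1+ API present: $hasHadoop21Api")
          }
        }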
    
    However, it seems like there are bigger problems if Hadoop 2.0 code is
    activated by looking for Hadoop 1 code and, failing that, Hadoop 2.3 code is
    used for any 2.x release? Maybe there's just a naming issue here.

