[ https://issues.apache.org/jira/browse/SPARK-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14001418#comment-14001418 ]

Sean Owen commented on SPARK-1875:
----------------------------------

Here's my recap of what I understand:

- All the original changes concerning `commons-lang` were to ensure that Spark 
itself uses `commons-lang3` and declares that dependency. That's fine and not 
related to this issue.
- The PR https://github.com/apache/spark/pull/754 was intended to mirror my 
changes in https://github.com/apache/spark/pull/746/files for SBT, but I didn't 
look closely enough: it actually also excludes `commons-lang`.
- I don't know of a reason we need to deal with `commons-lang` directly. It 
*should* end up in the built assembly, since transitive dependencies (such as 
Hadoop 1.x) need it. The version resolution among the various 2.x versions 
looks fine AFAICT, so we neither need nor perform any manual version setting.

In short, I do not see why `commons-lang` should be excluded. I think this 
exclusion should simply be reverted.
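For context, here is a minimal sketch of what such an SBT exclusion looks like. The coordinates and rule below are illustrative, not the actual lines from Spark's build definition:

```scala
// Hypothetical sketch of an SBT dependency exclusion; the real lines in
// Spark's build differ. An ExclusionRule like this strips commons-lang
// from the transitive dependency graph, so classes such as
// org.apache.commons.lang.StringUtils (needed by Hadoop 1.x's metrics
// code, per the stack trace below) never make it into the assembly jar.
// Reverting the exclusion restores them.
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "1.2.1" excludeAll (
  ExclusionRule(organization = "commons-lang")  // <- the problematic exclusion
)
```

Removing the `excludeAll(...)` clause lets the normal Ivy/Maven resolution pull in whichever 2.x version the dependency tree agrees on, which is the behavior argued for above.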

> NoClassDefFoundError: StringUtils when building against Hadoop 1
> ----------------------------------------------------------------
>
>                 Key: SPARK-1875
>                 URL: https://issues.apache.org/jira/browse/SPARK-1875
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Matei Zaharia
>            Assignee: Guoqiang Li
>            Priority: Blocker
>             Fix For: 1.0.0
>
>
> Maybe I missed something, but after building an assembly with Hadoop 1.2.1 
> and Hive enabled, if I go into it and run spark-shell, I get this:
> {code}
> java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
>       at 
> org.apache.hadoop.metrics2.lib.MetricMutableStat.<init>(MetricMutableStat.java:59)
>       at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.<init>(MetricsSystemImpl.java:75)
>       at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.<init>(MetricsSystemImpl.java:120)
>       at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>       at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>       at 
> org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>       at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:216)
>       at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
>       at 
> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
>       at 
> org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:79)
>       at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
>       at 
> org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:226)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
>       at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
