Qinghe Jin created SPARK-21676:
----------------------------------

             Summary: Cannot compile against Hadoop 2.2.0 with the Hive profile
                 Key: SPARK-21676
                 URL: https://issues.apache.org/jira/browse/SPARK-21676
             Project: Spark
          Issue Type: Bug
          Components: Build
    Affects Versions: 2.2.0
         Environment: CentOS 6, Java 8, Maven 3.5.0, Hadoop 2.2, Hive 0.12.0

            Reporter: Qinghe Jin



Using the following command to compile:

"./make-distribution.sh --tgz -Phadoop-2.2 -Pyarn -Phive -Dhadoop.version=2.2.0"
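
(Note: as the warning near the end of the log below shows, the "hadoop-2.2" Maven profile does not exist in Spark 2.2.0, so "-Phadoop-2.2" is silently ignored. For comparison, a build against a Hadoop line the 2.2.0 POM still defines a profile for, e.g. hadoop-2.6, would be expected to compile; the exact version number below is my assumption and may need adjusting:

"./dev/make-distribution.sh --tgz -Phadoop-2.6 -Pyarn -Phive -Dhadoop.version=2.6.5")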

The build then fails with the following output:

[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:149: value getThreadStatistics is not a member of org.apache.hadoop.fs.FileSystem.Statistics
[error]     val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
[error]                                                             ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:149: ambiguous implicit values:
[error]  both object BigIntIsIntegral in object Numeric of type scala.math.Numeric.BigIntIsIntegral.type
[error]  and object ShortIsIntegral in object Numeric of type scala.math.Numeric.ShortIsIntegral.type
[error]  match expected type Numeric[B]
[error]     val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
[error]                                                                                              ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:166: could not find implicit value for parameter num: Numeric[(Nothing, Nothing)]
[error]           }.sum
[error]             ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:180: value getThreadStatistics is not a member of org.apache.hadoop.fs.FileSystem.Statistics
[error]     val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
[error]                                                                 ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:181: value getBytesWritten is not a member of Nothing
[error]     val f = () => threadStats.map(_.getBytesWritten).sum
[error]                                     ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:181: ambiguous implicit values:
[error]  both object BigIntIsIntegral in object Numeric of type scala.math.Numeric.BigIntIsIntegral.type
[error]  and object ShortIsIntegral in object Numeric of type scala.math.Numeric.ShortIsIntegral.type
[error]  match expected type Numeric[B]
[error]     val f = () => threadStats.map(_.getBytesWritten).sum
[error]                                                      ^
[WARNING] unused-1.0.0.jar, spark-network-common_2.11-2.2.0.jar, spark-network-shuffle_2.11-2.2.0.jar, spark-tags_2.11-2.2.0.jar define 1 overlapping classes:
[WARNING]   - org.apache.spark.unused.UnusedStubClass
[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/
[INFO]
[INFO] --- maven-source-plugin:3.0.1:jar-no-fork (create-source-jar) @ spark-network-yarn_2.11 ---
[INFO] Building jar: SPARK_DIR/spark-2.2.0/common/network-yarn/target/spark-network-yarn_2.11-2.2.0-sources.jar
[INFO]
[INFO] --- maven-source-plugin:3.0.1:test-jar-no-fork (create-source-jar) @ spark-network-yarn_2.11 ---
[INFO] Building jar: SPARK_DIR/spark-2.2.0/common/network-yarn/target/spark-network-yarn_2.11-2.2.0-test-sources.jar
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala:324: not found: type InputSplitWithLocationInfo
[error]       case lsplit: InputSplitWithLocationInfo =>
[error]                    ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala:403: not found: type SplitLocationInfo
[error]        infos: Array[SplitLocationInfo]): Option[Seq[String]] = {
[error]                     ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala:325: value getLocationInfo is not a member of org.apache.hadoop.mapred.InputSplit
[error]         HadoopRDD.convertSplitLocationInfo(lsplit.getLocationInfo)
[error]                                                   ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala:404: type mismatch;
[error]  found   : x$2.type (with underlying type Array[<error>])
[error]  required: ?{def flatMap: ?}
[error] Note that implicit conversions are not applicable because they are ambiguous:
[error]  both method booleanArrayOps in object Predef of type (xs: Array[Boolean])scala.collection.mutable.ArrayOps[Boolean]
[error]  and method byteArrayOps in object Predef of type (xs: Array[Byte])scala.collection.mutable.ArrayOps[Byte]
[error]  are possible conversion functions from x$2.type to ?{def flatMap: ?}
[error]     Option(infos).map(_.flatMap { loc =>
[error]                       ^
[error] SPARK_DIR/spark-2.2.0/core/src/main/scala/org/apache/spark/rdd/NewHadoopRDD.scala:282: value getLocationInfo is not a member of org.apache.hadoop.mapreduce.InputSplit with org.apache.hadoop.io.Writable
[error]     val locs = HadoopRDD.convertSplitLocationInfo(split.getLocationInfo)
[error]                                                         ^
[error] 11 errors found
[error] Compile failed at Aug 9, 2017 7:42:39 PM [14.558s]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [  2.381 s]
[INFO] Spark Project Tags ................................. SUCCESS [  3.072 s]
[INFO] Spark Project Sketch ............................... SUCCESS [  5.196 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 11.149 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  4.893 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  9.325 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  9.934 s]
[INFO] Spark Project Core ................................. FAILURE [ 17.150 s]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 10.325 s]
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SUCCESS [  1.461 s]
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  7.797 s]
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Spark Project External Flume Sink .................. SUCCESS [  6.742 s]
[INFO] Spark Project External Flume ....................... SKIPPED
[INFO] Spark Project External Flume Assembly .............. SKIPPED
[INFO] Spark Integration for Kafka 0.8 .................... SKIPPED
[INFO] Kafka 0.10 Source for Structured Streaming ......... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Project External Kafka Assembly .............. SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 39.133 s (Wall Clock)
[INFO] Finished at: 2017-08-09T19:42:39+08:00
[INFO] Final Memory: 48M/2011M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "hadoop-2.2" could not be activated because it does not exist.
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (scala-compile-first) on project spark-core_2.11: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile failed.: CompileFailed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-core_2.11
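
All of the missing symbols above (FileSystem.Statistics.getThreadStatistics, InputSplitWithLocationInfo, SplitLocationInfo, InputSplit.getLocationInfo) are Hadoop APIs that, as far as I know, only appeared around Hadoop 2.5, and Spark 2.2.0 raised its minimum supported Hadoop version (note the nonexistent hadoop-2.2 profile in the warning above), so core now calls them directly instead of guarding them. As a rough illustration of what a guard would look like, here is a minimal reflection-based sketch in the spirit of what older Spark releases did; the HadoopCompat object and threadBytesRead name are hypothetical, not Spark code:

import org.apache.hadoop.fs.FileSystem
import scala.collection.JavaConverters._
import scala.util.Try

object HadoopCompat {
  // Hypothetical helper: invoke the newer
  // FileSystem.Statistics.getThreadStatistics API reflectively so this
  // source still compiles against Hadoop 2.2 jars; on runtimes where
  // the method is missing it falls back to 0 instead of failing.
  def threadBytesRead(): Long =
    FileSystem.getAllStatistics.asScala.map { stats =>
      Try {
        val ts = stats.getClass.getMethod("getThreadStatistics").invoke(stats)
        ts.getClass.getMethod("getBytesRead").invoke(ts)
          .asInstanceOf[java.lang.Long].longValue()
      }.getOrElse(0L) // getThreadStatistics is absent before ~Hadoop 2.5
    }.sum
}

Since Spark 2.2.0 itself no longer carries such guards, the practical workaround appears to be building against Hadoop 2.6 or later (or staying on an older Spark line that still supported Hadoop 2.2) rather than patching core.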




