[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272368#comment-16272368 ]
liyunzhang commented on SPARK-22660:
------------------------------------

[~srowen]: one thing confuses me: I did not change java.version in the pom.xml, I only pointed {{$JAVA_HOME}} at a JDK 9 directory, yet the build seems to use JDK 9 to compile and throws JDK 9 related exceptions.

> Compile with scala-2.12 and JDK9
> --------------------------------
>
>                 Key: SPARK-22660
>                 URL: https://issues.apache.org/jira/browse/SPARK-22660
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 2.2.0
>            Reporter: liyunzhang
>            Priority: Minor
>
> Build with Scala 2.12 using the following steps:
> 1. Change the pom.xml to Scala 2.12:
>    ./dev/change-scala-version.sh 2.12
> 2. Build with -Pscala-2.12:
>    ./dev/make-distribution.sh --tgz -Pscala-2.12 -Phadoop-2.7 -Pyarn -Pparquet-provided -Dhadoop.version=2.7.3
> The build fails with the following errors.
> #Error1
> {code}
> /common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java:172: error: cannot find symbol
> Cleaner cleaner = Cleaner.create(buffer, () -> freeMemory(memory));
> {code}
> This is because sun.misc.Cleaner has been moved to a new location in JDK 9. HADOOP-12760 will be the long-term fix.
> #Error2
> {code}
> spark_source/core/src/main/scala/org/apache/spark/executor/Executor.scala:455: ambiguous reference to overloaded definition,
> method limit in class ByteBuffer of type (x$1: Int)java.nio.ByteBuffer
> method limit in class Buffer of type ()Int
> match expected type ?
> val resultSize = serializedDirectResult.limit
> {code}
> The limit method was moved from ByteBuffer to the superclass Buffer, and it can no longer be called without (). The same applies to the position method.
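For #Error1, the public replacement for sun.misc.Cleaner (available since JDK 9) is java.lang.ref.Cleaner. A minimal Scala sketch of the replacement pattern, assuming JDK 9+; the names `CleanerSketch` and `demo` are illustrative, not Spark code:

```scala
import java.lang.ref.Cleaner
import java.util.concurrent.atomic.AtomicBoolean

object CleanerSketch {
  // JDK 9+ public Cleaner; one shared instance per component is recommended.
  private val cleaner = Cleaner.create()

  // Registers a cleanup action and triggers it explicitly; returns whether it ran.
  def demo(): Boolean = {
    val freed = new AtomicBoolean(false)
    val resource = new Object // stand-in for an off-heap buffer wrapper
    // The action must not capture `resource` itself, or the object would
    // never become phantom-reachable and the action would never run on GC.
    val cleanable = cleaner.register(resource, () => freed.set(true))
    cleanable.clean() // explicit cleanup path, like the freeMemory call in Platform.java
    freed.get()
  }

  def main(args: Array[String]): Unit = println(demo()) // true
}
```

Note this only sketches the API shape; Spark's actual fix also has to keep working on JDK 8, which is why HADOOP-12760 is cited as the long-term approach.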
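The #Error2 fix can be sketched as follows: writing an explicit empty argument list selects the no-arg `Buffer.limit(): Int` accessor and avoids the ambiguity with the JDK 9 covariant `ByteBuffer.limit(Int)` override. A standalone sketch, not the actual Executor.scala change:

```scala
import java.nio.ByteBuffer

object LimitFix {
  // The explicit () picks Buffer.limit(): Int rather than
  // the JDK 9 covariant override ByteBuffer.limit(Int): ByteBuffer.
  def resultSize(buf: ByteBuffer): Int = buf.limit()

  def main(args: Array[String]): Unit = {
    val serializedDirectResult = ByteBuffer.allocate(64)
    // Under JDK 9, `serializedDirectResult.limit` without parentheses
    // is ambiguous in Scala; limit() compiles on both JDK 8 and 9.
    println(resultSize(serializedDirectResult)) // 64
  }
}
```

The same `()` treatment applies to `position`.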
> #Error3
> {code}
> home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:415: ambiguous reference to overloaded definition,
> [error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
> [error] and  method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
> [error] match argument types (java.util.Map[String,String])
> [error] properties.putAll(propsMap.asJava)
> [error]            ^
> [error] /home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:427: ambiguous reference to overloaded definition,
> [error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
> [error] and  method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
> [error] match argument types (java.util.Map[String,String])
> [error] props.putAll(outputSerdeProps.toMap.asJava)
> [error]       ^
> {code}
> This is because Properties inherits from Hashtable with Object keys instead of String, which is unsafe, so the two putAll overloads become ambiguous.
> After fixing these three errors, the build compiles successfully.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
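As a footnote to #Error3 above: one way to sidestep the ambiguous putAll overloads is to copy the entries with setProperty instead. A minimal sketch with made-up property names, not the actual ScriptTransformationExec.scala change:

```scala
import java.util.Properties

object PutAllFix {
  // Copies a Scala Map into java.util.Properties without the ambiguous putAll.
  def toProperties(m: Map[String, String]): Properties = {
    val props = new Properties()
    // setProperty also enforces String keys and values, which putAll does not.
    m.foreach { case (k, v) => props.setProperty(k, v) }
    props
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical SerDe-style properties, for illustration only.
    val propsMap = Map("serialization.format" -> "1", "field.delim" -> "\t")
    println(toProperties(propsMap).getProperty("serialization.format")) // 1
  }
}
```

This form compiles identically on JDK 8 and JDK 9, since no overload resolution between Properties.putAll and Hashtable.putAll is needed.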