[
https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-22660:
------------------------------
     Summary: Use position() and limit() to fix ambiguity issue in scala-2.12
(was: Compile with scala-2.12 and JDK9)
> Use position() and limit() to fix ambiguity issue in scala-2.12
> ---------------------------------------------------------------
>
> Key: SPARK-22660
> URL: https://issues.apache.org/jira/browse/SPARK-22660
> Project: Spark
> Issue Type: Improvement
> Components: Build
> Affects Versions: 2.2.0
> Reporter: liyunzhang
> Priority: Minor
>
> Build with scala-2.12 using the following steps:
> 1. Change the pom.xml to scala-2.12:
> ./dev/change-scala-version.sh 2.12
> 2. Build with the -Pscala-2.12 profile.
> For Hive on Spark:
> {code}
> ./dev/make-distribution.sh --tgz -Pscala-2.12 -Phadoop-2.7 -Pyarn
> -Pparquet-provided -Dhadoop.version=2.7.3
> {code}
> For Spark SQL:
> {code}
> ./dev/make-distribution.sh --tgz -Pscala-2.12 -Phadoop-2.7 -Pyarn -Phive
> -Dhadoop.version=2.7.3>log.sparksql 2>&1
> {code}
> This produces the following errors.
> #Error1
> {code}
> /common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java:172:
> error: cannot find symbol
> Cleaner cleaner = Cleaner.create(buffer, () -> freeMemory(memory));
> {code}
> This is because sun.misc.Cleaner has been moved to a new location in JDK9.
> HADOOP-12760 will be the long-term fix.
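As background on the JDK9 change (this is only an illustrative sketch, not the HADOOP-12760 fix), java.lang.ref.Cleaner is the supported JDK9+ replacement for sun.misc.Cleaner:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanerSketch {
    public static void main(String[] args) {
        // On JDK 9+, java.lang.ref.Cleaner replaces sun.misc.Cleaner:
        // register an object together with a cleanup action to run when the
        // object becomes phantom reachable, or on an explicit clean() call.
        Cleaner cleaner = Cleaner.create();
        AtomicBoolean freed = new AtomicBoolean(false);
        Cleaner.Cleanable cleanable =
            cleaner.register(new Object(), () -> freed.set(true));
        cleanable.clean(); // runs the cleanup action explicitly, at most once
        System.out.println("freed = " + freed.get()); // prints "freed = true"
    }
}
```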
> #Error2
> {code}
> spark_source/core/src/main/scala/org/apache/spark/executor/Executor.scala:455:
> ambiguous reference to overloaded definition, method limit in class
> ByteBuffer of type (x$1: Int)java.nio.ByteBuffer
> method limit in class Buffer of type ()Int
> match expected type ?
> val resultSize = serializedDirectResult.limit
> error
> {code}
> In JDK9, ByteBuffer gained an overloaded limit(int) setter alongside the
> limit() getter inherited from the superclass Buffer, so in scala-2.12 the
> method can no longer be called without (). The same applies to the position
> method.
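To illustrate the overload pair behind the ambiguity, here is a small Java sketch (illustrative only, not Spark code). Buffer declares limit() returning int, while on JDK9+ ByteBuffer declares limit(int) returning ByteBuffer, so the fix for Scala 2.12 is to spell out the getter call with explicit parentheses:

```java
import java.nio.ByteBuffer;

public class LimitSketch {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        // Buffer.limit() is the getter; ByteBuffer.limit(int) is the JDK 9+
        // covariant setter overload. Scala 2.12 sees both for a bare `limit`,
        // so the fix writes the getter with parentheses: .limit()
        int resultSize = buf.limit();
        buf.limit(32); // the setter overload, unambiguous with an argument
        System.out.println(resultSize + " -> " + buf.limit()); // 64 -> 32
    }
}
```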
> #Error3
> {code}
> home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:415:
> ambiguous reference to overloaded definition, [error] both method putAll in
> class Properties of type (x$1: java.util.Map[_, _])Unit [error] and method
> putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <:
> Object])Unit [error] match argument types (java.util.Map[String,String])
> [error] properties.putAll(propsMap.asJava)
> [error] ^
> [error]
> /home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:427:
> ambiguous reference to overloaded definition, [error] both method putAll in
> class Properties of type (x$1: java.util.Map[_, _])Unit [error] and method
> putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <:
> Object])Unit [error] match argument types (java.util.Map[String,String])
> [error] props.putAll(outputSerdeProps.toMap.asJava)
> [error] ^
> {code}
> This is because the inherited Hashtable.putAll takes keys and values typed
> as Object rather than String, which is unsafe and makes the call ambiguous.
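A hedged Java sketch of one way around the ambiguity (illustrative names, not the actual Spark patch): skip the overloaded putAll entirely and copy the entries through the String-typed setProperty:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PutAllSketch {
    public static void main(String[] args) {
        Map<String, String> propsMap = new HashMap<>();
        propsMap.put("serialization.format", "1");

        Properties properties = new Properties();
        // properties.putAll(propsMap) is what Scala 2.12 reports as ambiguous
        // (Properties.putAll vs Hashtable.putAll). Copying the entries via
        // the String-typed setProperty sidesteps the overload resolution.
        propsMap.forEach(properties::setProperty);

        System.out.println(properties.getProperty("serialization.format"));
    }
}
```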
> After fixing these three errors, the build compiles successfully.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)