[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279208#comment-16279208 ] liyunzhang commented on SPARK-22660:
[~srowen]: I have seen the comments in the PR and will fix them soon. Thanks for the reminder!

> Compile with scala-2.12 and JDK9
>
> Key: SPARK-22660
> URL: https://issues.apache.org/jira/browse/SPARK-22660
> Project: Spark
> Issue Type: Improvement
> Components: Build
> Affects Versions: 2.2.0
> Reporter: liyunzhang
> Priority: Minor
>
> Build with scala-2.12 using the following steps.
> 1. Change the pom.xml to scala-2.12:
> ./dev/change-scala-version.sh 2.12
> 2. Build with -Pscala-2.12.
> For Hive on Spark:
> {code}
> ./dev/make-distribution.sh --tgz -Pscala-2.12 -Phadoop-2.7 -Pyarn -Pparquet-provided -Dhadoop.version=2.7.3
> {code}
> For Spark SQL:
> {code}
> ./dev/make-distribution.sh --tgz -Pscala-2.12 -Phadoop-2.7 -Pyarn -Phive -Dhadoop.version=2.7.3 > log.sparksql 2>&1
> {code}
> The build fails with the following errors.
> #Error1
> {code}
> /common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java:172: error: cannot find symbol
> Cleaner cleaner = Cleaner.create(buffer, () -> freeMemory(memory));
> {code}
> This is because sun.misc.Cleaner has been moved to a new location in JDK9. HADOOP-12760 will be the long-term fix.
> #Error2
> {code}
> spark_source/core/src/main/scala/org/apache/spark/executor/Executor.scala:455: ambiguous reference to overloaded definition,
> method limit in class ByteBuffer of type (x$1: Int)java.nio.ByteBuffer
> method limit in class Buffer of type ()Int
> match expected type ?
> val resultSize = serializedDirectResult.limit
> {code}
> The limit method is now also declared on ByteBuffer, not only on the superclass Buffer, so from Scala it can no longer be called without (). The same applies to the position method.
> #Error3
> {code}
> /home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:415: ambiguous reference to overloaded definition,
> [error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
> [error] and method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
> [error] match argument types (java.util.Map[String,String])
> [error] properties.putAll(propsMap.asJava)
> [error]            ^
> [error] /home/zly/prj/oss/jdk9_HOS_SOURCE/spark_source/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala:427: ambiguous reference to overloaded definition,
> [error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
> [error] and method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
> [error] match argument types (java.util.Map[String,String])
> [error] props.putAll(outputSerdeProps.toMap.asJava)
> [error]       ^
> {code}
> This is because the putAll overloads take keys typed as Object instead of String, which is unsafe, and Scala cannot choose between them.
> After solving these 3 errors, the build compiles successfully.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
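For #Error1, until HADOOP-12760 lands, a common workaround is to look the Cleaner class up reflectively so the same source compiles on both JDK8 and JDK9. This is only an illustrative sketch (the class name CleanerProbe is made up; Spark's actual fix may differ):

```java
// Sketch only: locate a Cleaner class without compiling against sun.misc directly.
public class CleanerProbe {
    static Class<?> findCleanerClass() {
        // Try the JDK8 location first, then the JDK9 internal location.
        for (String name : new String[] {"sun.misc.Cleaner", "jdk.internal.ref.Cleaner"}) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException ignored) {
                // fall through to the next candidate
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Class<?> cleaner = findCleanerClass();
        System.out.println(cleaner == null ? "no Cleaner class found" : cleaner.getName());
    }
}
```

Method handles resolved once at class-initialization time would avoid the per-call reflection cost; the lookup above only shows the dual-location idea.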
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278310#comment-16278310 ] Sean Owen commented on SPARK-22660:
Please fix your pull request before doing anything else here.
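For #Error2 described earlier in the thread, the portable fix is to pin the call to the java.nio.Buffer declaration, either by adding () in Scala or by an upcast. A minimal Java sketch of the upcast workaround (the class name BufferCompat is hypothetical):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class BufferCompat {
    // JDK9 adds covariant overrides such as ByteBuffer.limit(int) returning
    // ByteBuffer. Upcasting to Buffer pins the call to the Buffer declaration,
    // which exists on JDK8 as well, avoiding the Scala overload ambiguity.
    static int limitOf(ByteBuffer buf) {
        return ((Buffer) buf).limit();
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        System.out.println(limitOf(buf)); // prints 64: a fresh buffer's limit equals its capacity
    }
}
```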
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278064#comment-16278064 ] liyunzhang commented on SPARK-22660:
OK, I created SPARK-22687 to record the runtime problem.
{quote}But here you are already Hadoop 2 won't work with Java 9.{quote}
Sorry for not describing it clearly: the Hadoop here is hadoop-3.0.0, which is enabled for JDK9 (HADOOP-14984, HADOOP-14978).
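For #Error3 described earlier in the thread, copying entries one by one with setProperty sidesteps the ambiguous putAll overloads entirely (the same idea works in Scala, e.g. propsMap.foreach { case (k, v) => properties.setProperty(k, v) }). A Java sketch of this workaround (the class name PropsCompat is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class PropsCompat {
    // setProperty takes (String, String) exactly, so there is no overload to
    // disambiguate, and keys/values stay typed as String.
    static Properties toProperties(Map<String, String> map) {
        Properties props = new Properties();
        for (Map.Entry<String, String> e : map.entrySet()) {
            props.setProperty(e.getKey(), e.getValue());
        }
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("serialization.format", "1");
        System.out.println(toProperties(m).getProperty("serialization.format")); // prints 1
    }
}
```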
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276583#comment-16276583 ] Sean Owen commented on SPARK-22660:
You keep changing what this JIRA is about. There are too many JDK 9 issues for one JIRA. Please change this to match the scope of the PR you opened; after that, identify another logical change or fix. But here you are already on Hadoop 2, and Hadoop 2 won't work with Java 9.
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276383#comment-16276383 ] liyunzhang commented on SPARK-22660:
When running Spark SQL on the above package, an exception is thrown:
{code}
[root@bdpe41 spark-2.3.0-SNAPSHOT-bin-2.7.3]# ./bin/spark-shell --driver-memory 1G
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/zly/spark-2.3.0-SNAPSHOT-bin-2.7.3/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2017-12-05 03:03:23,511 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
      /_/

Using Scala version 2.12.4 (Java HotSpot(TM) 64-Bit Server VM, Java 9.0.1)
Type in expressions to have them evaluated.
Type :help for more information.

scala> Spark context Web UI available at http://bdpe41:4040
Spark context available as 'sc' (master = local[*], app id = local-1512414208378).
Spark session available as 'spark'.

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
warning: there was one deprecation warning (since 2.0.0); for details, enable `:setting -deprecation' or `:replay -deprecation'
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@8da0e54

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> case class Customer(customer_id: Int, name: String, city: String, state: String, zip_code: String)
defined class Customer

scala> val dfCustomers = sc.textFile("/home/zly/spark-2.3.0-SNAPSHOT-bin-2.7.3/customers.txt").map(_.split(",")).map(p => Customer(p(0).trim.toInt, p(1), p(2), p(3), p(4))).toDF()
2017-12-05 03:04:02,647 WARN util.ClosureCleaner: Expected a closure; got org.apache.spark.SparkContext$$Lambda$2237/371823738
2017-12-05 03:04:02,649 WARN util.ClosureCleaner: Expected a closure; got org.apache.spark.SparkContext$$Lambda$2242/539107678
2017-12-05 03:04:02,651 WARN util.ClosureCleaner: Expected a closure; got $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$Lambda$2245/345086812
2017-12-05 03:04:02,654 WARN util.ClosureCleaner: Expected a closure; got $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$Lambda$2246/1829622584
2017-12-05 03:04:03,861 WARN metadata.Hive: Failed to access metastore. This class should not accessed in runtime.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
	at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
	at org.apache.hadoop.hive.ql.metadata.Hive.(Hive.java:166)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
	at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)
	at org.apache.spark.sql.hive.client.HiveClientImpl.(HiveClientImpl.scala:114)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:488)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:383)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$databaseExists$1(HiveExternalCatalog.scala:195)
	at
{code}
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272389#comment-16272389 ] liyunzhang commented on SPARK-22660:
I changed java.version to 9 in the pom.xml and rebuilt; luckily there are no other problems.
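The java.version change mentioned above amounts to a one-line edit to a Maven property in the top-level pom.xml (a minimal fragment; the surrounding elements and profiles are omitted):

```xml
<!-- top-level pom.xml: compile with/for JDK9 instead of cross-compiling for 8 -->
<properties>
  <java.version>9</java.version>
</properties>
```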
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272382#comment-16272382 ] Sean Owen commented on SPARK-22660:
It just means you're cross-compiling for Java 8 using Java 9. I don't think that's what you intend, so set java.version to 9.
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272368#comment-16272368 ] liyunzhang commented on SPARK-22660:
[~srowen]: One thing is very confusing: I didn't change java.version in the pom.xml, but only pointed {{$JAVA_HOME}} at the JDK9 directory, and the build seems to use JDK9 to compile and throws JDK9-related exceptions.
[jira] [Commented] (SPARK-22660) Compile with scala-2.12 and JDK9
[ https://issues.apache.org/jira/browse/SPARK-22660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272344#comment-16272344 ] liyunzhang commented on SPARK-22660:
OK, I will put all the modifications for scala-2.12 and JDK9 (SPARK-22660, SPARK-22659, SPARK-22661) in this JIRA.