[ https://issues.apache.org/jira/browse/SPARK-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270635#comment-14270635 ]
Gankun Luo commented on SPARK-5141:
-----------------------------------

Resolved

> CaseInsensitiveMap throws "java.io.NotSerializableException"
> ------------------------------------------------------------
>
>                 Key: SPARK-5141
>                 URL: https://issues.apache.org/jira/browse/SPARK-5141
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Gankun Luo
>            Priority: Minor
>
> The following code throws a serialization exception. A reproduction is available at [https://github.com/luogankun/spark-jdbc].
> {code}
> CREATE TEMPORARY TABLE jdbc_table
> USING com.luogankun.spark.jdbc
> OPTIONS (
>   sparksql_table_schema '(TBL_ID int, TBL_NAME string, TBL_TYPE string)',
>   jdbc_table_name 'TBLS',
>   jdbc_table_schema '(TBL_ID, TBL_NAME, TBL_TYPE)',
>   url 'jdbc:mysql://hadoop000:3306/hive',
>   user 'root',
>   password 'root'
> );
> select TBL_ID, TBL_ID, TBL_TYPE from jdbc_table;
> {code}
> I get the following stack trace:
> {code}
> org.apache.spark.SparkException: Task not serializable
> 	at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
> 	at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
> 	at org.apache.spark.SparkContext.clean(SparkContext.scala:1448)
> 	at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:616)
> 	at org.apache.spark.sql.execution.Project.execute(basicOperators.scala:43)
> 	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:81)
> 	at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:386)
> 	at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:365)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.io.NotSerializableException: org.apache.spark.sql.sources.CaseInsensitiveMap
> 	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
> 	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
> 	......
> 	at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
> 	at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:73)
> 	at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
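The failure mode in the stack trace can be reproduced outside Spark: Java serialization throws `java.io.NotSerializableException` as soon as a `Serializable` object (here, the task closure that `ClosureCleaner` checks) holds a field whose class does not itself implement `java.io.Serializable`. The sketch below is a minimal stand-in, not Spark code: `CaseInsensitiveOptions` and `ScanTask` are hypothetical names mirroring a case-insensitive options map that lacks `Serializable` and the closure that captures it.

```java
import java.io.*;
import java.util.*;

public class NotSerializableDemo {

    // Hypothetical stand-in for a case-insensitive options map.
    // Note it does NOT implement Serializable -- the analogue of the bug.
    static class CaseInsensitiveOptions {
        private final Map<String, String> base = new HashMap<>();
        void put(String key, String value) { base.put(key.toLowerCase(), value); }
        String get(String key) { return base.get(key.toLowerCase()); }
    }

    // Stand-in for a task closure: Serializable itself, but it captures
    // the non-serializable options map as a field.
    static class ScanTask implements Serializable {
        final CaseInsensitiveOptions options;
        ScanTask(CaseInsensitiveOptions options) { this.options = options; }
    }

    // Attempts Java serialization, the same check ClosureCleaner.ensureSerializable
    // performs; returns false when NotSerializableException is thrown.
    static boolean canSerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false; // the non-serializable field was reached during traversal
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        CaseInsensitiveOptions opts = new CaseInsensitiveOptions();
        opts.put("url", "jdbc:mysql://hadoop000:3306/hive");
        // Serializing the closure fails because of the captured field.
        System.out.println(canSerialize(new ScanTask(opts))); // prints "false"
        // A plain HashMap is Serializable, so the same check passes.
        System.out.println(canSerialize(new HashMap<String, String>())); // prints "true"
    }
}
```

Under this model, marking the map class `Serializable` makes the closure check pass, which is why the issue could be marked resolved once `CaseInsensitiveMap` was made serializable.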