[ https://issues.apache.org/jira/browse/HIVE-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085821#comment-14085821 ]

Brock Noland commented on HIVE-7540:
------------------------------------

Thanks Sandy. For the specific RangePartitioner case, I think we could use writeObject/readObject, which are [optional methods|http://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html] that classes implementing Serializable can provide. If the number of cases where we need to serialize a Writable via Java serialization is small, then providing point solutions using writeObject/readObject in RangePartitioner might be a reasonable fix. If anyone has a feel for how often we will end up hitting this, please speak up. In the absence of more information, I would suggest we work around this issue on the Hive side; once we've gathered more information (how many times we need to serialize a Writable in a closure, the performance impact, etc.), we can decide how to proceed. A sketch of the writeObject/readObject approach follows the quoted issue description below.

> NotSerializableException encountered when using sortByKey transformation
> ------------------------------------------------------------------------
>
>                 Key: HIVE-7540
>                 URL: https://issues.apache.org/jira/browse/HIVE-7540
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>         Environment: Spark-1.0.1
>            Reporter: Rui Li
>
> This exception is thrown when sortByKey is used as the shuffle transformation between MapWork and ReduceWork:
> {quote}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.hadoop.io.BytesWritable
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1049)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1033)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1031)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1031)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:772)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:715)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:719)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage$4.apply(DAGScheduler.scala:718)
> at scala.collection.immutable.List.foreach(List.scala:318)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:718)
> at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:699)
> …
> {quote}
> The root cause is that the RangePartitioner used by sortByKey contains rangeBounds: Array[BytesWritable], which Spark considers not serializable.
> A workaround is to set the number of partitions to 1 when calling sortByKey, in which case rangeBounds will just be an empty array.
> NO PRECOMMIT TESTS. This is for the spark branch only.
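
For reference, here is a minimal sketch of the writeObject/readObject approach suggested above. The wrapper class (SerializableBytesWritable) and its use are hypothetical illustrations, not existing Hive or Spark code; the sketch relies only on the standard java.io.Serializable hooks and on Hadoop's Writable write/readFields contract.

{code:java}
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

import org.apache.hadoop.io.BytesWritable;

// Hypothetical wrapper: lets a BytesWritable survive Java serialization
// (e.g., inside a Spark closure) by delegating to the Writable's own
// write()/readFields() methods from writeObject()/readObject().
public class SerializableBytesWritable implements Serializable {

  // transient: skipped by default Java serialization, handled manually below.
  private transient BytesWritable writable;

  public SerializableBytesWritable(BytesWritable writable) {
    this.writable = writable;
  }

  public BytesWritable get() {
    return writable;
  }

  // Invoked by ObjectOutputStream in place of default field serialization.
  private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    writable.write(out); // ObjectOutputStream implements DataOutput
  }

  // Invoked by ObjectInputStream; rebuilds the transient field.
  private void readObject(ObjectInputStream in)
      throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    writable = new BytesWritable();
    writable.readFields(in); // ObjectInputStream implements DataInput
  }
}
{code}

Under this approach, the range bounds (or a Hive-side shim around the partitioner) would hold wrapped values rather than raw BytesWritable instances; unwrapping is just a method call, though the frequency and performance questions raised above would still need measuring.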