[
https://issues.apache.org/jira/browse/SPARK-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-10221:
------------------------------
Component/s: SQL
Is this not an issue with the Datastax connector though? I'm not clear.
> RowReaderFactory does not work with blobs
> -----------------------------------------
>
> Key: SPARK-10221
> URL: https://issues.apache.org/jira/browse/SPARK-10221
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Reporter: Max Schmidt
>
> While using a RowReaderFactory from the util API
> (com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowToTuple(,
> Class<ByteBuffer>)) against a Cassandra table with a column that maps to a
> ByteBuffer, I get the following stack trace:
> {quote}
> 8786 [task-result-getter-0] ERROR org.apache.spark.scheduler.TaskSetManager - Task 0.0 in stage 0.0 (TID 0) had a not serializable result: java.nio.HeapByteBuffer
> Serialization stack:
> - object not serializable (class: java.nio.HeapByteBuffer, value: java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
> - field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
> - object (class scala.Tuple4, (/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 11:00:23 CEST 2015,76.808)); not retrying
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: java.nio.HeapByteBuffer
> Serialization stack:
> - object not serializable (class: java.nio.HeapByteBuffer, value: java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
> - field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
> - object (class scala.Tuple4, (/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 11:00:23 CEST 2015,76.808))
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
>     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> {quote}
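> For reference, a minimal sketch of a call that triggers this. The keyspace, table, column names, and connection host below are placeholders, and collect() stands in for any action that ships task results to the driver; the Tuple4 shape matches the serialization stack above:
> {code:java}
> import com.datastax.spark.connector.japi.CassandraJavaUtil;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaSparkContext;
> import scala.Tuple4;
> import java.net.InetAddress;
> import java.nio.ByteBuffer;
> import java.util.Date;
>
> SparkConf conf = new SparkConf()
>     .setAppName("blob-repro")
>     .set("spark.cassandra.connection.host", "127.0.0.1"); // placeholder host
> JavaSparkContext sc = new JavaSparkContext(conf);
>
> // "payload" is a blob column; mapRowToTuple reads it as a ByteBuffer.
> CassandraJavaUtil.javaFunctions(sc)
>     .cassandraTable("ks", "measurements",
>         CassandraJavaUtil.mapRowToTuple(
>             InetAddress.class, ByteBuffer.class, Date.class, Double.class))
>     .select("host", "payload", "ts", "value")
>     .collect(); // fails: java.nio.HeapByteBuffer is not serializable
> {code}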
> Using a wrapper class that follows bean conventions doesn't work either.
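> A possible workaround, sketched under the same placeholder names as the repro above (untested here): copy the blob into a serializable byte[] inside a map() on the executors, so the ByteBuffer itself never has to be sent back to the driver:
> {code:java}
> import java.util.List;
>
> // Continuing from the repro above (same sc, keyspace, table, columns).
> List<Tuple4<InetAddress, byte[], Date, Double>> rows =
>     CassandraJavaUtil.javaFunctions(sc)
>         .cassandraTable("ks", "measurements",
>             CassandraJavaUtil.mapRowToTuple(
>                 InetAddress.class, ByteBuffer.class, Date.class, Double.class))
>         .select("host", "payload", "ts", "value")
>         .map(t -> {
>             // duplicate() so reading doesn't disturb the original buffer's position
>             ByteBuffer buf = t._2().duplicate();
>             byte[] bytes = new byte[buf.remaining()];
>             buf.get(bytes);
>             return new Tuple4<>(t._1(), bytes, t._3(), t._4());
>         })
>         .collect();
> {code}
> A byte[] serializes fine under the default JavaSerializer, which is what trips over HeapByteBuffer; registering a custom Kryo serializer for ByteBuffer might be an alternative.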