[
https://issues.apache.org/jira/browse/SPARK-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dirceu Semighini Filho closed SPARK-5797.
-----------------------------------------
Resolution: Not a Problem
Misunderstood the use of BinaryType; the field should be declared as BooleanType.
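For context, a minimal sketch of the fix described in the resolution, assuming a Spark 1.2-era standalone app (object name, column names, and data are hypothetical; import paths follow the 1.2 API, where the DataType aliases lived under org.apache.spark.sql):

```scala
// Hypothetical sketch of the SPARK-5797 resolution: the column held
// java.lang.Boolean values, so it must be declared BooleanType;
// BinaryType is only for Array[Byte] columns.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, Row}
import org.apache.spark.sql.{StructType, StructField, StringType, BooleanType, BinaryType}

object Spark5797Sketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("spark-5797").setMaster("local"))
    val sqlContext = new SQLContext(sc)

    val rows = sc.parallelize(Seq(
      Row("a", true,  Array[Byte](1, 2)),
      Row("b", false, Array[Byte](3, 4))))

    // "flag" holds Booleans, so it must be BooleanType. Declaring it
    // BinaryType is what triggered the ClassCastException
    // (java.lang.Boolean cannot be cast to [B) when the RDD was cached.
    val schema = StructType(Seq(
      StructField("id",      StringType,  nullable = false),
      StructField("flag",    BooleanType, nullable = false),
      StructField("payload", BinaryType,  nullable = false))) // genuine byte arrays

    val schemaRdd = sqlContext.applySchema(rows, schema)
    schemaRdd.cache()
    schemaRdd.count() // forces the in-memory columnar build; no exception now
    sc.stop()
  }
}
```

The rule of thumb: BinaryType maps to the in-memory byte-array column (`[B`), so any field declared BinaryType must actually contain Array[Byte], never boxed primitives.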
> ClassCastException when using BinaryType field in schemardd
> -----------------------------------------------------------
>
> Key: SPARK-5797
> URL: https://issues.apache.org/jira/browse/SPARK-5797
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.2.0, 1.2.1
> Environment: Linux, Standalone
> Reporter: Dirceu Semighini Filho
> Priority: Minor
> Labels: easyfix, patch
> Original Estimate: 4h
> Remaining Estimate: 4h
>
> Load a dataset with a binary field in it.
> Create a SchemaRDD and set the binary field as BinaryType.
> Try to cache this RDD.
> Result:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 11, localhost): java.lang.ClassCastException: java.lang.Boolean cannot be cast to [B
> at org.apache.spark.sql.columnar.BINARY$.getField(ColumnType.scala:403)
> at org.apache.spark.sql.columnar.BINARY$.getField(ColumnType.scala:398)
> at org.apache.spark.sql.columnar.ByteArrayColumnType.actualSize(ColumnType.scala:383)
> at org.apache.spark.sql.columnar.BinaryColumnStats.gatherStats(ColumnStats.scala:256)
> at org.apache.spark.sql.columnar.NullableColumnBuilder$class.appendFrom(NullableColumnBuilder.scala:56)
> at org.apache.spark.sql.columnar.ComplexColumnBuilder.appendFrom(ColumnBuilder.scala:81)
> at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:125)
> at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:112)
> at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:249)
> at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:163)
> at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:228)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:56)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Expected:
> Should cache the RDD without any exception.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]