Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/6414#issuecomment-105655718
  
    Nice, it looks like the test failure reproduces the issue addressed by #6400:
    
    ```
    Lost task 0.0 in stage 405.0 (TID 974, localhost): java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.sql.types.UTF8String
      at org.apache.spark.sql.execution.SparkSqlSerializer2$$anonfun$createSerializationFunction$1.apply(SparkSqlSerializer2.scala:319)
      at org.apache.spark.sql.execution.SparkSqlSerializer2$$anonfun$createSerializationFunction$1.apply(SparkSqlSerializer2.scala:212)
      at org.apache.spark.sql.execution.Serializer2SerializationStream.writeKey(SparkSqlSerializer2.scala:65)
      at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:206)
      at org.apache.spark.util.collection.WritablePartitionedIterator$$anon$3.writeNext(WritablePartitionedPairCollection.scala:104)
      at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:375)
      at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:208)
      at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      at org.apache.spark.scheduler.Task.run(Task.scala:70)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)

    Driver stacktrace:
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 405.0 failed 1 times, most recent failure: Lost task 0.0 in stage 405.0 (TID 974, localhost): java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.spark.sql.types.UTF8String
      at org.apache.spark.sql.execution.SparkSqlSerializer2$$anonfun$createSerializationFunction$1.apply(SparkSqlSerializer2.scala:319)
      at org.apache.spark.sql.execution.SparkSqlSerializer2$$anonfun$createSerializationFunction$1.apply(SparkSqlSerializer2.scala:212)
      at org.apache.spark.sql.execution.Serializer2SerializationStream.writeKey(SparkSqlSerializer2.scala:65)
      at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:206)
      at org.apache.spark.util.collection.WritablePartitionedIterator$$anon$3.writeNext(WritablePartitionedPairCollection.scala:104)
      at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:375)
      at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:208)
      at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      at org.apache.spark.scheduler.Task.run(Task.scala:70)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)

    Driver stacktrace:
      at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
      at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
      at scala.Option.foreach(Option.scala:236)
      at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
      at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    ```
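
    For anyone skimming, here is a toy sketch of the failure mode (the `writeKey` and `InternalString` names are hypothetical, not the real SparkSqlSerializer2 code): the serialization function casts each key slot to the internal string type, so a row slot that still holds a plain `java.lang.String` throws exactly this `ClassCastException` at the cast.

    ```scala
    // Toy illustration only -- names are hypothetical, not Spark's actual API.
    final class InternalString(val bytes: Array[Byte])

    object CastSketch {
      // Analogous to the serializer's per-column write function: it assumes the
      // slot already holds the internal representation and casts unconditionally.
      def writeKey(slot: Any): Int = {
        val s = slot.asInstanceOf[InternalString] // throws if `slot` is a java.lang.String
        s.bytes.length
      }

      def main(args: Array[String]): Unit = {
        writeKey(new InternalString("ok".getBytes("UTF-8"))) // fine
        writeKey("plain java String") // java.lang.ClassCastException, as in the trace above
      }
    }
    ```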
    
    How do you want to handle merging this and #6400? I think that #6400 needs to go first so that we don't commit broken code. Should I rebase my branch to include this commit and remove the regression test that I added? Or should we leave both tests in place?

