Github user davies commented on the pull request:

https://github.com/apache/spark/pull/3003#issuecomment-61179733

The logging looks like this now:
```
14/10/30 15:17:36 WARN Executor: Finished task 1.0 in stage 0.0 (TID 1). Result is larger than maxResultSize (4.5 MB > 1024.0 KB), drop it
14/10/30 15:17:36 ERROR TaskSetManager: Total number of bytes of serialized results is bigger than maxResultSize: 4.5 MB > 1024.0 KB
14/10/30 15:17:36 INFO TaskSchedulerImpl: Cancelling stage 0
org.apache.spark.SparkException: Job aborted due to stage failure: Total size of results is bigger than maxResultSize, please increase spark.driver.maxResultSize or avoid collect() if possible
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1191)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1180)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1179)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
```

Does it work for you?
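
For context, a minimal sketch of how the `spark.driver.maxResultSize` setting mentioned in the error above could be configured from application code; this is not taken from the PR, and the app name, the `2g` value, and the sample RDD are illustrative assumptions.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MaxResultSizeExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("max-result-size-example") // hypothetical app name
      // Raise the cap on the total size of serialized task results that the
      // driver will accept for a single action such as collect().
      .set("spark.driver.maxResultSize", "2g") // illustrative value

    val sc = new SparkContext(conf)

    // collect() pulls every partition's result back to the driver; if their
    // combined serialized size exceeds spark.driver.maxResultSize, the stage
    // is cancelled with the SparkException shown in the log above.
    val collected = sc.parallelize(1 to 1000000).collect()
    println(s"Collected ${collected.length} elements")

    sc.stop()
  }
}
```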