dvogelbacher commented on a change in pull request #24677:
[SPARK-27805][PYTHON] Propagate SparkExceptions during toPandas with arrow enabled
URL: https://github.com/apache/spark/pull/24677#discussion_r288986101
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##########
@@ -3313,20 +3313,34 @@ class Dataset[T] private[sql](
         }
       }
-      val arrowBatchRdd = toArrowBatchRdd(plan)
-      sparkSession.sparkContext.runJob(
-        arrowBatchRdd,
-        (it: Iterator[Array[Byte]]) => it.toArray,
-        handlePartitionBatches)
+      var sparkException: Option[SparkException] = None
+      try {
+        val arrowBatchRdd = toArrowBatchRdd(plan)
+        sparkSession.sparkContext.runJob(
+          arrowBatchRdd,
+          (it: Iterator[Array[Byte]]) => it.toArray,
+          handlePartitionBatches)
+      } catch {
+        case e: SparkException =>
+          sparkException = Some(e)
+      }
-      // After processing all partitions, end the stream and write batch order indices
+      // After processing all partitions, end the batch stream
       batchWriter.end()
Review comment:
If we put it into a finally block but only catch `SparkException`, that would be wrong: if a different exception were thrown, we would fall into `case None`, end the stream as if nothing had happened, and get only partial, incorrect data on the Python side.
If we want to put this into a finally block, we should catch all exceptions; I figured I'd do the same as in
https://github.com/apache/spark/pull/24070/files#r279589039
It should be fine as is: if any exception that isn't a `SparkException` gets thrown, we never reach this code. Instead, the `OutputStream` just gets closed and we get an `EOFError` on the Python side (as we currently do for all exceptions).
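For illustration, here is a minimal, self-contained sketch of the finally-based alternative described above. This is not the actual Spark code: `BatchWriter` and `runJob` are hypothetical stand-ins for the real `Dataset.scala` collaborators. The key point is catching `Throwable` rather than just `SparkException`, so that no failure can fall into `case None` and end the stream as if all batches had arrived:

```scala
// Hedged sketch of the "finally + catch-all" variant; names are stand-ins,
// not Spark's real API.
object FinallyVariantSketch {
  // Stand-in for the Arrow batch stream writer.
  class BatchWriter {
    def end(): Unit = println("stream ended cleanly")
  }

  // Stand-in for sparkSession.sparkContext.runJob(...); here it always fails.
  def runJob(): Unit = throw new RuntimeException("worker failed")

  def main(args: Array[String]): Unit = {
    val batchWriter = new BatchWriter
    var error: Option[Throwable] = None
    try {
      runJob()
    } catch {
      // Catch every failure, not just SparkException, so nothing slips
      // past the error check below.
      case t: Throwable => error = Some(t)
    } finally {
      error match {
        case Some(t) =>
          // Report the failure to the consumer instead of ending the
          // stream as if nothing happened.
          println(s"propagating failure: ${t.getMessage}")
        case None =>
          batchWriter.end()
      }
    }
  }
}
```

Running this prints `propagating failure: worker failed` instead of ending the stream, which is exactly the property the finally-based variant has to preserve.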