GitHub user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/21342#discussion_r189634704
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/BroadcastExchangeExec.scala ---
@@ -111,12 +112,18 @@ case class BroadcastExchangeExec(
           SQLMetrics.postDriverMetricUpdates(sparkContext, executionId, metrics.values.toSeq)
           broadcasted
         } catch {
+          // SPARK-24294: To bypass scala bug: https://github.com/scala/bug/issues/9554, we throw
+          // SparkFatalException, which is a subclass of Exception. ThreadUtils.awaitResult
+          // will catch this exception and re-throw the wrapped fatal throwable.
           case oe: OutOfMemoryError =>
-            throw new OutOfMemoryError(s"Not enough memory to build and broadcast the table to " +
+            throw new SparkFatalException(
+              new OutOfMemoryError(s"Not enough memory to build and broadcast the table to " +
--- End diff --
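For context, here is a minimal sketch of the wrap/unwrap pattern the new code relies on. The `SparkFatalException` class and `awaitResult` helper below are simplified stand-ins I wrote for illustration, not the actual Spark implementations:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration

// Simplified stand-in for org.apache.spark.util.SparkFatalException:
// a plain Exception wrapping a fatal Throwable. A fatal throwable thrown
// inside a Future never completes the Future (scala/bug#9554), so the
// awaiting side would hang; wrapping it in a non-fatal Exception lets
// the Future fail normally and carry the fatal cause across the boundary.
final class SparkFatalException(val throwable: Throwable) extends Exception(throwable)

object FatalWrapSketch {
  // Simplified stand-in for ThreadUtils.awaitResult: await the Future,
  // then unwrap and re-throw the original fatal throwable.
  def awaitResult[T](future: Future[T], timeout: Duration): T =
    try {
      Await.result(future, timeout)
    } catch {
      case e: SparkFatalException => throw e.throwable
    }
}
```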
Just curious: can we safely perform object operations (allocating an `OutOfMemoryError`, allocating and concatenating `String`s) right after catching an `OutOfMemoryError`?
I think we should still have free heap space at that point, since the OOM came from failing to allocate a large object; the toy sketch below illustrates what I mean.
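To make the question concrete, a toy sketch (the `OomHandlerSketch` object is hypothetical, and the outcome depends on heap state; nothing here is guaranteed by the JVM):

```scala
object OomHandlerSketch {
  def main(args: Array[String]): Unit = {
    try {
      // One oversized request (~16 GB for Int.MaxValue Longs) that will
      // almost certainly fail with an OutOfMemoryError.
      val huge = new Array[Long](Int.MaxValue)
      println(huge.length)
    } catch {
      case oe: OutOfMemoryError =>
        // The small allocations in question: a concatenated String and a
        // fresh OutOfMemoryError. These usually succeed because the huge
        // request was rejected without consuming heap, but the JVM gives
        // no hard guarantee that the handler can allocate at all.
        throw new OutOfMemoryError(
          "Not enough memory to build and broadcast the table: " + oe.getMessage)
    }
  }
}
```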
---