[jira] [Assigned] (SPARK-24379) BroadcastExchangeExec should catch SparkOutOfMemory and re-throw SparkFatalException, which wraps SparkOutOfMemory inside.
[ https://issues.apache.org/jira/browse/SPARK-24379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-24379:

    Assignee: (was: Apache Spark)

> BroadcastExchangeExec should catch SparkOutOfMemory and re-throw
> SparkFatalException, which wraps SparkOutOfMemory inside.
> -----------------------------------------------------------------
>
>                 Key: SPARK-24379
>                 URL: https://issues.apache.org/jira/browse/SPARK-24379
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: jin xing
>            Priority: Major
>
> After SPARK-22827, Spark no longer fails the entire executor on
> SparkOutOfMemoryError; it fails only the task that hit the error. The
> current BroadcastExchangeExec, however, try-catches plain OutOfMemoryError.
> Consider the following scenario:
> # SparkOutOfMemoryError (a subclass of OutOfMemoryError) is thrown inside a
> scala.concurrent.Future;
> # the SparkOutOfMemoryError is caught, and a plain OutOfMemoryError is
> wrapped in SparkFatalException and re-thrown;
> # ThreadUtils.awaitResult catches the SparkFatalException and throws the
> wrapped OutOfMemoryError;
> # the OutOfMemoryError reaches SparkUncaughtExceptionHandler.uncaughtException
> and the executor fails.
> It therefore makes more sense to catch SparkOutOfMemoryError and re-throw a
> SparkFatalException that wraps the SparkOutOfMemoryError inside.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
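The wrap/unwrap pattern the four steps above describe can be sketched in plain JVM code. This is an illustrative stand-in, not Spark's actual implementation: the classes `SparkOutOfMemoryError` and `SparkFatalException` below are minimal local substitutes for Spark's real classes of the same names, and `awaitResult` only mimics the unwrapping behavior attributed to `ThreadUtils.awaitResult`.

```java
// Minimal sketch of the proposed fix: wrap the SparkOutOfMemoryError itself
// (not a fresh OutOfMemoryError), so its task-level type survives unwrapping.
public class WrapDemo {

    // Stand-in for Spark's SparkOutOfMemoryError (a subclass of OutOfMemoryError
    // that is meant to fail only the current task, not the whole executor).
    static class SparkOutOfMemoryError extends OutOfMemoryError {
        SparkOutOfMemoryError(String msg) { super(msg); }
    }

    // Stand-in for Spark's SparkFatalException: a non-fatal wrapper used to
    // carry a fatal Throwable out of a Future without killing the thread.
    static class SparkFatalException extends RuntimeException {
        SparkFatalException(Throwable cause) { super(cause); }
    }

    // Simulates the broadcast-build task body: the OOM is caught and wrapped.
    static Throwable runTask() {
        try {
            throw new SparkOutOfMemoryError("broadcast build over memory limit");
        } catch (SparkOutOfMemoryError oe) {
            // The fix: wrap the caught SparkOutOfMemoryError itself.
            return new SparkFatalException(oe);
        }
    }

    // Mimics the unwrapping step: inspect the cause the caller would re-throw.
    public static String awaitResult() {
        Throwable t = runTask();
        if (t instanceof SparkFatalException) {
            Throwable cause = t.getCause();
            // With the fix the cause is still a SparkOutOfMemoryError, so only
            // the task fails; a plain OutOfMemoryError would instead reach the
            // uncaught-exception handler and fail the executor.
            return (cause instanceof SparkOutOfMemoryError)
                    ? "task-failure" : "executor-death";
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(awaitResult());
    }
}
```

Running `main` prints `task-failure`, showing that the original exception type is preserved through the wrap/unwrap round trip; wrapping a fresh `OutOfMemoryError` in `runTask` would make the same check report `executor-death`, which is the bug described in the issue.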
[jira] [Assigned] (SPARK-24379) BroadcastExchangeExec should catch SparkOutOfMemory and re-throw SparkFatalException, which wraps SparkOutOfMemory inside.
[ https://issues.apache.org/jira/browse/SPARK-24379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-24379:

    Assignee: Apache Spark

(The quoted issue description is identical to the notification above, except
that the Assignee field now reads "Apache Spark".)