Repository: spark
Updated Branches:
  refs/heads/master ac7fc3075 -> d0f36bcb1


[SPARK-20633][SQL] FileFormatWriter should not wrap FetchFailedException

## What changes were proposed in this pull request?

Explicitly handle FetchFailedException in FileFormatWriter so it is rethrown 
as-is instead of being wrapped in a SparkException; the scheduler needs to see 
the fetch failure itself to treat it as a fetch failure rather than as an 
ordinary task failure.

Note that this is no longer strictly necessary after SPARK-19276, but it 
improves error messages and helps others avoid stumbling over this in the 
future.
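
For reference, a minimal sketch of the pattern this change applies (the object 
and method names below are hypothetical, for illustration only): rethrow 
FetchFailedException unchanged and wrap everything else.

```scala
import org.apache.spark.SparkException
import org.apache.spark.shuffle.FetchFailedException

// Hypothetical helper illustrating the exception-handling pattern;
// not the actual FileFormatWriter code.
object WriteTaskErrorHandling {
  def runWriteTask(body: => Unit): Unit = {
    try {
      body
    } catch {
      case e: FetchFailedException =>
        // Rethrow unchanged so the fetch failure is reported as-is.
        throw e
      case t: Throwable =>
        // Everything else is still wrapped for a clearer error message.
        throw new SparkException("Task failed while writing rows.", t)
    }
  }
}
```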

## How was this patch tested?

Existing unit tests.

Closes https://github.com/apache/spark/pull/17893

Author: Liu Shaohui <[email protected]>

Closes #18145 from squito/SPARK-20633.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d0f36bcb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d0f36bcb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d0f36bcb

Branch: refs/heads/master
Commit: d0f36bcb10c3f424e87a6a38def0c0a3b60c03d1
Parents: ac7fc30
Author: Liu Shaohui <[email protected]>
Authored: Wed May 31 10:53:31 2017 -0500
Committer: Imran Rashid <[email protected]>
Committed: Wed May 31 10:53:31 2017 -0500

----------------------------------------------------------------------
 .../spark/sql/execution/datasources/FileFormatWriter.scala      | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/d0f36bcb/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
index afe454f..0daffa9 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
@@ -31,6 +31,7 @@ import org.apache.spark._
 import org.apache.spark.internal.Logging
 import org.apache.spark.internal.io.{FileCommitProtocol, SparkHadoopWriterUtils}
 import org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
+import org.apache.spark.shuffle.FetchFailedException
 import org.apache.spark.sql.SparkSession
 import org.apache.spark.sql.catalyst.catalog.{BucketSpec, ExternalCatalogUtils}
 import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
@@ -259,8 +260,10 @@ object FileFormatWriter extends Logging {
         }
       })
     } catch {
+      case e: FetchFailedException =>
+        throw e
       case t: Throwable =>
-        throw new SparkException("Task failed while writing rows", t)
+        throw new SparkException("Task failed while writing rows.", t)
     }
   }
 

