[https://issues.apache.org/jira/browse/HADOOP-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112613#comment-16112613]
Steve Loughran commented on HADOOP-13134:
-----------------------------------------
WORKAROUND: tell the job committer to ignore failures in cleanup.
As discussed in [Spark Cloud
Integration|https://github.com/apache/spark/blob/master/docs/cloud-integration.md],
you can downgrade failures during cleanup to warnings. I recommend this when
working against object stores, for a slightly more robust commit: directory
delete is a more complex/brittle operation there, and more prone to failures.
{code}
spark.hadoop.mapreduce.fileoutputcommitter.cleanup-failures.ignored true
{code}
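For example (a minimal sketch; the class and application names are illustrative), the same property can be set programmatically when building the Spark session, since any property prefixed with {{spark.hadoop.}} is copied into the job's Hadoop Configuration:

{code}
import org.apache.spark.sql.SparkSession;

public final class IgnoreCleanupFailures {
  public static void main(String[] args) {
    // "spark.hadoop."-prefixed properties are copied into the job's Hadoop
    // Configuration, so this setting reaches FileOutputCommitter at commit time.
    SparkSession spark = SparkSession.builder()
        .appName("cloud-write-example") // illustrative name
        .config("spark.hadoop.mapreduce.fileoutputcommitter.cleanup-failures.ignored", "true")
        .getOrCreate();

    // ... run the job: failures during committer cleanup are now logged
    // as warnings instead of failing the job ...

    spark.stop();
  }
}
{code}

The same property can also be passed on the command line via {{--conf}} to spark-submit.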
> WASB's file delete still throwing Blob not found exception
> ----------------------------------------------------------
>
> Key: HADOOP-13134
> URL: https://issues.apache.org/jira/browse/HADOOP-13134
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 2.7.1
> Reporter: Lin Chan
> Assignee: Dushyanth
>
> WASB is still throwing a blob-not-found exception, as shown in the following
> stack. Need to catch that and convert it to a boolean return code in WASB's delete.
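
A minimal sketch of the kind of handling the quoted description calls for (the {{deleteQuietly}} helper below is hypothetical, not the actual NativeAzureFileSystem patch): under the {{FileSystem#delete}} contract, a path that no longer exists means nothing was deleted, so the exception becomes a {{false}} return rather than a job failure:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical helper; the real fix belongs inside the WASB delete path itself. */
public final class QuietDelete {
  private QuietDelete() {}

  /**
   * Delete a path, mapping "blob not found" onto the boolean contract of
   * FileSystem#delete: if the blob vanished (e.g. a concurrent task deleted
   * it first), report "nothing deleted" rather than propagating the error.
   */
  public static boolean deleteQuietly(FileSystem fs, Path path, boolean recursive)
      throws IOException {
    try {
      return fs.delete(path, recursive);
    } catch (FileNotFoundException e) {
      // The path was already gone; treat it as "already deleted".
      return false;
    }
  }
}
{code}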