[ https://issues.apache.org/jira/browse/HADOOP-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300847#comment-17300847 ]

Steve Loughran commented on HADOOP-17584:
-----------------------------------------

ooh. not good. Fancy submitting a patch against hadoop-trunk? We can get it 
into hadoop-3.3.1 if it goes in this week.


Please look at the hadoop-aws test policy when you submit the PR. We can help 
with testing the new patch by extending the relevant commit protocol test 
suite and doing a test run ourselves, but the submitter needs to be set up to 
run those tests too. Thanks

Something else to think about: should job setup list/purge pending files? Not 
sufficient on its own: if the .pendingset files are still there, job commit 
will list them and fail during commit, because the uploads aren't there any 
more... your suggestion is the only one which works
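To make the failure mode concrete, here is a minimal, self-contained Java sketch, not Hadoop code; the class, method names, and upload IDs are all invented for illustration. It models job commit as "list every .pendingset, then complete the upload each one references": if job setup only aborted the uploads but left stale .pendingset files behind, commit would still find them and fail.

```java
import java.util.*;

public class JobCommitSketch {
    // Multipart uploads still open on the store (simulated):
    // "upload-B" is live; "upload-A" was aborted in an earlier attempt.
    static Set<String> activeUploads = new HashSet<>(Set.of("upload-B"));

    // Job commit, as described in the comment: for each .pendingset found,
    // try to complete the upload it references. A stale entry whose upload
    // no longer exists makes the whole commit fail.
    static void commitJob(List<String> pendingsetUploads) {
        for (String uploadId : pendingsetUploads) {
            if (!activeUploads.contains(uploadId)) {
                throw new IllegalStateException(
                    "upload " + uploadId + " no longer exists");
            }
        }
    }

    public static void main(String[] args) {
        try {
            // Stale .pendingset from a previous AM attempt still present:
            commitJob(List.of("upload-A", "upload-B"));
        } catch (IllegalStateException e) {
            System.out.println("job commit failed: " + e.getMessage());
        }
    }
}
```

So purging the uploads alone is not enough; the stale .pendingset files themselves have to go, which is why the reporter's suggestion is the workable one.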


> s3a committer may commit more data
> ----------------------------------
>
>                 Key: HADOOP-17584
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17584
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: yinan zhan
>            Priority: Major
>
> The s3a magic committer's isRecoverySupported() returns false, so all tasks 
> are restarted after the application master restarts (e.g. when the AM JVM 
> crashes), leaving .pendingset files in the magic path uncleaned. The 
> pendingset name format is jobAttemptPath + taskAttemptID.getTaskID() + 
> ".pendingset", and in the s3a magic committer jobAttemptPath is actually the 
> job ID path, not the job attempt ID path. These pendingset files are 
> overwritten by new task commits.
> But if, in the new AM attempt, a speculative task wins over the original 
> task, a pendingset file from the previous attempt may be picked up for job 
> commit, so the wrong data is committed.
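The path collision the report describes can be sketched in a few lines of Java. This is an illustrative model, not Hadoop source; the method and the path layout are assumptions based on the name format given above (job ID path plus task ID, with no per-AM-attempt component).

```java
public class PendingsetPathSketch {
    // Assumed layout, per the report: the "jobAttemptPath" component is
    // really a job-ID path, so AM attempt number never enters the name.
    static String pendingsetPath(String magicRoot, String jobId, String taskId) {
        return magicRoot + "/" + jobId + "/" + taskId + ".pendingset";
    }

    public static void main(String[] args) {
        // The same task, committed under AM attempt 1 and again under
        // AM attempt 2, writes to the identical key:
        String attempt1 = pendingsetPath("s3a://bucket/out/__magic",
                "job_0001", "task_000003");
        String attempt2 = pendingsetPath("s3a://bucket/out/__magic",
                "job_0001", "task_000003");
        System.out.println(attempt1.equals(attempt2)); // prints "true"
    }
}
```

Because the paths are identical, a re-run task normally just overwrites the stale file; but if a different (speculative) task attempt commits in the new AM attempt, a stale .pendingset from the old attempt can survive and be included in job commit.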



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
