[ https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16743040#comment-16743040 ]
Steve Loughran commented on HADOOP-15961:
-----------------------------------------

Yetus only runs the unit tests, not the real {{mvn verify}} tests with real credentials. Submitters are required to have done that themselves, and the way we check everyone's honesty is to ask them to list the specific S3 endpoint they ran against, e.g. "US west", "Ireland", "my own S3 service". See: https://hadoop.apache.org/docs/r3.1.1/hadoop-aws/tools/hadoop-aws/testing.html#Policy_for_submitting_patches_which_affect_the_hadoop-aws_module

> S3A committers: make sure there's regular progress() calls
> ----------------------------------------------------------
>
>                 Key: HADOOP-15961
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15961
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Steve Loughran
>            Assignee: lqjacklee
>            Priority: Minor
>     Attachments: HADOOP-15961-001.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress()
> callbacks are needed, just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data
> to the local FS that the upload takes longer than the timeout.
> It should call progress() after every single file commit, or better: modify
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks
> after every part upload.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
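The change the issue suggests — threading a Progressable into the upload loop so a heartbeat fires after every part upload — can be sketched roughly as below. This is a minimal standalone illustration, not the actual S3A committer code: the `Progressable` interface here is a stand-in for `org.apache.hadoop.util.Progressable`, and this `uploadFileToPendingCommit` signature and its part-upload loop are hypothetical simplifications.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressSketch {

    // Minimal stand-in for org.apache.hadoop.util.Progressable.
    interface Progressable {
        void progress();
    }

    // Hypothetical sketch: upload a staged file part by part, invoking the
    // progress callback after each part so that long uploads to S3 keep
    // reporting liveness and the task is not killed for inactivity.
    static int uploadFileToPendingCommit(List<byte[]> parts,
                                         Progressable progress) {
        int uploaded = 0;
        for (byte[] part : parts) {
            // ... real code would PUT this part to S3 here ...
            uploaded++;
            progress.progress();   // heartbeat after every part upload
        }
        return uploaded;
    }
}
```

The same idea applies one level up in `StagingCommitter.commitTaskInternal()`: even without plumbing a callback into the upload itself, calling `progress()` once per committed file would bound the silent interval by the largest single file rather than the whole task's staged output.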