[
https://issues.apache.org/jira/browse/HADOOP-19576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17955588#comment-17955588
]
ASF GitHub Bot commented on HADOOP-19576:
-----------------------------------------
shameersss1 opened a new pull request, #7722:
URL: https://github.com/apache/hadoop/pull/7722
### Description of PR
Pending MPUs are aborted by default for S3 Express stores. This leads to job
failures for use cases where the directory needs to be purged before the final
job commit; hence this change disables pending-MPU purging for all bucket types.
### How was this patch tested?
Tested against `us-east-1` with an S3 Express store bucket. The following tests
were failing both with and without the change:
```
ITestTreewalkProblems.testDistCp:317->lambda$testDistCp$3:318
ITestTreewalkProblems.testDistCpNoIterator:340->lambda$testDistCpNoIterator$4:341
[Exit code of distcp -update -delete -direct
ITestCustomSigner.testCustomSignerAndInitializer
ITestS3AContractAnalyticsStreamVectoredRead.testVectoredReadAfterNormalRead
ITestS3AEndpointRegion.testCentralEndpointAndNullRegionFipsWithCRUD:510 »
AWSUnsupportedFeature
ITestS3AEndpointRegion.testCentralEndpointAndNullRegionWithCRUD:501->assertOpsUsingNewFs:548
» UnknownHost
ITestS3AEndpointRegion.testWithCrossRegionAccess:395 » UnknownHost
getFileStat...
ITestS3AEndpointRegion.testWithOutCrossRegionAccess:374->lambda$testWithOutCrossRegionAccess$2:376
» UnknownHost
ITestConnectionTimeouts.testObjectUploadTimeouts:265 » AWSBadRequest Writing
O...
ITestS3APutIfMatchAndIfNoneMatch.testIfMatchTwoMultipartUploadsRaceConditionOneClosesFirst:551
» AWSS3IO
ITestS3APutIfMatchAndIfNoneMatch.testIfNoneMatchConflictOnMultipartUpload:321->lambda$testIfNoneMatchConflictOnMultipartUpload$2:322->createFileWithFlags:176
» O
ITestS3APutIfMatchAndIfNoneMatch.testIfNoneMatchMultipartUploadWithRaceCondition:349
» AWSS3IO
ITestS3APutIfMatchAndIfNoneMatch.testIfNoneMatchTwoConcurrentMultipartUploads:372
» AWSS3
```
### For code changes:
- [x] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
- [x] Object storage: have the integration tests been executed and the
endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies
licensed in a way that is compatible for inclusion under [ASF
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`,
`NOTICE-binary` files?
> Insert Overwrite Jobs With MagicCommitter Fails On S3 Express Storage
> ---------------------------------------------------------------------
>
> Key: HADOOP-19576
> URL: https://issues.apache.org/jira/browse/HADOOP-19576
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Syed Shameerur Rahman
> Priority: Major
>
> Query engines which use the Magic Committer to overwrite a directory will
> typically initiate the MPUs (without completing them) and then delete the
> contents of the directory before committing the MPUs.
>
> For S3 Express storage, the directory purge operation is enabled by default.
> Refer
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L688]
> for code pointers.
>
> Due to this, the pending MPUs are purged and the query fails with
> {{NoSuchUpload: The specified multipart upload does not exist. The upload ID
> might be invalid, or the multipart upload might have been aborted or
> completed.}}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)