steveloughran commented on a change in pull request #3534:
URL: https://github.com/apache/hadoop/pull/3534#discussion_r783920317
##########
File path:
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -3821,23 +3481,14 @@ S3AFileStatus s3GetFileStatus(final Path path,
// by the time of listing so that the response includes some
// which have not.
- int listSize;
- if (tombstones == null) {
- // no tombstones so look for a marker and at least one child.
- listSize = 2;
- } else {
- // build a listing > tombstones. If the caller has many thousands
- // of tombstones this won't work properly, which is why pruning
- // of expired tombstones matters.
- listSize = Math.min(2 + tombstones.size(), Math.max(2, maxKeys));
- }
+ final int listSize = 2;
S3ListRequest request = createListObjectsRequest(dirKey, "/",
listSize);
// execute the request
S3ListResult listResult = listObjects(request,
getDurationTrackerFactory());
- if (listResult.hasPrefixesOrObjects(contextAccessors, tombstones)) {
+ if (listResult.hasPrefixesOrObjects(contextAccessors, null)) {
Review comment:
Well spotted!
I've just gone through looking for all references to *tombstone*.
We can cut all of the list reconciliation out, and also the various test bits
set up to generate unique paths so as to avoid tombstone problems.
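The simplification in the hunk above can be sketched roughly as follows. This is an illustrative standalone class, not the actual `S3AFileSystem` code; `DirProbeSketch`, `LIST_SIZE`, and `isDirectory` are made-up names. The idea: once tombstone filtering is removed, there is no need to over-fetch keys to compensate for entries that will be filtered out, so a listing capped at two keys (one slot for the directory marker, one for any real child) is enough to decide whether a path is a directory.

```java
import java.util.List;

// Hypothetical sketch of the simplified directory probe, under the
// assumption that no listing entries are filtered out after the fact.
public class DirProbeSketch {

  // Marker + at least one child; no extra slots needed for tombstones.
  static final int LIST_SIZE = 2;

  /**
   * With no post-filtering, the path is a directory iff the
   * (size-limited) listing returned anything at all.
   */
  static boolean isDirectory(List<String> keysReturned) {
    return !keysReturned.isEmpty();
  }

  public static void main(String[] args) {
    System.out.println(isDirectory(List.of()));                   // false
    System.out.println(isDirectory(List.of("dir/")));             // true
    System.out.println(isDirectory(List.of("dir/", "dir/file"))); // true
  }
}
```

By contrast, the removed code had to request `min(2 + tombstones.size(), max(2, maxKeys))` keys, because any of the returned entries might have been tombstoned and discarded before the emptiness check.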
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]