steveloughran commented on a change in pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260#discussion_r717567393
##########
File path:
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -743,11 +775,24 @@ protected void verifyBucketExists()
   */
  @Retries.RetryTranslated
  protected void verifyBucketExistsV2()
-      throws UnknownStoreException, IOException {
+      throws UnknownStoreException, IOException {
    if (!invoker.retry("doesBucketExistV2", bucket, true,
        trackDurationOfOperation(getDurationTrackerFactory(),
            STORE_EXISTS_PROBE.getSymbol(),
-            () -> s3.doesBucketExistV2(bucket)))) {
+            () -> {
+              // Bug in SDK always returns `true` for AccessPoint ARNs with `doesBucketExistV2()`
+              // expanding implementation to use ARNs and buckets correctly
+              try {
+                s3.getBucketAcl(bucket);
+              } catch (AmazonServiceException ex) {
+                int statusCode = ex.getStatusCode();
+                if (statusCode == 404 || (statusCode == 403 && accessPoint != null)) {
Review comment:
Starting to think we should use constants here, just to make it easier to track down where they come from. I see we don't do that elsewhere (e.g. S3AUtils.translateException()), but that doesn't mean we shouldn't start.
Could you add constants for HTTP_RESPONSE_404 & 403 in InternalConstants and refer to them here? Then we could retrofit and extend elsewhere.
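To illustrate the suggestion (a sketch only: the holder class below is hypothetical, and the SC_403/SC_404 names are just one option for what could go into InternalConstants), the constants and the decision they feed might look like:

```java
// Sketch of the proposed constants. In the PR they would live in
// org.apache.hadoop.fs.s3a.impl.InternalConstants rather than in this
// illustrative holder class, and the names are only suggestions.
public final class HttpStatusSketch {

  /** HTTP status code 403: forbidden / access denied. */
  public static final int SC_403 = 403;

  /** HTTP status code 404: not found. */
  public static final int SC_404 = 404;

  private HttpStatusSketch() {
  }

  /**
   * The verifyBucketExistsV2() decision expressed against the constants:
   * a 404 always means the store is missing, while a 403 only counts as
   * "missing" when the probe went through an Access Point ARN.
   */
  public static boolean storeIsMissing(int statusCode, boolean usingAccessPoint) {
    return statusCode == SC_404
        || (statusCode == SC_403 && usingAccessPoint);
  }
}
```

With constants like these in place, the catch block above becomes `if (statusCode == SC_404 || (statusCode == SC_403 && accessPoint != null))`, and the other literal 403/404 checks (S3AUtils.translateException() among them) can be retrofitted later.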
##########
File path:
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -1167,7 +1216,10 @@ public String getBucketLocation(String bucketName)
      throws IOException {
    final String region = trackDurationAndSpan(
        STORE_EXISTS_PROBE, bucketName, null, () ->
            invoker.retry("getBucketLocation()", bucketName, true, () ->
-                s3.getBucketLocation(bucketName)));
+                // If accessPoint then region is known from Arn
+                accessPoint != null
Review comment:
Should we pull this up to L1216 so we can skip the entire overhead of duration tracking, retry, etc.?
Currently it's overkill to wrap, but the wrapping does add the probe to the IOStatistics, so maybe it's best to leave it as is.
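For reference, the "pull it up" variant being weighed here would be roughly the sketch below: short-circuit on the access point before entering the duration-tracking/retry wrapper. This is only an illustration of the trade-off, not the PR's code; accessPoint, invoker, s3, trackDurationAndSpan() and STORE_EXISTS_PROBE are the members already used in the surrounding method, and the existing post-processing of the returned region is omitted.

```java
public String getBucketLocation(String bucketName) throws IOException {
  if (accessPoint != null) {
    // The region is already known from the Access Point ARN, so returning it
    // here skips the S3 call, the retry wrapper and the duration tracking.
    // It also means no STORE_EXISTS_PROBE entry lands in the IOStatistics.
    return accessPoint.getRegion();
  }
  return trackDurationAndSpan(
      STORE_EXISTS_PROBE, bucketName, null, () ->
          invoker.retry("getBucketLocation()", bucketName, true, () ->
              s3.getBucketLocation(bucketName)));
}
```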
##########
File path:
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##########
@@ -2570,6 +2614,11 @@ protected S3ListResult continueListObjects(S3ListRequest request,
            OBJECT_CONTINUE_LIST_REQUEST,
            () -> {
              if (useListV1) {
+                if (accessPoint != null) {
+                  // AccessPoints are not compatible with V1List
+                  throw new InvalidRequestException("ListV1 is not supported by AccessPoints");
Review comment:
I see you've reverted this and are letting the SDK fail it instead. worksforme