[ https://issues.apache.org/jira/browse/HADOOP-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18021897#comment-18021897 ]

ASF GitHub Bot commented on HADOOP-19654:
-----------------------------------------

ahmarsuhail commented on PR #7882:
URL: https://github.com/apache/hadoop/pull/7882#issuecomment-3320014995

   Able to reproduce the issue outside of S3A. Basically did what happens when you run a test in S3A:
   
   * probe for the `test/` directory, then create the `test/` directory, then make the `headObject()` call.
   
   The head fails, but if you comment out `requestChecksumCalculation(RequestChecksumCalculation.WHEN_REQUIRED)` it works again.
   
   No idea what's going on yet, but I have shared this local reproduction with the SDK team. It also rules out that it's something in the S3A code.
   
   
   
   ```
   public class TestClass {
   
       S3Client s3Client;
   
       public TestClass() {
           this.s3Client = S3Client.builder()
                   .region(Region.US_EAST_1)
                   .addPlugin(LegacyMd5Plugin.create())
                   .requestChecksumCalculation(RequestChecksumCalculation.WHEN_REQUIRED)
                   .responseChecksumValidation(ResponseChecksumValidation.WHEN_SUPPORTED)
                   .overrideConfiguration(o -> o.retryStrategy(b -> b.maxAttempts(1)))
                   .build();
       }
   
       public void testS3Express(String bucket, String key) {
           // probe for the test/ "directory"
           s3Client.listObjectsV2(ListObjectsV2Request.builder()
                   .bucket("<>")
                   .maxKeys(2)
                   .prefix("test/")
                   .build());
   
           // head of a nonexistent object; expected to fail
           try {
               s3Client.headObject(HeadObjectRequest.builder().bucket("<>")
                       .key("test")
                       .build());
           } catch (Exception e) {
               System.out.println("Exception thrown: " + e.getMessage());
           }
   
           // create the test/ directory marker
           s3Client.putObject(PutObjectRequest.builder()
                   .bucket("<>")
                   .key("test/")
                   .build(), RequestBody.empty());
   
           // this head fails with WHEN_REQUIRED checksum calculation
           s3Client.headObject(HeadObjectRequest.builder().bucket("<>")
                   .key("<>")
                   .build());
       }
   }
   ```




> Upgrade AWS SDK to 2.33.x
> -------------------------
>
>                 Key: HADOOP-19654
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19654
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build, fs/s3
>    Affects Versions: 3.5.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> Upgrade to a recent version of 2.33.x or later while off the critical path of 
> things.
> HADOOP-19485 froze the sdk at a version which worked with third party stores. 
> Apparently the new version works; early tests show that Bulk Delete calls 
> with third party stores complain about lack of md5 headers, so some tuning is 
> clearly going to be needed.
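A minimal sketch of what the dependency bump itself looks like, assuming Hadoop keeps depending on the single SDK `bundle` artifact (the exact 2.33.x patch release, and the version property name used in `hadoop-project/pom.xml`, are still to be confirmed):

```xml
<!-- Hedged sketch: the AWS SDK v2 "bundle" artifact Hadoop builds against.
     "2.33.x" is a placeholder; substitute the patch release that passes the
     third-party-store tests. -->
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>bundle</artifactId>
  <version>2.33.x</version>
</dependency>
```

With the new checksum defaults, third-party stores that still require Content-MD5 need `LegacyMd5Plugin` registered on the client (as in the reproduction above) so the old MD5 headers are sent again.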



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
