[ https://issues.apache.org/jira/browse/HDDS-894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16707966#comment-16707966 ]

Bharat Viswanadham commented on HDDS-894:
-----------------------------------------

Thank you, [~elek], for the fix and the detailed explanation of the cause of 
this error.

+1 LGTM.

I have run s3 smoke tests on the s3 server and AWS s3 endpoint. All tests 
passed.

Will commit this shortly.

> Content-length should be set for ozone s3 ranged download
> ---------------------------------------------------------
>
>                 Key: HDDS-894
>                 URL: https://issues.apache.org/jira/browse/HDDS-894
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: S3
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDDS-894.001.patch
>
>
> Some of the seek-related s3a unit tests fail when using ozone s3g as the 
> destination endpoint.
> For example ITestS3ContractSeek.testRandomSeeks is failing with:
> {code}
> org.apache.hadoop.fs.s3a.AWSClientIOException: read on 
> s3a://buckettest/test/testrandomseeks.bin: com.amazonaws.SdkClientException: 
> Data read has a different length than the expected: dataLength=9411; 
> expectedLength=0; includeSkipped=true; in.getClass()=class 
> com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
> resetSinceLastMarked=false; markCount=0; resetCount=0: Data read has a 
> different length than the expected: dataLength=9411; expectedLength=0; 
> includeSkipped=true; in.getClass()=class 
> com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
> resetSinceLastMarked=false; markCount=0; resetCount=0
>       at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:189)
>       at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
>       at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>       at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>       at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>       at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
>       at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:446)
>       at java.io.DataInputStream.readFully(DataInputStream.java:195)
>       at java.io.DataInputStream.readFully(DataInputStream.java:169)
>       at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyRead(ContractTestUtils.java:256)
>       at 
> org.apache.hadoop.fs.contract.AbstractContractSeekTest.testRandomSeeks(AbstractContractSeekTest.java:357)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>       at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>       at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>       at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>       at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>       at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>       at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
> Checking the requests/responses with a mitm proxy, I found that ranged 
> downloads work well below a certain range length. But if the response is 
> bigger than a specific size, it is chunked by the Jetty server, which could 
> be the problem.
> Response for the problematic request:
> {code}
> Date:                    Mon, 03 Dec 2018 11:27:55 GMT
> Cache-Control:           no-cache
> Expires:                 Mon, 03 Dec 2018 11:27:55 GMT
> Date:                    Mon, 03 Dec 2018 11:27:55 GMT
> Pragma:                  no-cache
> X-Content-Type-Options:  nosniff
> X-FRAME-OPTIONS:         SAMEORIGIN
> X-XSS-Protection:        1; mode=block
> Content-Range:           bytes 208-10239/10240
> Accept-Ranges:           bytes
> Content-Type:            application/octet-stream
> Last-Modified:           Mon, 03 Dec 2018 11:27:54 GMT
> Server:                  Ozone
> x-amz-id-2:              gk2CRdkmri0mc1
> x-amz-request-id:        eb60ee7f-55df-4439-b22a-7d92076f6eee
> Transfer-Encoding:       chunked 
> {code}
> As you can see, the Content-Length header is missing and Transfer-Encoding 
> is set to chunked, so the AWS SDK client cannot verify the expected length.
> Based on [this|https://www.eclipse.org/lists/jetty-users/msg03053.html] 
> comment, the solution is to explicitly add the Content-Length to the 
> response.
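The shape of the fix can be sketched as follows. This is a hypothetical illustration, not the actual HDDS-894 patch: the class and method names are invented. For an inclusive byte range (as carried in Content-Range), the length is end - start + 1; setting that value as Content-Length up front keeps Jetty from falling back to Transfer-Encoding: chunked once the body exceeds its response buffer.

```java
// Hypothetical sketch: compute the Content-Length for a ranged download
// so it can be set explicitly on the response before the body is written.
public class RangeLength {

  // Number of bytes covered by an inclusive byte range [start, end],
  // as used in "Content-Range: bytes start-end/total".
  static long contentLength(long start, long end) {
    return end - start + 1;
  }

  public static void main(String[] args) {
    // For the captured response header "Content-Range: bytes 208-10239/10240":
    long len = contentLength(208, 10239);
    System.out.println(len);
    // In a JAX-RS endpoint the header would then be set explicitly, e.g.:
    //   responseBuilder.header("Content-Length", len);
    // instead of letting the container pick chunked transfer encoding.
  }
}
```

With the header set, the client-side length check (dataLength vs. expectedLength in the stack trace above) has a value to compare against.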



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
