[jira] [Commented] (HDDS-894) Content-length should be set for ozone s3 ranged download

2018-12-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16707982#comment-16707982
 ] 

Hudson commented on HDDS-894:
-----------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15553 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15553/])
HDDS-894. Content-length should be set for ozone s3 ranged download. (bharat: 
rev de4255509adbd15fbbf9ade245ae6bb6db8b36b7)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectGet.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java


> Content-length should be set for ozone s3 ranged download
> ---------------------------------------------------------
>
> Key: HDDS-894
> URL: https://issues.apache.org/jira/browse/HDDS-894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-894.001.patch
>
>
> Some of the seek-related s3a unit tests fail when using ozone s3g as the 
> destination endpoint.
> For example, ITestS3ContractSeek.testRandomSeeks fails with:
> {code}
> org.apache.hadoop.fs.s3a.AWSClientIOException: read on 
> s3a://buckettest/test/testrandomseeks.bin: com.amazonaws.SdkClientException: 
> Data read has a different length than the expected: dataLength=9411; 
> expectedLength=0; includeSkipped=true; in.getClass()=class 
> com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
> resetSinceLastMarked=false; markCount=0; resetCount=0: Data read has a 
> different length than the expected: dataLength=9411; expectedLength=0; 
> includeSkipped=true; in.getClass()=class 
> com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; 
> resetSinceLastMarked=false; markCount=0; resetCount=0
>   at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:189)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:446)
>   at java.io.DataInputStream.readFully(DataInputStream.java:195)
>   at java.io.DataInputStream.readFully(DataInputStream.java:169)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyRead(ContractTestUtils.java:256)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractSeekTest.testRandomSeeks(AbstractContractSeekTest.java:357)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Checking the requests/responses with a mitm proxy, I found that it works 
> well below a given range length.
> But if the response would be bigger than a specific size, the response is 
> chunked by the Jetty server, which could be the problem.
> Response for the problematic request:
> {code}
> Date:                   Mon, 03 Dec 2018 11:27:55 GMT
> Cache-Control:          no-cache
> Expires:                Mon, 03 Dec 2018 11:27:55 GMT
> Date:                   Mon, 03 Dec 2018 11:27:55 GMT
> Pragma:                 no-cache
> X-Content-Type-Options: nosniff
> X-FRAME-OPTIONS:
> {code}
[jira] [Commented] (HDDS-894) Content-length should be set for ozone s3 ranged download

2018-12-03 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16707966#comment-16707966
 ] 

Bharat Viswanadham commented on HDDS-894:
-----------------------------------------

Thank You, [~elek] for the fix and detailed explanation of the reason for this 
error.

+1 LGTM.

I have run s3 smoke tests on the s3 server and AWS s3 endpoint. All tests 
passed.

Will commit this shortly.

 

 


[jira] [Commented] (HDDS-894) Content-length should be set for ozone s3 ranged download

2018-12-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16707403#comment-16707403
 ] 

Hadoop QA commented on HDDS-894:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 31s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 41s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.container.common.helpers.TestBlockData |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.TestBlockDeletingService |
|   | hadoop.ozone.scm.TestContainerSmallFile |
|   | hadoop.ozone.container.keyvalue.TestBlockManagerImpl |
|   | hadoop.ozone.container.common.impl.TestHddsDispatcher |
|   | hadoop.ozone.container.keyvalue.TestKeyValueBlockIterator |
|   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-894 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950400/HDDS-894.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux d732012db90e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 3044b78 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1863/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1863/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1863/testReport/ |
| Max. process+thread count | 1375 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1863/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.


