[ https://issues.apache.org/jira/browse/HADOOP-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386427#comment-14386427 ]

Thomas Demoor commented on HADOOP-11753:
----------------------------------------

Not sure we should close this. The change you propose seems harmless (the
result is the same: the entire object is returned) and it makes the code more
readable (one no longer needs to know the HTTP spec), so I'm OK with it (plus
it makes your life easier). [[email protected]], what do you think?

Your other change (HADOOP-11742) is higher risk: we want to be really sure it
doesn't break other backends, and AWS is the standard against which we can all
run tests, so some more justification is required.

> TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-11753
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11753
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.0.0, 2.7.0
>            Reporter: Takenori Sato
>            Assignee: Takenori Sato
>         Attachments: HADOOP-11753-branch-2.7.001.patch
>
>
> _TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows.
> {code}
> testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen)  Time elapsed: 3.312 sec  <<< ERROR!
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: InvalidRange, AWS Error Message: The requested range cannot be satisfied.
>       at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
>       at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
>       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
>       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
>       at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1111)
>       at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91)
>       at org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62)
>       at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127)
>       at java.io.FilterInputStream.read(FilterInputStream.java:83)
>       at org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>       at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>       at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> This is because the Range header is invalid when _S3AInputStream#read_ is
> called after _S3AInputStream#open_ on a zero-byte file:
> {code}
> Range: bytes=0--1
> * i.e. from byte 0 to byte -1
> {code}
> Tested on the latest branch-2.7.
> {quote}
> $ git log
> commit d286673c602524af08935ea132c8afd181b6e2e4
> Author: Jitendra Pandey <[email protected]>
> Date:   Tue Mar 24 16:17:06 2015 -0700
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
