[ https://issues.apache.org/jira/browse/HADOOP-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833481#comment-17833481 ]
ASF GitHub Bot commented on HADOOP-19098:
-----------------------------------------
steveloughran commented on PR #6698:
URL: https://github.com/apache/hadoop/pull/6698#issuecomment-2034123440
Two failures in the test run:
```
[ERROR] testEOFRanges416Handling[Buffer type : direct](org.apache.hadoop.fs.contract.s3a.ITestS3AContractVectoredRead)  Time elapsed: 0.811 s  <<< ERROR!
java.io.EOFException: HTTP stream closed before all bytes were read. Expected 1,024 bytes but only read 0 bytes. Current position 0 (range [0-66560], length=66,560, reference=null)
    at org.apache.hadoop.fs.s3a.S3AInputStream.readByteArray(S3AInputStream.java:1181)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$populateBuffer$6(S3AInputStream.java:1141)
    at org.apache.hadoop.fs.VectoredReadUtils.readInDirectBuffer(VectoredReadUtils.java:211)
    at org.apache.hadoop.fs.s3a.S3AInputStream.populateBuffer(S3AInputStream.java:1139)
    at org.apache.hadoop.fs.s3a.S3AInputStream.readSingleRange(S3AInputStream.java:1090)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$readVectored$4(S3AInputStream.java:933)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

[ERROR] testEOFRanges416Handling[Buffer type : array](org.apache.hadoop.fs.contract.s3a.ITestS3AContractVectoredRead)  Time elapsed: 1.281 s  <<< ERROR!
java.io.EOFException: HTTP stream closed before all bytes were read. Expected 66,560 bytes but only read 65,536 bytes. Current position 65,536 (range [0-66560], length=66,560, reference=null)
    at org.apache.hadoop.fs.s3a.S3AInputStream.readByteArray(S3AInputStream.java:1181)
    at org.apache.hadoop.fs.s3a.S3AInputStream.populateBuffer(S3AInputStream.java:1148)
    at org.apache.hadoop.fs.s3a.S3AInputStream.readSingleRange(S3AInputStream.java:1090)
    at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$readVectored$4(S3AInputStream.java:933)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
```
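For context, this is roughly the scenario the failing test exercises: a vectored read whose single range extends past the end of the file, so fewer bytes are available than requested. The `FileRange`/`readVectored` calls below are the real Hadoop vector IO API (HADOOP-18103); the file path, the length arithmetic, and the harness class are illustrative assumptions only.
```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EofRangeReadSketch {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);                        // e.g. an s3a:// path
    FileSystem fs = path.getFileSystem(new Configuration());
    long fileLength = fs.getFileStatus(path).getLen();

    // A range starting inside the file but extending 1 KiB past EOF,
    // matching the [0-66560] range on a 65,536-byte file seen above.
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(0, (int) fileLength + 1024));

    try (FSDataInputStream in = fs.open(path)) {
      in.readVectored(ranges, ByteBuffer::allocate);      // or allocateDirect
      for (FileRange range : ranges) {
        // The spec question: does the EOFException surface here, or earlier?
        ByteBuffer buffer = range.getData().get();
        System.out.println("read " + buffer.remaining() + " bytes");
      }
    }
  }
}
```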
> Vector IO: consistent specified rejection of overlapping ranges
> ---------------------------------------------------------------
>
> Key: HADOOP-19098
> URL: https://issues.apache.org/jira/browse/HADOOP-19098
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs, fs/s3
> Affects Versions: 3.3.6
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Related to the PARQUET-2171 question: "how do you deal with overlapping ranges?"
> I believe s3a rejects overlapping ranges, but the other implementations may not.
> Proposed: the FS spec should state that
> * overlap triggers IllegalArgumentException;
> * special case: zero-byte ranges may be short-circuited to return an empty
> buffer, even without checking the file length etc.
> Contract tests will validate this (plus common helper code to do so); a sketch
> of such a check follows below.
> I'll copy the validation code into the Parquet PR for consistency with older
> releases.
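A minimal sketch of the overlap rejection the description proposes, assuming the real Hadoop `FileRange` accessors (`getOffset()`, `getLength()`); `validateAndSort` is a hypothetical helper, not the actual `VectoredReadUtils` code.
```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.apache.hadoop.fs.FileRange;

public final class RangeValidationSketch {

  /**
   * Reject overlapping ranges, per the proposed spec wording.
   * Returns the ranges sorted by offset.
   */
  public static List<FileRange> validateAndSort(List<? extends FileRange> input) {
    List<FileRange> sorted = new ArrayList<>(input);
    sorted.sort(Comparator.comparingLong(FileRange::getOffset));
    FileRange prev = null;
    for (FileRange range : sorted) {
      if (range.getOffset() < 0 || range.getLength() < 0) {
        throw new IllegalArgumentException("Invalid range " + range);
      }
      // Overlap: this range starts before the previous one ends.
      if (prev != null
          && range.getOffset() < prev.getOffset() + prev.getLength()) {
        throw new IllegalArgumentException(
            "Overlapping ranges " + prev + " and " + range);
      }
      prev = range;
    }
    return sorted;
  }
}
```
Zero-byte ranges pass this check and, per the proposed special case, can later be satisfied with an empty buffer without any IO or file-length check.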