[ https://issues.apache.org/jira/browse/HDDS-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16674012#comment-16674012 ]

Steve Loughran commented on HDDS-791:
-------------------------------------

Make sure that zero-byte ranges are supported, even on a 0-byte file. That is a 
source of pain. The S3A client executes ranged GETs when a file is opened with 
fadvise=random, or when it detects random IO. 
The benefit comes from being able to seek() quickly to a new offset, and from 
PositionedReadable being able to set the exact range to read. In both cases the 
HTTP connection can be reused: there is no need to either read to the end of the 
file (a real performance killer in production which doesn't show up in 
small-file tests) or to abort the TCP stream and negotiate a new one.
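
To make the point concrete, here is a minimal, illustrative Java sketch of how a 
positioned read can be served by a single ranged GET instead of draining the 
whole stream. The class and method names are hypothetical and are not S3A 
internals; this only demonstrates the Range-header idea described above.

{code}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Illustrative only: a positioned read of `length` bytes at `offset`
 * mapped onto a single ranged GET, so the connection never has to drain
 * to end-of-file. Hypothetical names, not S3A internals.
 */
public class RangedGetSketch {

  static int readFully(URL objectUrl, long offset, byte[] buffer,
      int bufOffset, int length) throws IOException {
    if (length == 0) {
      return 0; // zero-byte reads must succeed, even against an empty object
    }
    HttpURLConnection conn = (HttpURLConnection) objectUrl.openConnection();
    // Ask for exactly the bytes [offset, offset + length - 1].
    conn.setRequestProperty("Range",
        "bytes=" + offset + "-" + (offset + length - 1));
    int read = 0;
    try (InputStream in = conn.getInputStream()) {
      while (read < length) {
        int n = in.read(buffer, bufOffset + read, length - read);
        if (n < 0) {
          break; // the object was shorter than the requested range
        }
        read += n;
      }
    } finally {
      conn.disconnect();
    }
    return read;
  }
}
{code}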

> Support Range header for ozone s3 object download
> -------------------------------------------------
>
>                 Key: HDDS-791
>                 URL: https://issues.apache.org/jira/browse/HDDS-791
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: S3
>            Reporter: Elek, Marton
>            Priority: Major
>
> Using the s3 REST API, smaller chunks of an object can be downloaded by using 
> Range headers.
> For example:
> {code}
> GET /example-object HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> x-amz-date: Fri, 28 Jan 2011 21:32:02 GMT
> Range: bytes=0-9
> Authorization: AWS AKIAIOSFODNN7EXAMPLE:Yxg83MZaEgh3OZ3l0rLo5RTX11o=
> {code}
> This can be implemented using the seek method on OzoneInputStream.
> Range header support is one of the missing pieces for fully supporting the s3a 
> interface.
> References:
> Range header spec:
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
> Aws s3 doc:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
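
Building on the note in the description about the seek method on 
OzoneInputStream, the following is a minimal sketch of how a gateway-side GET 
handler could honour a simple "bytes=start-end" range. Only the seek() call on 
OzoneInputStream is taken from the description; the surrounding wiring (how the 
bucket, key name, key length and response stream arrive, and the exact package 
paths) is assumed for illustration.

{code}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.io.OzoneInputStream;

/**
 * Sketch of a handler for "Range: bytes=<start>-<end>".
 * Only the seek() call comes from the issue description;
 * everything around it is assumed wiring.
 */
public class RangeDownloadSketch {

  static void writeRange(OzoneBucket bucket, String keyName,
      String rangeHeader, long keyLength, OutputStream out)
      throws IOException {
    // Handle only the simple "bytes=start-end" form; a missing end
    // means "up to the last byte of the key".
    String spec = rangeHeader.substring("bytes=".length());
    String[] parts = spec.split("-", 2);
    long start = Long.parseLong(parts[0]);
    long end = parts[1].isEmpty() ? keyLength - 1 : Long.parseLong(parts[1]);
    long remaining = end - start + 1;

    try (OzoneInputStream in = bucket.readKey(keyName)) {
      in.seek(start); // jump straight to the first requested byte
      byte[] buf = new byte[8192];
      while (remaining > 0) {
        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
        if (n < 0) {
          break; // end of key reached before the requested range ended
        }
        out.write(buf, 0, n);
        remaining -= n;
      }
    }
  }
}
{code}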


