[ 
https://issues.apache.org/jira/browse/HBASE-14307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712608#comment-14712608
 ] 

Shradha Revankar commented on HBASE-14307:
------------------------------------------

Yes, but the positional read api is not even guaranteed to read as many bytes 
as 'size'. Shouldn't there be a loop that reads until at least 'size' bytes 
have been returned?

We tried running HBase with WebHdfsFileSystem (against a server implementation 
that sets the Transfer-Encoding HTTP header to chunked, so no Content-Length 
is present). The positional read api reads only the first chunk, which is far 
less than the requested size; without a loop, the rest of the bytes are never 
read. We ended up getting errors like this:

Caused by: java.io.IOException: Positional read of 16425 bytes failed at offset 4132767 (returned 26)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1322)
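The fix being asked for is the standard read-until-full loop. A minimal sketch 
below illustrates it; PositionalReader is a hypothetical stand-in for the 
positional read signature of Hadoop's FSDataInputStream 
(read(long position, byte[] buffer, int offset, int length)), which, like 
POSIX pread, may legitimately return fewer bytes than requested:

```java
import java.io.IOException;

// Hypothetical stand-in for the positional read method of
// org.apache.hadoop.fs.FSDataInputStream, which may return fewer
// bytes than requested without that being an error.
interface PositionalReader {
    int read(long position, byte[] buffer, int offset, int length) throws IOException;
}

public class PreadLoop {
    // Loop until 'size' bytes have been read, instead of treating a
    // short read as a failure. Only EOF (-1) before 'size' bytes
    // have arrived is a genuine error.
    static void readFully(PositionalReader in, long position,
                          byte[] buf, int offset, int size) throws IOException {
        int done = 0;
        while (done < size) {
            int n = in.read(position + done, buf, offset + done, size - done);
            if (n < 0) {
                throw new IOException("Premature EOF at offset " + (position + done));
            }
            done += n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        // Simulated short-reading source: returns at most 26 bytes per
        // call, mimicking the first chunk of a chunked WebHDFS response.
        PositionalReader shortReader = (pos, buf, off, len) -> {
            int n = Math.min(len, 26);
            System.arraycopy(data, (int) pos, buf, off, n);
            return n;
        };
        byte[] out = new byte[100];
        readFully(shortReader, 0, out, 0, 100);
        System.out.println(out[99]); // prints 99: all 100 bytes arrived
    }
}
```

With a loop like this, a source that hands back 26 bytes at a time (as in the 
stack trace above) still fills the full 16425-byte request instead of failing.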


> Incorrect use of positional read api in HFileBlock
> --------------------------------------------------
>
>                 Key: HBASE-14307
>                 URL: https://issues.apache.org/jira/browse/HBASE-14307
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Shradha Revankar
>            Priority: Minor
>
> Considering that {{read()}} is not guaranteed to read all bytes, 
> I'm interested to understand this particular piece of code and why a partial 
> read is treated as an error:
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java#L1446-L1450
> In particular, if HBase were used with a different filesystem, say 
> WebHdfsFileSystem, this would not work. Please also see 
> https://issues.apache.org/jira/browse/HDFS-8943 for discussion around this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
