[ https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13109160#comment-13109160 ]

Todd Lipcon commented on HDFS-2246:
-----------------------------------

bq. How do you assure relatively bounded number of blocks? Not sure what the HBase usecase here is

Typically an HBase RS is responsible for <1TB or so of data, so it's likely 
bounded to the tens of thousands of blocks on the high end. I'd imagine the 
memory usage for ~10K blocks would be pretty trivial (a small number of MB). So 
the OOME you saw is definitely concerning. Maybe we're better off using a 
bounded cache size instead of soft refs?
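For illustration only, a minimal sketch of the bounded-cache alternative, 
assuming the cached values are per-block handles keyed by block ID; the class 
and parameter names here are hypothetical, not from any attached patch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a size-bounded LRU cache in place of
// SoftReference-based caching. Memory stays proportional to maxEntries
// instead of depending on GC pressure the way soft refs do.
public class BoundedBlockCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedBlockCache(int maxEntries) {
        // accessOrder = true makes iteration order least-recently-used
        // first, which is what removeEldestEntry evicts.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the LRU entry
        // once the configured bound is exceeded.
        return size() > maxEntries;
    }
}
```

With a bound sized for ~10K blocks, the worst-case footprint is fixed up 
front rather than discovered at OOME time.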

> Shortcut a local client reads to a Datanodes files directly
> -----------------------------------------------------------
>
>                 Key: HDFS-2246
>                 URL: https://issues.apache.org/jira/browse/HDFS-2246
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Sanjay Radia
>         Attachments: 0001-HDFS-347.-Local-reads.patch, HDFS-2246.20s.1.patch, 
> HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.patch, 
> localReadShortcut20-security.2patch
>
>


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
