Todd Lipcon created HDFS-4418:
---------------------------------

             Summary: HDFS-347: increase default FileInputStreamCache size
                 Key: HDFS-4418
                 URL: https://issues.apache.org/jira/browse/HDFS-4418
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon


The FileInputStreamCache currently defaults to holding only 10 input stream 
pairs (corresponding to 10 blocks). In many HBase workloads, the region server 
issues random reads against a local file that is 2-4GB or even larger (hence 
20+ blocks).
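
For context, a quick back-of-the-envelope check of that block count. This is a 
minimal sketch only; the 128 MB block size is an assumption, since the issue 
does not state the block size of these files.

    public class BlockCountCheck {
      public static void main(String[] args) {
        // Assumed default block size of 128 MB; not stated in this issue.
        long blockSize = 128L << 20;
        long fileSize = 4L << 30;  // a 4 GB region file
        long blocks = (fileSize + blockSize - 1) / blockSize;
        System.out.println(blocks + " blocks");  // 32, well beyond the 10-entry cache
      }
    }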

Given that the memory usage for caching these input streams is low, and that 
applications like HBase already tend to raise their ulimit -n substantially 
(e.g. up to 32,000), I think we should raise the default cache size to 50 or 
more. In the rare case that an application uses local reads with hundreds of 
open blocks and can't feasibly raise its ulimit -n, it can lower the cache 
size appropriately.
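
A minimal sketch of how a client could override the cache size, assuming the 
setting is exposed through the dfs.client.read.shortcircuit.streams.cache.size 
property (the exact key name is an assumption here, not confirmed by this 
issue):

    import org.apache.hadoop.conf.Configuration;

    public class RaiseStreamCacheSize {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed property key for the FileInputStreamCache capacity: bump it
        // from the default of 10 stream pairs to 50 so random reads across
        // 20+ local blocks keep their descriptors cached instead of reopening.
        conf.setInt("dfs.client.read.shortcircuit.streams.cache.size", 50);
        // Any DFSClient/FileSystem built from this conf would then pick up
        // the larger cache.
      }
    }

Since each cached entry is just a pair of open streams, the main cost of a 
larger default is file descriptors counted against ulimit -n rather than heap.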

