[ 
https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671701#comment-13671701
 ] 

Jean-Daniel Cryans commented on HBASE-8635:
-------------------------------------------

I agree that prefetching should be checked in checkForClusterFreeMemoryLimit(), 
but one thing that worries me is all those users who already use 80% of their 
heap for the block cache and MemStores: they won't be able to restart HBase 
once they upgrade, because they'd then be at 90% with prefetching.

The next thing that worries me is that it's not clear to me that prefetching 
needs to scale with the amount of memory given to HBase. 10% of 1GB is ~100MB, 
of 10GB it's 1GB, and of 24GB it's 2.4GB... that's a lot! Realistically, the 
main case I can think of that would do a lot of concurrent long scans is TIF. 
Say your TaskTrackers have 12 map slots, so you have as many scanners, and each 
of them reads batches of 10MB (which is a lot); then you only need 
12 * 10MB = 120MB for prefetching. I also voiced in HBASE-8420 the concern 
that even 256MB seems too big.
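
To make the heap arithmetic above concrete, here is a minimal Java sketch of a 
percentage-based prefetch limit and the kind of combined-fraction check that 
checkForClusterFreeMemoryLimit() performs. The class, method names, and the 0.8 
threshold are illustrative assumptions, not HBase's actual code or config keys.

```java
// Hypothetical sketch (not HBase source): derive the prefetch limit from a
// heap percentage, and check that the block cache + MemStore + prefetch
// fractions together stay under a cluster memory threshold.
public class PrefetchLimitSketch {

    // Assumed combined-fraction ceiling, in the spirit of
    // checkForClusterFreeMemoryLimit(); the real threshold may differ.
    static final double CLUSTER_MEMORY_LIMIT = 0.8;

    /** Prefetch limit in bytes for a given heap size and heap percentage. */
    static long prefetchLimit(long maxHeapBytes, double prefetchPercent) {
        return (long) (maxHeapBytes * prefetchPercent);
    }

    /** True if block cache + MemStore + prefetch fractions fit under the limit. */
    static boolean fitsInHeap(double blockCache, double memstore, double prefetch) {
        return blockCache + memstore + prefetch <= CLUSTER_MEMORY_LIMIT;
    }

    public static void main(String[] args) {
        long tenGb = 10L * 1024 * 1024 * 1024;
        // 10% of a 10GB heap is 1GB, as in the comment above.
        System.out.println(prefetchLimit(tenGb, 0.10)); // 1073741824
        // Users already at 0.4 block cache + 0.4 MemStore are pushed over
        // the 0.8 ceiling by a 0.1 prefetch fraction.
        System.out.println(fitsInHeap(0.4, 0.4, 0.1)); // false
    }
}
```

This illustrates the upgrade concern: the prefetch fraction is added on top of 
existing allocations, so a node that fit under the limit before can fail the 
check after upgrading.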
                
> Define prefetcher.resultsize.max as percentage
> ----------------------------------------------
>
>                 Key: HBASE-8635
>                 URL: https://issues.apache.org/jira/browse/HBASE-8635
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Jimmy Xiang
>            Priority: Minor
>         Attachments: trunk-8635.patch
>
>
> Currently "hbase.hregionserver.prefetcher.resultsize.max" defines the global 
> limit for prefetching.
> The default value is 256MB.
> It would be more flexible to define this limit as a percentage of the heap.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira