[ https://issues.apache.org/jira/browse/PHOENIX-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301255#comment-15301255 ]

Nick Dimiduk commented on PHOENIX-2939:
---------------------------------------

If we want to continue the practice of caching the entire stats table, we 
should dynamically resize the metadata cache so that all the stats can fit into 
the cache.
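To illustrate the idea, here is a minimal sketch of a byte-weighted LRU cache whose capacity can be grown at runtime so that all stats entries fit. The class and method names are hypothetical, not Phoenix's actual MetaDataCache API, and a real implementation would need proper weighing of cache entries:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a byte-bounded LRU cache that can be resized
// at runtime, e.g. to make room for all SYSTEM.STATS rows instead of
// thrashing. Not Phoenix's real metadata cache implementation.
class ResizableMetaCache {
    private long maxBytes;
    private long currentBytes;
    // Access-ordered map gives us LRU eviction order.
    private final LinkedHashMap<String, byte[]> entries =
        new LinkedHashMap<>(16, 0.75f, true);

    ResizableMetaCache(long maxBytes) { this.maxBytes = maxBytes; }

    synchronized void put(String key, byte[] value) {
        byte[] old = entries.put(key, value);
        if (old != null) currentBytes -= old.length;
        currentBytes += value.length;
        evictIfNeeded();
    }

    synchronized byte[] get(String key) { return entries.get(key); }

    // Grow (or shrink) the cache, e.g. after measuring total stats size.
    synchronized void resize(long newMaxBytes) {
        this.maxBytes = newMaxBytes;
        evictIfNeeded();
    }

    synchronized long sizeBytes() { return currentBytes; }

    private void evictIfNeeded() {
        Iterator<Map.Entry<String, byte[]>> it = entries.entrySet().iterator();
        while (currentBytes > maxBytes && it.hasNext()) {
            currentBytes -= it.next().getValue().length;
            it.remove();
        }
    }
}
```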

> MetaCache is easily thrashed with default settings
> --------------------------------------------------
>
>                 Key: PHOENIX-2939
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2939
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Nick Dimiduk
>
> With default settings for {{phoenix.coprocessor.maxMetaDataCacheSize}} (20mb) 
> and {{phoenix.stats.guidepost.width}} (100mb * 3), even a relatively small 
> amount of data (i.e., 10TB, 2000 regions) will easily produce more stats data 
> than fits in the cache. This quickly leads us into a situation where we 
> thrash the cache and spend an inordinate amount of time (re)scanning the 
> SYSTEM.STATS table.
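A quick back-of-the-envelope check of the numbers in the description. The per-row size of SYSTEM.STATS is an assumption here (it depends on key length and column data), but even a modest estimate puts the total stats volume past the default cache size:

```java
long dataBytes = 10L * 1024 * 1024 * 1024 * 1024; // 10 TB of table data
long guidepostWidth = 3L * 100 * 1024 * 1024;     // guidepost width: 100 MB * 3
long guideposts = dataBytes / guidepostWidth;     // ~35,000 guideposts
long assumedRowBytes = 1024;                      // ASSUMPTION: ~1 KB per stats row
long statsBytes = guideposts * assumedRowBytes;   // ~34 MB of stats
long cacheBytes = 20L * 1024 * 1024;              // default maxMetaDataCacheSize: 20 MB
// statsBytes exceeds cacheBytes, so the stats alone overflow the cache.
```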



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
