[ https://issues.apache.org/jira/browse/HBASE-723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12610879#action_12610879 ]

Andrew Purtell commented on HBASE-723:
--------------------------------------

Also while on the subject, there is no way for other parts of the client API 
to invalidate cache entries on demand. While writing a testcase for HBASE-62, 
I observed that table metadata is cached as part of the cached region 
information and is not invalidated by disabling/enabling the table, so to get 
up-to-date metadata the client has to scan .META. directly using the meta 
visitor. Ideally my client-side support for metadata updates 
(HBaseAdmin.updateTableMeta(byte[] table, HTableDescriptor htd)) could call 
into this cache to invalidate the entries for the table, so that the next 
HTable.getTableDescriptor() would go back to .META. and return up-to-date 
information. 
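
A minimal sketch of what such an invalidation hook could look like, assuming 
the cache is keyed first by table and then by region start row. All names 
here (RegionLocationCache, CachedRegion, invalidateTable) are hypothetical, 
not the actual TableServers internals:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: RegionLocationCache, CachedRegion and
// invalidateTable are hypothetical names, not the actual TableServers
// internals.
public class RegionLocationCache {

  /** Stand-in for a cached region location plus its table metadata. */
  public static class CachedRegion {
    final String regionName;
    final String serverAddress;
    public CachedRegion(String regionName, String serverAddress) {
      this.regionName = regionName;
      this.serverAddress = serverAddress;
    }
  }

  // Outer key: table name. Inner key: region start row.
  private final Map<String, Map<String, CachedRegion>> byTable =
    new HashMap<String, Map<String, CachedRegion>>();

  public synchronized void cacheLocation(String table, String startRow,
      CachedRegion location) {
    Map<String, CachedRegion> regions = byTable.get(table);
    if (regions == null) {
      regions = new HashMap<String, CachedRegion>();
      byTable.put(table, regions);
    }
    regions.put(startRow, location);
  }

  public synchronized CachedRegion getCachedLocation(String table,
      String startRow) {
    Map<String, CachedRegion> regions = byTable.get(table);
    return regions == null ? null : regions.get(startRow);
  }

  /**
   * The missing hook: drop every cached entry for one table so the next
   * lookup goes back to .META. for fresh information. A metadata update
   * such as updateTableMeta(table, htd) would call this after writing
   * the new descriptor.
   */
  public synchronized void invalidateTable(String table) {
    byTable.remove(table);
  }
}
{code}

With a hook like this, updateTableMeta could call invalidateTable(table) 
after writing the new descriptor, and the next getTableDescriptor() would 
miss the cache and re-read .META..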

> TableServers's cachedRegionLocation doesn't have size limit.
> ------------------------------------------------------------
>
>                 Key: HBASE-723
>                 URL: https://issues.apache.org/jira/browse/HBASE-723
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 0.1.3
>         Environment: hbase 0.3.0
>            Reporter: Jb Lee
>            Priority: Minor
>
> cachedRegionLocation stores region locations of tables whenever a new 
> region is looked up. However, the entries are deleted only when the 
> TableServers object is closed or locateRegion is called with useCache set 
> to false. Therefore, it seems to grow without limit and can cause an 
> out-of-memory exception. 
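
One hedged sketch of a way to bound the cache described above: make it an 
access-ordered LinkedHashMap that evicts the least-recently-used entry once a 
cap is reached. The cap and the class name are illustrative; TableServers 
does not currently do anything like this.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of bounding the region location cache: an
// access-ordered LinkedHashMap that evicts the least-recently-used
// entry once a cap is reached. The cap and class name are illustrative.
public class BoundedRegionCache<K, V> extends LinkedHashMap<K, V> {

  private final int maxEntries;

  public BoundedRegionCache(int maxEntries) {
    // accessOrder = true makes iteration order least-recently-used
    // first, which is what removeEldestEntry evicts.
    super(16, 0.75f, true);
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Called by LinkedHashMap after each put; returning true evicts
    // the least-recently-used entry, so the map never exceeds the cap.
    return size() > maxEntries;
  }
}
{code}

A hard cap like this is one option; holding the values by soft reference, so 
the JVM can reclaim entries under memory pressure, would be another.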

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
