[ https://issues.apache.org/jira/browse/HBASE-1127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12664581#action_12664581 ]

stack commented on HBASE-1127:
------------------------------

Thanks for the votes, lads.  I'm with you now after my testing.  Either the GC 
is a laggard when it comes to adding released references to the reference queue 
-- probably not -- or the window is narrow and we're just not servicing the 
queue promptly enough.  Looking at the code, I'm not sure how we can be reactive 
enough, not without paying a high cost in monitoring code.
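For reference, a minimal sketch (not our actual cache code; the class and
method names are made up for illustration) of the SoftReference-plus-ReferenceQueue
pattern in question, showing why servicing the queue is inherently reactive:

import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;

public class SoftBlockCache {
  // Subclass SoftReference so a cleared reference can be mapped back to its key.
  private static class KeyedSoftReference extends SoftReference<byte[]> {
    final String key;
    KeyedSoftReference(String key, byte[] block, ReferenceQueue<byte[]> q) {
      super(block, q);
      this.key = key;
    }
  }

  private final ConcurrentHashMap<String, KeyedSoftReference> map =
      new ConcurrentHashMap<String, KeyedSoftReference>();
  private final ReferenceQueue<byte[]> queue = new ReferenceQueue<byte[]>();

  public void put(String key, byte[] block) {
    drainQueue();  // cleanup only happens when someone happens to call in
    map.put(key, new KeyedSoftReference(key, block, queue));
  }

  public byte[] get(String key) {
    drainQueue();
    KeyedSoftReference ref = map.get(key);
    return ref == null ? null : ref.get();  // may be null if the GC cleared it
  }

  // Remove map entries whose referents the GC has already cleared.  The GC
  // enqueues references at its own pace, so being "prompt enough" here means
  // polling constantly -- the monitoring cost mentioned above.
  private void drainQueue() {
    KeyedSoftReference cleared;
    while ((cleared = (KeyedSoftReference) queue.poll()) != null) {
      map.remove(cleared.key, cleared);
    }
  }
}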

Time to start up a smart cache effort.  Erik Holstad has made a start already, I 
believe.  Erik Holstad and Jon Gray's experiments with Soft References found that 
eviction wasn't LRU anyway (the javadoc suggests it as the eviction practice but 
makes no guarantees).
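For illustration only -- this is not Erik's code, just the general direction a
smart cache might take -- a size-bounded LRU built on LinkedHashMap in access
order, so eviction is deterministic and under the cache's control rather than
left to the GC's SoftReference policy:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruBlockCacheSketch extends LinkedHashMap<String, byte[]> {
  private final int maxEntries;  // hypothetical bound; real code would size by bytes

  public LruBlockCacheSketch(int maxEntries) {
    super(16, 0.75f, true);  // accessOrder = true gives LRU iteration order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
    return size() > maxEntries;  // evict the least-recently-used entry
  }
}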

Meantime, I'm undoing blockcache-on-by-default in all but the catalog tables.  Will 
be back after a bit of testing.

> OOME running randomRead PE
> --------------------------
>
>                 Key: HBASE-1127
>                 URL: https://issues.apache.org/jira/browse/HBASE-1127
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>            Priority: Blocker
>             Fix For: 0.19.0
>
>
> Blockcache is misbehaving on TRUNK.  Something is broken.  We OOME about 20% 
> into the randomRead test.  Looking at the heap, it's all soft references.  
> Instrumenting the reference queue, we're never clearing it, even when full 
> GC'ing.  Something is off.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
