[ 
https://issues.apache.org/jira/browse/HBASE-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810774#comment-13810774
 ] 

Dave Latham commented on HBASE-9865:
------------------------------------

There are actually two issues at play here that can each cause heap problems:

1. As Churro notes, each time we read a WALEdit we discard the KVs that don't 
have a proper replication scope set.  We then measure the heap used by that 
WALEdit, but we only count the remaining KVs without accounting for the extra 
ArrayList capacity (which is not currently trimmed).  For this reason we may 
use far more than "replication.source.size.capacity" memory while replicating 
a single batch, even if the WALEdits are all new.  In other words, the 
computed heap usage during a batch is underestimated.
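
To make the mismatch concrete, here is a small standalone sketch (not HBase 
code; it peeks at ArrayList internals via reflection purely for illustration) 
showing that removing most elements from an ArrayList leaves the backing array 
at full capacity, so accounting based only on the surviving elements badly 
under-reports the retained memory:

{code}
import java.lang.reflect.Field;
import java.util.ArrayList;

public class CapacityRetentionDemo {

  // Illustration only: read the length of ArrayList's backing array.
  static int capacityOf(ArrayList<?> list) throws Exception {
    Field f = ArrayList.class.getDeclaredField("elementData");
    f.setAccessible(true);
    return ((Object[]) f.get(list)).length;
  }

  public static void main(String[] args) throws Exception {
    ArrayList<byte[]> kvs = new ArrayList<byte[]>(25000);
    for (int i = 0; i < 25000; i++) {
      kvs.add(new byte[0]);
    }
    // Drop all but a handful of entries, the way removeNonReplicableEdits
    // does for an edit whose families are mostly not scoped for replication.
    kvs.subList(6, kvs.size()).clear();

    System.out.println("surviving entries: " + kvs.size());      // 6
    System.out.println("backing capacity : " + capacityOf(kvs)); // still 25000
    // A heapSize() that only walks the surviving entries sees almost nothing,
    // while ~25000 references (plus the array header) stay on the heap.
  }
}
{code}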

2. As Lars notes, after using the WALEdit entriesArray for one batch we keep 
the entries around to be reused.  Each one contains an ArrayList with capacity 
at least as large as the largest single WALEdit that has occurred in any 
batch.  So HLogs with high variance in the size of the batches written will 
cause each reused WALEdit to grow large over time.  (We saw a region server 
run out of memory today with 4 GB of heap held this way.)  As a result, the 
heap usage of the replication source as a whole is never checked against 
replication.source.size.capacity.
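
The reuse effect is the same mechanism seen from the other side: clearing and 
refilling the same ArrayList batch after batch only ever ratchets its capacity 
up.  A standalone sketch (assumed numbers, same reflective peek as above, not 
the actual ReplicationSource code):

{code}
import java.lang.reflect.Field;
import java.util.ArrayList;

public class ReuseRatchetDemo {

  // Illustration only: read the length of ArrayList's backing array.
  static int capacityOf(ArrayList<?> list) throws Exception {
    Field f = ArrayList.class.getDeclaredField("elementData");
    f.setAccessible(true);
    return ((Object[]) f.get(list)).length;
  }

  public static void main(String[] args) throws Exception {
    // One reused list, standing in for the kvs of a reused WALEdit slot.
    ArrayList<Object> kvs = new ArrayList<Object>();
    int[] editSizes = { 10, 5, 90000, 3, 7, 12 }; // one rare, huge edit
    for (int size : editSizes) {
      kvs.clear();                                // size resets, capacity does not
      for (int i = 0; i < size; i++) {
        kvs.add(new Object());
      }
      System.out.println("filled " + size + " -> capacity " + capacityOf(kvs));
    }
    // After the 90000-entry edit the capacity never drops back, so every later
    // tiny edit still pins ~90000 references in this slot; multiply by
    // thousands of reused slots and the waste adds up quickly.
  }
}
{code}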

For #1 I'd propose calling kvs.trimToSize() in 
ReplicationSource.removeNonReplicableEdits whenever more than X% of the kvs 
are removed.
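
A rough sketch of that idea, written as a generic, runnable helper rather than 
a patch against ReplicationSource (the names and the 50% threshold are 
placeholders):

{code}
import java.util.ArrayList;

public class TrimAfterFilter {

  // Hypothetical helper: after filtering, if more than `threshold` of the
  // original entries were removed, release the excess backing-array capacity
  // so retained memory stays close to what a size-based heapSize() reports.
  static <T> void trimIfMostlyEmptied(ArrayList<T> list, int originalSize,
      double threshold) {
    if (originalSize > 0 && (originalSize - list.size()) > originalSize * threshold) {
      list.trimToSize();
    }
  }

  public static void main(String[] args) {
    ArrayList<String> kvs = new ArrayList<String>(25000);
    for (int i = 0; i < 25000; i++) {
      kvs.add("kv-" + i);
    }
    int originalSize = kvs.size();
    // Simulate removeNonReplicableEdits dropping nearly everything.
    kvs.subList(6, kvs.size()).clear();
    trimIfMostlyEmptied(kvs, originalSize, 0.5); // backing array shrinks to ~6 slots
    System.out.println("entries left: " + kvs.size());
  }
}
{code}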

For #2 I'd propose throwing away the WALEdit reuse altogether unless there are 
some real numbers showing it has a significant benefit.  Each time we read an 
entry we already create all the KeyValue instances and their backing byte 
arrays, which should be more allocation than the WALEdit objects themselves in 
any case.  Trying to keep the ArrayLists reused seems like a recipe for 
getting them tenured and then creating more old-gen GC work.



> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-9865
>                 URL: https://issues.apache.org/jira/browse/HBASE-9865
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.94.5, 0.95.0
>            Reporter: churro morales
>         Attachments: 9865-sample.txt
>
>
> WALEdit.heapSize() is incorrect in certain replication scenarios which may 
> cause RegionServers to go OOM.
> A little background on this issue.  We noticed that our source replication 
> regionservers would get into GC storms and sometimes even OOM. 
> In one case we saw around 25k WALEdits queued to replicate, each one with an 
> ArrayList of KeyValues.  The ArrayList had a capacity of around 90k (about 
> 350 KB of heap just for the reference array: 90,000 references x 4 bytes on 
> a 32-bit JVM) but only around 6 non-null entries.
> When ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a WALEdit 
> it removes all KVs that are not scoped for replication (i.e., scoped local). 
> But in doing so we don't account for the capacity of the ArrayList when 
> determining heapSize for a WALEdit.  The logic for shipping a batch is 
> whether you have hit a size capacity or a number-of-entries capacity.  
> Therefore if we have a WALEdit with 25k entries and suppose all of them are 
> removed: the size of the ArrayList is 0 (we don't even count the 
> collection's own heap size currently) and the capacity is ignored.
> This yields a heapSize() of 0 bytes, while the true footprint is at least 
> 100,000 bytes: 25,000 references x 4 bytes each on a 32-bit JVM (provided 
> the list was created with that initialCapacity).
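> As a back-of-the-envelope illustration of how far the batch logic can then 
> overshoot, a tiny standalone sketch (the budget and per-KV size are made-up 
> numbers, not the real replication.source.size.capacity default; 32-bit 
> references assumed):
> {code}
> public class BatchBudgetDemo {
>   public static void main(String[] args) {
>     final long sizeCapacity = 64 * 1024; // toy batch budget
>     final int survivingKvs = 6;          // KVs left in each edit after filtering
>     final int kvHeapSize = 64;           // assumed heap size per surviving KV
>     final int originalCapacity = 25000;  // untrimmed ArrayList capacity
>
>     long reportedHeap = 0; // what a size-only heapSize() adds up
>     long retainedHeap = 0; // what the batch really pins in memory
>     int edits = 0;
>     while (reportedHeap < sizeCapacity) {
>       reportedHeap += (long) survivingKvs * kvHeapSize;
>       retainedHeap += (long) survivingKvs * kvHeapSize + originalCapacity * 4L;
>       edits++;
>     }
>     System.out.println("edits in batch : " + edits);
>     System.out.println("reported heap  : " + reportedHeap + " bytes");
>     System.out.println("retained heap  : " + retainedHeap + " bytes"); // ~260x more
>   }
> }
> {code}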
> I have some ideas on how to address this problem and want to know everyone's 
> thoughts:
> 1. We use a probabilistic counter such as HyperLogLog and create something 
> like:
>       * class CapacityEstimateArrayList extends ArrayList
>               ** this class overrides all additive methods to update the 
> probabilistic counts
>               ** it includes one additional method, getCapacityEstimate() 
> (we would take getCapacityEstimate() - size() and account for the reference 
> size of each of those empty slots); a stripped-down sketch of this class 
> follows the heapSize code below
>       * Then we can do something like this in WALEdit.heapSize:
>       
> {code}
>   public long heapSize() {
>     long ret = ClassSize.ARRAYLIST;
>     for (KeyValue kv : kvs) {
>       ret += kv.heapSize();
>     }
>     long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
>     ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
>     if (scopes != null) {
>       ret += ClassSize.TREEMAP;
>       ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
>       // TODO this isn't quite right, need help here
>     }
>     return ret;
>   }   
> {code}
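> For reference, here is what that stripped-down CapacityEstimateArrayList 
> sketch could look like.  The probabilistic counter is stubbed out with a 
> plain high-water mark of size(), which is a deterministic lower bound on the 
> real capacity; the HyperLogLog-backed version would replace that field, and 
> the less common additive methods are omitted for brevity:
> {code}
> import java.util.ArrayList;
> import java.util.Collection;
>
> public class CapacityEstimateArrayList<E> extends ArrayList<E> {
>
>   // Stand-in for the probabilistic count: the largest size the list has
>   // reached, which the backing array's capacity can never be below.
>   private int maxSizeSeen;
>
>   @Override
>   public boolean add(E e) {
>     boolean changed = super.add(e);
>     maxSizeSeen = Math.max(maxSizeSeen, size());
>     return changed;
>   }
>
>   @Override
>   public boolean addAll(Collection<? extends E> c) {
>     boolean changed = super.addAll(c);
>     maxSizeSeen = Math.max(maxSizeSeen, size());
>     return changed;
>   }
>
>   @Override
>   public void trimToSize() {
>     super.trimToSize();
>     maxSizeSeen = size(); // capacity really does shrink here
>   }
>
>   // Used by WALEdit.heapSize() above to approximate the empty slots.
>   public int getCapacityEstimate() {
>     return maxSizeSeen;
>   }
> }
> {code}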
> 2. In ReplicationSource.removeNonReplicableEdits() we know the original size 
> of the array and we provide some percentage threshold.  When that threshold 
> is met (say, 50% of the entries have been removed) we can call 
> kvs.trimToSize().
> 3. In the heapSize() method for WALEdit we could use reflection (please 
> don't shoot me for this) to grab the actual capacity of the list, doing 
> something like this:
> {code}
> public int getArrayListCapacity()  {
>     try {
>       Field f = ArrayList.class.getDeclaredField("elementData");
>       f.setAccessible(true);
>       return ((Object[]) f.get(kvs)).length;
>     } catch (Exception e) {
>       log.warn("Exception in trying to get capacity on ArrayList", e);
>       return kvs.size();
>     }
> }
> {code}
> I am partial to (1), using HyperLogLog and creating a 
> CapacityEstimateArrayList; it is reusable throughout the code for other 
> HeapSize implementations that contain ArrayLists.  The memory footprint is 
> very small and it is very fast.  The drawback is that it is an estimate, 
> although we can configure the precision and it will most likely always be 
> conservative: the capacity estimate will always be less than the actual 
> capacity, but it will be close.  I think that putting the logic in 
> removeNonReplicableEdits will work, but this only solves the heapSize problem 
> in this particular scenario.  Solution 3 is slow and horrible, but it gives 
> us the exact answer.
> I would love to hear whether anyone else has other ideas on how to remedy 
> this problem.  I have code for trunk and 0.94 for all 3 ideas and can 
> provide a patch if the community thinks any of these approaches is viable.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
