[ 
https://issues.apache.org/jira/browse/HBASE-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Latham updated HBASE-8806:
-------------------------------

    Attachment: hbase-8806-0.94-v6-microbenchmark
                hbase-8806-0.94-v6-microbenchmark-no-dupe-rows

[~lhofhansl], here are the additional benchmark runs you requested.

I've attached a couple of additional runs of the benchmark against the current 
tip of 0.94, with and without the v6 patch applied.

In each you can see the first run was burning in the VM, but afterward the 
variation in the results gets much smaller.

In particular, with no duplicate row keys, the patch averages 31-32ms to apply 
25k puts in each mini batch, while without the patch it averages 28-29ms.  I'm 
comfortable with that difference given the speedup in all other cases, but I 
will also run the numbers more carefully for the reentrant patch at HBASE-8877.
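
For anyone who wants to reproduce the comparison, a timing loop along these 
lines is enough.  This is only a sketch of the shape of the harness, not the 
attached benchmark itself; NUM_RUNS, BATCH_SIZE and makeBatch(...) are 
illustrative placeholders.

{code}
// Rough shape of the timing loop; not the attached benchmark itself.
// NUM_RUNS, BATCH_SIZE and makeBatch(...) are illustrative placeholders.
static final int NUM_RUNS = 20;
static final int BATCH_SIZE = 25000;

void timeMiniBatches(HRegion region, boolean duplicateRows) throws IOException {
  for (int run = 0; run < NUM_RUNS; run++) {
    // the first run is mostly JIT/cache warm-up, so ignore it when averaging
    Pair<Mutation, Integer>[] batch = makeBatch(BATCH_SIZE, duplicateRows);
    long start = System.currentTimeMillis();
    region.batchMutate(batch);
    System.out.println("run " + run + ": "
        + (System.currentTimeMillis() - start) + " ms");
  }
}
{code}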
                
> Row locks are acquired repeatedly in HRegion.doMiniBatchMutation for 
> duplicate rows.
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-8806
>                 URL: https://issues.apache.org/jira/browse/HBASE-8806
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>    Affects Versions: 0.94.5
>            Reporter: rahul gidwani
>            Priority: Critical
>             Fix For: 0.98.0, 0.95.2, 0.94.10
>
>         Attachments: 8806-0.94-v4.txt, 8806-0.94-v5.txt, 8806-0.94-v6.txt, 
> HBASE-8806-0.94.10.patch, HBASE-8806-0.94.10-v2.patch, 
> HBASE-8806-0.94.10-v3.patch, 
> hbase-8806-0.94-v6-microbenchmark-no-dupe-rows.txt, 
> hbase-8806-0.94-v6-microbenchmark.txt, HBASE-8806.patch, 
> HBASE-8806-threadBasedRowLocks.patch, 
> HBASE-8806-threadBasedRowLocks-v2.patch, row_lock_perf_results.txt
>
>
> If we already hold the row lock in doMiniBatchMutation, we don't need to 
> re-acquire it.  The fix is to keep a set of the row keys already locked for 
> the current mini batch and skip the lock acquisition for any row that is 
> already in that set.
> We have tested this fix in our production environment and it has improved 
> replication performance quite a bit: a replication batch with duplicate row 
> keys went from 3+ minutes to less than 10 seconds.
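> Roughly, the idea inside doMiniBatchMutation looks like the sketch below.  
> It is a simplified illustration only, not the actual patch; batchOp and 
> acquiredLockIds just stand in for the real structures there.
> {code}
> // Simplified sketch of the idea only, not the actual patch.
> // batchOp and acquiredLockIds stand in for the real structures
> // used in doMiniBatchMutation.
> Set<byte[]> rowsAlreadyLocked = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
> List<Integer> acquiredLockIds = new ArrayList<Integer>();
> for (Pair<Mutation, Integer> op : batchOp) {
>   byte[] row = op.getFirst().getRow();
>   if (rowsAlreadyLocked.add(row)) {
>     // first time this row appears in the mini batch: take the row lock
>     acquiredLockIds.add(getLock(op.getSecond(), row, true));
>   }
>   // otherwise the lock is already held for this row, so don't re-acquire it
> }
> {code}
> The test below then checks that only two lock acquisitions happen for a 
> large batch touching two distinct rows: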
> {code}
>   // counts getLock() calls; must not be final since getLock() increments it
>   static int ACQUIRE_LOCK_COUNT = 0;
>
>   @Test
>   public void testRedundantRowKeys() throws Exception {
>     final int batchSize = 100000;
>
>     String tableName = getClass().getSimpleName();
>     Configuration conf = HBaseConfiguration.create();
>     conf.setClass(HConstants.REGION_IMPL, MockHRegion.class, HeapSize.class);
>     MockHRegion region = (MockHRegion) TestHRegion.initHRegion(
>         Bytes.toBytes(tableName), tableName, conf, Bytes.toBytes("a"));
>
>     // build a batch of 100k puts that only ever touch two distinct rows
>     List<Pair<Mutation, Integer>> someBatch = Lists.newArrayList();
>     for (int i = 0; i < batchSize; i++) {
>       if (i % 2 == 0) {
>         someBatch.add(new Pair<Mutation, Integer>(new Put(Bytes.toBytes(0)), null));
>       } else {
>         someBatch.add(new Pair<Mutation, Integer>(new Put(Bytes.toBytes(1)), null));
>       }
>     }
>
>     long startTime = System.currentTimeMillis();
>     region.batchMutate(someBatch.toArray(new Pair[0]));
>     long duration = System.currentTimeMillis() - startTime;
>     System.out.println("duration: " + duration + " ms");
>
>     // with the fix, the two distinct rows should each be locked exactly once
>     assertEquals(2, ACQUIRE_LOCK_COUNT);
>   }
>
>   // override in MockHRegion so the test can count row lock acquisitions
>   @Override
>   public Integer getLock(Integer lockid, byte[] row, boolean waitForLock)
>       throws IOException {
>     ACQUIRE_LOCK_COUNT++;
>     return super.getLock(lockid, row, waitForLock);
>   }
> {code}

