[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409795#comment-16409795 ]

stack edited comment on HBASE-20188 at 3/22/18 4:15 PM:
--------------------------------------------------------

I set in-memory compaction to NONE 
({{hbase.hregion.compacting.memstore.type}}). This helped some w/ the reads, but 
we are still far down from hbase1 with workloadb (95% reads). Looking in the log, I 
can't tell that in-memory compaction is off. I still see logging:
{code}
...
260444 2018-03-21 22:46:35,800 INFO  
[StoreOpener-e2e92d8b2112d5103bbdba11c6164874-1] 
regionserver.CompactingMemStore: Setting in-memory flush size threshold to 
12.80 MB and immutable segments index to type=CHUNK_MAP
260445 2018-03-21 22:46:35,800 DEBUG 
[StoreOpener-e2e92d8b2112d5103bbdba11c6164874-1] regionserver.HStore: Memstore 
type=org.apache.hadoop.hbase.regionserver.CompactingMemStore
260446 2018-03-21 22:46:35,800 INFO  
[StoreOpener-730bdcdd82b4c5ebff04d371061e787f-1] 
regionserver.CompactingMemStore: Setting in-memory flush size threshold to 
12.80 MB and immutable segments index to type=CHUNK_MAP
260447 2018-03-21 22:46:35,801 DEBUG 
[StoreOpener-730bdcdd82b4c5ebff04d371061e787f-1] regionserver.HStore: Memstore 
type=org.apache.hadoop.hbase.regionserver.CompactingMemStore
...
349621 2018-03-22 09:04:40,098 DEBUG 
[regionserver/ve0528:16020-MemStoreChunkPool Statistics] 
regionserver.ChunkCreator: data stats (chunk size=2097152): current pool 
size=92, created chunk count=458, reused chunk count=19139, reuseRatio=97.66%
349622 2018-03-22 09:04:40,099 DEBUG 
[regionserver/ve0528:16020-MemStoreChunkPool Statistics] 
regionserver.ChunkCreator: index stats (chunk size=209715): current pool 
size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
....
{code}
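For reference, here is a minimal sketch (not from this run) of how the NONE policy would typically be set cluster-wide and pinned per column family with the 2.0 client API, then read back to confirm what the master actually stored; the table and family names below are hypothetical.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MemoryCompactionPolicy;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class DisableInMemoryCompaction {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Cluster-wide default (the same property named above); per-family settings override it.
    conf.set("hbase.hregion.compacting.memstore.type", "NONE");

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName table = TableName.valueOf("usertable");   // hypothetical table name
      // Pin the family itself to NONE so the setting is explicit in the table descriptor.
      ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("family"))            // hypothetical family name
          .setInMemoryCompaction(MemoryCompactionPolicy.NONE)
          .build();

      if (!admin.tableExists(table)) {
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
            .setColumnFamily(family)
            .build();
        admin.createTable(desc);
      } else {
        admin.modifyColumnFamily(table, family);
      }

      // Read the descriptor back to confirm the stored in-memory compaction policy.
      for (ColumnFamilyDescriptor cfd : admin.getDescriptor(table).getColumnFamilies()) {
        System.out.println(Bytes.toString(cfd.getName()) + " -> " + cfd.getInMemoryCompaction());
      }
    }
  }
}
{code}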

Let me get a cleaner signal on what in-memory compaction is doing.

So, in-memory compaction at its default setting helps a little with pure loading, but 
with reads in the mix, writes are slightly lower and reads are pulled down some. Let 
me post a graph. We are missing a bunch of read perf in hbase2.



> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: ITBLL2.5B_1.2.7vs2.0.0_cpu.png, 
> ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_MEMSTORE.png, YCSB_OPs.png, 
> YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, flamegraph-1072.1.svg, 
> flamegraph-1072.2.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
