[ 
https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403920#comment-16403920
 ] 

Anastasia Braginsky commented on HBASE-20188:
---------------------------------------------

{quote}With the default memstore, it is just a matter of iterating over a map and
writing cells. But now we have to read from multiple segments in a heap way, and
so there are more compares there.
{quote}
I am not surprised by the performance degradation. I saw flushes that were not
fast enough in all the tests I have done. But I do not think it is due to
multiple segments in the snapshot; I tried running with
hbase.hregion.compacting.pipeline.segments.limit equal to 1 (a single segment
per snapshot) and saw little difference. But you can try it yourself, of course.
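
Just to make the compare-cost point from the quote above concrete: here is a
minimal sketch (plain Java, hypothetical names, not the actual HBase flush
path) of a heap-based k-way merge over N sorted segments. Each emitted cell
pays O(log N) comparator calls, versus a single iterator step when draining one
CSLM map. With the segments limit at 1, N stays small, which matches seeing
little difference from that knob alone.
{code:java}
// Sketch only: why flushing from N sorted segments costs more compares than
// iterating a single map. A heap-based k-way merge pays O(log N) comparator
// calls per emitted element. Names here are illustrative, not HBase internals.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class SegmentMergeSketch {

  /** Merges N already-sorted segments into one sorted stream. */
  static <T extends Comparable<T>> List<T> merge(List<List<T>> segments) {
    // One cursor per segment, ordered by the cursor's current element.
    PriorityQueue<PeekingCursor<T>> heap = new PriorityQueue<>();
    for (List<T> segment : segments) {
      if (!segment.isEmpty()) {
        heap.add(new PeekingCursor<>(segment.iterator()));
      }
    }
    List<T> merged = new ArrayList<>();
    while (!heap.isEmpty()) {
      PeekingCursor<T> top = heap.poll();  // O(log N) compares
      merged.add(top.next());
      if (top.hasNext()) {
        heap.add(top);                     // O(log N) compares again
      }
    }
    return merged;
  }

  /** Iterator wrapper exposing its next element so the heap can order cursors. */
  static final class PeekingCursor<T extends Comparable<T>>
      implements Comparable<PeekingCursor<T>> {
    private final Iterator<T> iterator;
    private T head;

    PeekingCursor(Iterator<T> iterator) {
      this.iterator = iterator;
      this.head = iterator.next();  // caller guarantees a non-empty segment
    }

    boolean hasNext() {
      return head != null;
    }

    T next() {
      T result = head;
      head = iterator.hasNext() ? iterator.next() : null;
      return result;
    }

    @Override
    public int compareTo(PeekingCursor<T> other) {
      return head.compareTo(other.head);
    }
  }
}
{code}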
{quote}I was wondering what your thoughts were regarding our doing MORE
aggressive in-memory compaction, moving Cells from the CSLM to your flat
structures. Could it save on the number of overall compares (and hence CPU)?
Or, if not on compares, overhead from the CSLM itself shows up as a pretty big
CPU user too. What do you reckon?
{quote}
Indeed (as Eshcar said), the only way to do more in-memory compaction and
transfer more cells to immutable segments (out of the CSLM) is to set
hbase.memstore.inmemoryflush.threshold.factor to as small a percentage as
possible. But if you also keep hbase.hregion.compacting.pipeline.segments.limit
at 1, it might end with too many segment merges. We can check this option as
well.
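
To spell that tuning out, a sketch (the property names are the
CompactingMemStore keys discussed above; the values are placeholders for
experimentation, not recommendations):
{code:java}
// Sketch of the aggressive in-memory compaction tuning discussed above.
// Values are illustrative only, not recommendations.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactingMemStoreTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Smaller factor = the active CSLM segment is flushed in-memory sooner,
    // so fewer cells sit in the CSLM at any moment.
    conf.setDouble("hbase.memstore.inmemoryflush.threshold.factor", 0.01);
    // Allowing more segments in the pipeline avoids merging on every
    // in-memory flush; keeping it at 1 together with a tiny factor above
    // forces very frequent segment merges.
    conf.setInt("hbase.hregion.compacting.pipeline.segments.limit", 4);
  }
}
{code}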

 

Bottom line: when I ran the write-only scenario on both SSD and HDD, I saw it
consistently hitting the "global pressure" mark. Is Eshcar's fix for
HBASE-18294 included?
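
(For context, by "global pressure" I mean the RegionServer-wide memstore bound
that blocks writes until flushes catch up. A sketch of the relevant knobs; the
values are, as far as I know, the defaults:)
{code:java}
// The global memstore bounds, as fractions of RegionServer heap. When total
// memstore usage hits the upper bound, writes are blocked and flushes are
// forced. Values believed to be the defaults; shown for context only.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class GlobalMemStoreBounds {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // The lower limit is a fraction OF the upper bound; forced flushing
    // starts here, before writes are blocked.
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
  }
}
{code}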

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: flamegraph-1072.1.svg, flamegraph-1072.2.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster or slower? There is a
> rumor that it is much slower, and that the problem is the asyncwal writing.
> Does in-memory compaction slow us down or speed us up? What happens when you
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
