[ https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741574#comment-15741574 ]
Anastasia Braginsky commented on HBASE-17081:
---------------------------------------------

Hi All!

There was some delay here due to traveling to SFO and giving a small talk there about what we are doing with in-memory flushes and compaction. I am attaching the presentation here; you might be interested in the read-performance graphs at the end.

Hereby (and on RB) I attach the last (really last) patch! :-) I have addressed all the comments on RB. Since I know you are not notified of my answers there, let me take this opportunity to encourage you to take a look at them.

The important difference in the last patch is that the composite snapshot is now always enabled (both for IC and DC). This is because we have seen a great improvement in read latencies after combining DC with the composite snapshot as well. Any other changes can go into a different JIRA; please commit this one!

Thanks,
Anastasia

> Flush the entire CompactingMemStore content to disk
> ---------------------------------------------------
>
>                 Key: HBASE-17081
>                 URL: https://issues.apache.org/jira/browse/HBASE-17081
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anastasia Braginsky
>            Assignee: Anastasia Braginsky
>         Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch,
>                      HBASE-17081-V03.patch, HBASE-17081-V04.patch,
>                      HBASE-17081-V05.patch, HBASE-17081-V06.patch,
>                      HBaseMeetupDecember2016-V02.pptx,
>                      Pipelinememstore_fortrunk_3.patch
>
>
> Part of the CompactingMemStore's memory is held by the active segment, and
> the rest is divided among the immutable segments in the compaction pipeline.
> Upon a flush-to-disk request we want to flush all of it to disk, in contrast
> to flushing only the tail of the compaction pipeline.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
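The composite-snapshot behavior described above (on flush-to-disk, drain the active segment together with the whole compaction pipeline, rather than only the pipeline tail) can be sketched as follows. This is a minimal illustration, not HBase's actual API: the class, field, and method names are invented for this sketch, and real segments hold Cells in concurrent skip-list structures rather than plain lists.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a compacting memstore; a "segment" is just a List here.
class CompositeSnapshotSketch {
    private List<String> active = new ArrayList<>();               // mutable active segment
    private final List<List<String>> pipeline = new ArrayList<>(); // immutable segments, oldest first

    void write(String cell) { active.add(cell); }

    // In-memory flush: the active segment becomes immutable and joins the pipeline.
    void inMemoryFlush() {
        pipeline.add(active);
        active = new ArrayList<>();
    }

    // Old behavior: flush only the tail (oldest segment) of the pipeline to disk.
    List<String> snapshotTailOnly() {
        return pipeline.isEmpty() ? new ArrayList<>() : pipeline.remove(0);
    }

    // Composite snapshot: flush the entire pipeline plus the active segment,
    // leaving the memstore empty afterwards.
    List<String> compositeSnapshot() {
        List<String> all = new ArrayList<>();
        for (List<String> seg : pipeline) {
            all.addAll(seg);
        }
        all.addAll(active);
        pipeline.clear();
        active = new ArrayList<>();
        return all;
    }
}
```

With the tail-only snapshot, data still sitting in the active segment and in newer pipeline segments survives a flush; the composite snapshot instead empties everything in one pass, which is what this patch makes the default for both IC and DC.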