[
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15708264#comment-15708264
]
Anastasia Braginsky commented on HBASE-17081:
---------------------------------------------
Thank you [~stack] for your review! Here are my answers.
bq. How does order work in the above? Should we stop if it goes negative? Can
it go negative? We do the above pattern in another place at least in the patch.
Each scanner in the list of scanners has an order number indicating which one
holds the newer data. For example, the active segment has the highest order
(biggest number) because its data is the freshest. Pipeline segments then have
decreasing order from head to tail. If the snapshot is represented as a single
segment, its order is 0. If it is a composite snapshot, then again the order
decreases from head to tail. The order shouldn't go negative, as it is
initialized exactly according to the number of segments in the memstore.
However, I am adding a check for that to be on the safe side. This is not
something new; we had this loop with decreasing order when dealing with
pipeline segments before this patch.
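To illustrate the idea, here is a minimal self-contained sketch (the names
SimpleScanner and assignOrders are hypothetical, not the actual HBase classes)
of assigning decreasing, non-negative order numbers to scanners listed
newest-first:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a segment scanner: higher order == fresher data.
class SimpleScanner {
    final String segment;
    final long order;
    SimpleScanner(String segment, long order) {
        this.segment = segment;
        this.order = order;
    }
}

public class ScannerOrderSketch {
    // Segments are listed newest-first: active, pipeline head..tail,
    // then snapshot segment(s). The order starts at (count - 1) and
    // decreases, so the oldest segment ends at exactly 0.
    static List<SimpleScanner> assignOrders(List<String> segmentsNewestFirst) {
        List<SimpleScanner> scanners = new ArrayList<>();
        long order = segmentsNewestFirst.size() - 1;
        for (String seg : segmentsNewestFirst) {
            // Safety check mirroring the one added in the patch:
            // the order must never go negative.
            assert order >= 0 : "order must never go negative";
            scanners.add(new SimpleScanner(seg, order--));
        }
        return scanners;
    }

    public static void main(String[] args) {
        List<SimpleScanner> scanners = assignOrders(
            List.of("active", "pipeline-head", "pipeline-tail", "snapshot"));
        for (SimpleScanner sc : scanners) {
            System.out.println(sc.segment + " -> " + sc.order);
        }
    }
}
```

Because the initial value is exactly the segment count minus one, the loop can
only reach negative values through a bug elsewhere, which is what the added
check guards against.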
bq. to return new LinkedList<Segment>(segments); No need to park in the local
res variable (This is done in a few places in the patch).
Fixed. In multiple places.
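For clarity, a small before/after sketch of the refactor the review asked for
(method and class names here are hypothetical, chosen only for illustration):

```java
import java.util.LinkedList;
import java.util.List;

public class ReturnDirectly {
    // Before, as flagged in the review: the new list is parked in a
    // local variable before being returned.
    static List<String> copyWithLocal(List<String> segments) {
        List<String> res = new LinkedList<>(segments);
        return res;
    }

    // After the fix: return the new list directly, no intermediate local.
    static List<String> copyDirectly(List<String> segments) {
        return new LinkedList<>(segments);
    }

    public static void main(String[] args) {
        System.out.println(copyDirectly(List.of("a", "b")));
    }
}
```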
bq. Why KeyValueScanner instead of SegmentScanner? CellScanner?
Just to make MemStoreScanner and SegmentScanner be of the same type, as we now
interchange between them. I have taken a look at the CellScanner interface, and
it is a much leaner interface than KeyValueScanner.
> Flush the entire CompactingMemStore content to disk
> ---------------------------------------------------
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
> Issue Type: Sub-task
> Reporter: Anastasia Braginsky
> Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch,
> HBASE-17081-V03.patch, HBASE-17081-V04.patch,
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another
> part is divided between immutable segments in the compacting pipeline. Upon
> flush-to-disk request we want to flush all of it to disk, in contrast to
> flushing only the tail of the compacting pipeline.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)