[ https://issues.apache.org/jira/browse/HBASE-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698510#action_12698510 ]
stack commented on HBASE-1058:
------------------------------
So, this patch schedules a compaction if there are more than threshold store files at flush time. It will not flush until we are back below the threshold? So it holds the flush up in memory until the compaction completes? That makes it more likely the 'block client writes' gate comes down? It means our blocking-writes gate now comes down not only when the memstore is greater than flush size times 2 (the default) but also when the number of store files is greater than the threshold?
Just trying to understand.
J-D and Andrew, could it be that this patch will make us more stable at the cost of slowing the update rate?
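
To check my reading, here is a rough sketch of the gating as I understand it from the patch description. Every name and number in it (FlushGateSketch, FLUSH_SIZE, BLOCK_MULTIPLIER, FILE_THRESHOLD, the polling loop) is made up for illustration; it is not the actual region server code.

{code}
// Illustrative sketch only: shows the two gates discussed above,
// not the real HBase 0.20 implementation or its identifiers.
public class FlushGateSketch {
    static final long FLUSH_SIZE = 64L * 1024 * 1024; // assumed memstore flush size
    static final int  BLOCK_MULTIPLIER = 2;           // "flush size times 2" default gate
    static final int  FILE_THRESHOLD = 7;             // assumed store-file count threshold

    long memstoreSize;   // bytes currently buffered in memory
    int  storeFileCount; // flushed-but-uncompacted files on disk

    /** Checked before servicing a client write. */
    boolean shouldBlockWrites() {
        // Existing gate: memstore has grown past flushSize * multiplier.
        boolean memstoreTooBig = memstoreSize > FLUSH_SIZE * BLOCK_MULTIPLIER;
        // With the patch, flushes are held while files > threshold, so in effect
        // the gate also comes down on too many store files.
        boolean tooManyFiles = storeFileCount > FILE_THRESHOLD;
        return memstoreTooBig || tooManyFiles;
    }

    /** Called when the memstore reaches FLUSH_SIZE. */
    void flush() throws InterruptedException {
        if (storeFileCount > FILE_THRESHOLD) {
            requestCompaction();        // schedule compaction of existing files
            waitUntilBelowThreshold();  // hold the flush; data stays in memory
        }
        writeMemstoreToNewStoreFile();
    }

    // --- stubs standing in for the real region-server machinery ---
    void requestCompaction() { /* enqueue a compaction request for this store */ }

    void waitUntilBelowThreshold() throws InterruptedException {
        while (storeFileCount > FILE_THRESHOLD) {
            Thread.sleep(1000); // poll until compaction brings the count down
        }
    }

    void writeMemstoreToNewStoreFile() {
        storeFileCount++;
        memstoreSize = 0;
    }
}
{code}

If that sketch is roughly right, then holding the flush means the memstore keeps growing while compaction runs, which is where my question about hitting the write-block gate more often comes from.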
> Prevent runaway compactions
> ---------------------------
>
> Key: HBASE-1058
> URL: https://issues.apache.org/jira/browse/HBASE-1058
> Project: Hadoop HBase
> Issue Type: Bug
> Reporter: stack
> Assignee: Andrew Purtell
> Priority: Blocker
> Fix For: 0.20.0
>
> Attachments: hbase-1058-v2.patch, hbase-1058-v3.patch,
> hbase-1058.patch
>
>
> A rabid upload will easily outrun our compaction ability, dropping flushes faster than we can compact them. Fix.