[
https://issues.apache.org/jira/browse/HBASE-3969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13053045#comment-13053045
]
stack commented on HBASE-3969:
------------------------------
You are using this.instance.conf.getInt("hbase.hstore.blockingStoreFiles", 7)
to indicate that we should use the default priority? If so, why not just use
-1 altogether? I think the mention of "hbase.hstore.blockingStoreFiles" will
confuse folks who come along and read this code later.
Something like this:
{code}
Index: org/apache/hadoop/hbase/regionserver/HRegionServer.java
===================================================================
--- org/apache/hadoop/hbase/regionserver/HRegionServer.java (revision 1138255)
+++ org/apache/hadoop/hbase/regionserver/HRegionServer.java (working copy)
@@ -1050,12 +1050,22 @@
    */
   private static class CompactionChecker extends Chore {
     private final HRegionServer instance;
+    private final int majorCompactPriority;
+    private final static int DEFAULT_PRIORITY = -1;
 
     CompactionChecker(final HRegionServer h, final int sleepTime,
         final Stoppable stopper) {
       super("CompactionChecker", sleepTime, h);
       this.instance = h;
       LOG.info("Runs every " + StringUtils.formatTime(sleepTime));
+
+      /* MajorCompactPriority is configurable.
+       * If not set, DEFAULT_PRIORITY (-1) is used and
+       * the compaction runs at the default priority.
+       */
+      this.majorCompactPriority = this.instance.conf.
+        getInt("hbase.regionserver.compactionChecker.majorCompactPriority",
+          DEFAULT_PRIORITY);
     }
 
     @Override
@@ -1065,10 +1075,19 @@
           continue;
         for (Store s : r.getStores().values()) {
           try {
-            if (s.isMajorCompaction() || s.needsCompaction()) {
+            if (s.needsCompaction()) {
               // Queue a compaction. Will recognize if major is needed.
               this.instance.compactSplitThread.requestCompaction(r, s,
-                getName() + " requests major compaction");
+                getName() + " requests compaction");
+            } else if (s.isMajorCompaction()) {
+              if (majorCompactPriority == DEFAULT_PRIORITY) {
+                this.instance.compactSplitThread.requestCompaction(r, s,
+                  getName() + " requests major compaction; use default priority");
+              } else {
+                this.instance.compactSplitThread.requestCompaction(r, s,
+                  getName() + " requests major compaction; use configured priority",
+                  this.majorCompactPriority);
+              }
             }
           } catch (IOException e) {
             LOG.warn("Failed major compaction check on " + r, e);
{code}
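For illustration only, and not part of the patch above: a minimal, self-contained
sketch of the queueing behaviour the configurable priority relies on, assuming (as
in HBase's compaction queue) that the request with the smallest priority value is
taken first. The class and the priority values below are hypothetical.
{code}
import java.util.concurrent.PriorityBlockingQueue;

public class CompactionPrioritySketch {
  // Hypothetical stand-in for a compaction request; HBase orders its real
  // requests by a similar integer priority.
  static class Request implements Comparable<Request> {
    final String name;
    final int priority;
    Request(String name, int priority) {
      this.name = name;
      this.priority = priority;
    }
    @Override
    public int compareTo(Request other) {
      // Smaller value = more urgent (assumed to match the compaction queue).
      return Integer.compare(this.priority, other.priority);
    }
    @Override
    public String toString() {
      return name + " (priority " + priority + ")";
    }
  }

  public static void main(String[] args) throws InterruptedException {
    PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<Request>();
    // Flush-driven compactions keep arriving with a moderate priority...
    queue.put(new Request("compaction, region A", 3));
    queue.put(new Request("compaction, region B", 3));
    // ...while a major compaction queued with an explicitly low value, e.g.
    // hbase.regionserver.compactionChecker.majorCompactPriority=1, jumps ahead
    // instead of waiting behind them, which is the point of the new setting.
    queue.put(new Request("major compaction, region C", 1));
    while (!queue.isEmpty()) {
      System.out.println(queue.take());  // prints region C first, then A and B
    }
  }
}
{code}
With the default of -1 the priority argument is simply not passed, so the
checker behaves exactly as before the patch.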
Does this give you what you want? If so, I'll commit (I can make it work for
branch too).
> Outdated data can not be cleaned in time
> ----------------------------------------
>
> Key: HBASE-3969
> URL: https://issues.apache.org/jira/browse/HBASE-3969
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Affects Versions: 0.90.1, 0.90.2, 0.90.3
> Reporter: zhoushuaifeng
> Fix For: 0.90.4
>
> Attachments: HBASE-3969-solution1-for-branch-v2.patch,
> HBASE-3969-solution1-for-branch-v3.patch,
> HBASE-3969-solution1-for-branch.patch,
> HBASE-3969-solution1-for-trunk-v2.patch,
> HBASE-3969-solution1-for-trunk-v3.patch, HBASE-3969-solution1.patch
>
>
> The compaction checker sends regions to the compaction queue to be compacted, but
> the priority of these regions is too low if they have only a few storefiles.
> Under heavy write throughput the compaction queue will always contain some
> regions with higher priority, so major compaction can be delayed for a long
> time (even a few days), and outdated data cleaning is delayed with it.
> In our test case, we found regions sent to the queue by the major compaction
> checker hanging in the queue for more than 2 days! Scanners on these regions
> could not get valid data for a long time and their leases expired.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira