[
https://issues.apache.org/jira/browse/HBASE-23149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
jackylau updated HBASE-23149:
-----------------------------
Description:
From the regionserver log below we can see that the major compaction is
reported as skipped, yet the region is compacted anyway. Reading the code
shows the logic is incorrect; the problematic line is marked with
"/*** ***/" below:
public boolean shouldPerformMajorCompaction(final Collection<StoreFile> filesToCompact)
    throws IOException {
  if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
    if (filesToCompact.size() == 1) {
      if (sf.isMajorCompaction() && (cfTTL == Long.MAX_VALUE || oldest < cfTTL)) {
        float blockLocalityIndex = sf.getHDFSBlockDistribution().getBlockLocalityIndex(
            RSRpcServices.getHostname(comConf.conf, false));
        if (blockLocalityIndex < comConf.getMinLocalityToForceCompact()) {
          result = true;
        } else {
          LOG.debug("Skipping major compaction of " + regionInfo
              + " because one (major) compacted file only, oldestTime " + oldest
              + "ms is < TTL=" + cfTTL + " and blockLocalityIndex is " + blockLocalityIndex
              + " (min " + comConf.getMinLocalityToForceCompact() + ")");
        }
      } else if (cfTTL != HConstants.FOREVER && oldest > cfTTL) {
        LOG.debug("Major compaction triggered on store " + regionInfo
            + ", because keyvalues outdated; time since last major compaction "
            + (now - lowTimestamp) + "ms");
        result = true;
      }
    } else {
      LOG.debug("Major compaction triggered on store " + regionInfo
          + "; time since last major compaction " + (now - lowTimestamp) + "ms");
    }
    result = true; /*** BUG: this unconditionally forces result to true, even on
                        the "skip" path; the assignment should be moved into the
                        else branch above ***/
  }
  return result;
}
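To make the flaw concrete, here is a minimal standalone sketch (plain booleans
stand in for the HBase internals; the names singleFile and goodLocality are
hypothetical, not from the real method) contrasting the current control flow
with the intended one:

```java
// Simplified model of shouldPerformMajorCompaction's decision, reduced to the
// two branches that matter for this bug.
public class MajorCompactionCheck {

    // Current (buggy) flow: the trailing assignment forces result to true
    // even when the single-file branch decided to skip.
    static boolean buggy(boolean singleFile, boolean goodLocality) {
        boolean result = false;
        if (singleFile) {
            if (!goodLocality) {
                result = true; // force compaction to restore block locality
            }
            // else: "Skipping major compaction ..." is logged, result stays false
        } else {
            // "Major compaction triggered ..." is logged
        }
        result = true; // BUG: unconditionally overrides the skip decision
        return result;
    }

    // Intended flow: result is only set true inside the multi-file branch.
    static boolean fixed(boolean singleFile, boolean goodLocality) {
        boolean result = false;
        if (singleFile) {
            if (!goodLocality) {
                result = true;
            }
        } else {
            result = true;
        }
        return result;
    }

    public static void main(String[] args) {
        // A single, well-localized, already-major-compacted file should be skipped.
        System.out.println(buggy(true, true)); // true  (wrong: compacts anyway)
        System.out.println(fixed(true, true)); // false (correct: skipped)
    }
}
```

This matches the log below: the policy logs "Skipping major compaction" but the
method still returns true, so the CompactionChecker goes on to request a major
compaction of the same region.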
2019-09-27 09:09:35,960 DEBUG [st129,16020,1568236573216_ChoreService_1]
compactions.RatioBasedCompactionPolicy: Skipping major compaction of
100E_POINT_point_2ddata_z3_geom_GpsTime_v6,\x17,1568215725799.413a563092544e8df480fd601b2de71b.
because one (major) compacted file only, oldestTime 3758085589ms is <
TTL=9223372036854775807 and blockLocalityIndex is 1.0 (min 0.0)
2019-09-27 09:09:35,961 DEBUG [st129,16020,1568236573216_ChoreService_1]
compactions.SortedCompactionPolicy: Selecting compaction from 1 store files, 0
compacting, 1 eligible, 100 blocking
2019-09-27 09:09:35,961 DEBUG [st129,16020,1568236573216_ChoreService_1]
regionserver.HStore: 413a563092544e8df480fd601b2de71b - d: Initiating major
compaction (all files)
2019-09-27 09:09:35,961 DEBUG [st129,16020,1568236573216_ChoreService_1]
regionserver.CompactSplitThread: Large Compaction requested:
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext@4b5582f1;
Because: CompactionChecker requests major compaction; use default priority;
compaction_queue=(1:0), split_queue=0, merge_queue=0
2019-09-27 09:09:35,961 INFO
[regionserver/st129/10.3.72.129:16020-longCompactions-1568236575579]
regionserver.HRegion: Starting compaction on d in region
100E_POINT_point_2ddata_z3_geom_GpsTime_v6,\x17,1568215725799.413a563092544e8df480fd601b2de71b.
2019-09-27 09:09:35,961 INFO
[regionserver/st129/10.3.72.129:16020-longCompactions-1568236575579]
regionserver.HStore: Starting compaction of 1 file(s) in d of
100E_POINT_point_2ddata_z3_geom_GpsTime_v6,\x17,1568215725799.413a563092544e8df480fd601b2de71b.
into
tmpdir=hdfs://st129:8020/hbase/data/default/100E_POINT_point_2ddata_z3_geom_GpsTime_v6/413a563092544e8df480fd601b2de71b/.tmp,
totalSize=5.1 G
2019-09-27 09:09:35,961 DEBUG
[regionserver/st129/10.3.72.129:16020-longCompactions-1568236575579]
compactions.Compactor: Compacting
hdfs://st129:8020/hbase/data/default/100E_POINT_point_2ddata_z3_geom_GpsTime_v6/413a563092544e8df480fd601b2de71b/d/3b4080f9b6f149e1b0a476058c8564e6,
keycount=83914030, bloomtype=NONE, size=5.1 G, encoding=FAST_DIFF,
compression=SNAPPY, seqNum=2621061, earliestPutTs=1565788490371
> hbase shouldPerformMajorCompaction logic is not correct
> -------------------------------------------------------
>
> Key: HBASE-23149
> URL: https://issues.apache.org/jira/browse/HBASE-23149
> Project: HBase
> Issue Type: Bug
> Components: Compaction
> Affects Versions: 1.4.9
> Reporter: jackylau
> Priority: Major
> Fix For: 1.4.11
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)