Lars George created HBASE-18526:
-----------------------------------

             Summary: FIFOCompactionPolicy pre-check uses wrong scope
                 Key: HBASE-18526
                 URL: https://issues.apache.org/jira/browse/HBASE-18526
             Project: HBase
          Issue Type: Bug
          Components: master
    Affects Versions: 1.3.1
            Reporter: Lars George


See https://issues.apache.org/jira/browse/HBASE-14468

That issue added the following checks to {{HMaster.checkCompactionPolicy()}}:

{code}
// 1. Check TTL
if (hcd.getTimeToLive() == HColumnDescriptor.DEFAULT_TTL) {
  message = "Default TTL is not supported for FIFO compaction";
  throw new IOException(message);
}

// 2. Check min versions
if (hcd.getMinVersions() > 0) {
  message = "MIN_VERSION > 0 is not supported for FIFO compaction";
  throw new IOException(message);
}

// 3. Check blocking file count; blockingFileCount is initialized from the
// cluster configuration earlier in the method, and only the table-level
// (HTD) value can override it here
String sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
if (sbfc != null) {
  blockingFileCount = Integer.parseInt(sbfc);
}
if (blockingFileCount < 1000) {
  message = "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' "
      + blockingFileCount + " is below recommended minimum of 1000";
  throw new IOException(message);
}
{code}

Why does it check the blocking file count only at the HTD (table) level,
while the other checks are performed at the HCD (column family) level?
As a consequence, the following, for example, fails:

{noformat}
hbase(main):008:0> create 'ttltable', { NAME => 'cf1', TTL => 300,
  CONFIGURATION => { 'hbase.hstore.defaultengine.compactionpolicy.class' =>
  'org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy',
  'hbase.hstore.blockingStoreFiles' => 2000 } }

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: blocking file count
'hbase.hstore.blockingStoreFiles' 10 is below recommended minimum of 1000
Set hbase.table.sanity.checks to false at conf or table descriptor if you
want to bypass sanity checks
  at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1782)
  at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1663)
  at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1545)
  at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
  at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58549)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.io.IOException: blocking file count
'hbase.hstore.blockingStoreFiles' 10 is below recommended minimum of 1000
  at org.apache.hadoop.hbase.master.HMaster.checkCompactionPolicy(HMaster.java:1773)
  at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1661)
  ... 7 more
{noformat}

Note that the error reports a blocking file count of 10, the value coming
from the cluster configuration, even though the column family sets it to
2000: the HCD-level {{CONFIGURATION}} is never consulted. The check should
be performed at the column family level instead, falling back to the table
level, in line with the TTL and MIN_VERSIONS checks; see the sketch below.
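
For illustration only, here is a minimal sketch of what the corrected check
could look like (hypothetical, not a committed patch; it reuses the variable
names from the snippet above): read the value from the column family's
{{CONFIGURATION}} first and fall back to the table descriptor:

{code}
// Hypothetical sketch, not the committed fix: prefer the column family
// (HCD) setting, then fall back to the table (HTD) setting.
String sbfc = hcd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
if (sbfc == null) {
  sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
}
if (sbfc != null) {
  blockingFileCount = Integer.parseInt(sbfc);
}
if (blockingFileCount < 1000) {
  message = "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' "
      + blockingFileCount + " is below recommended minimum of 1000";
  throw new IOException(message);
}
{code}

With a lookup like this, the {{create}} statement above would pass the
sanity check, since the column family sets
{{hbase.hstore.blockingStoreFiles}} to 2000.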


