Re: Bug in FIFOCompactionPolicy pre-checks?
Cool, thanks Vlad. Filed https://issues.apache.org/jira/browse/HBASE-18526

On Fri, Aug 4, 2017 at 7:53 PM, Vladimir Rodionov wrote:
> Yes, file a JIRA, Lars
>
> I will take a look
>
> -Vlad
Re: Bug in FIFOCompactionPolicy pre-checks?
Yes, file a JIRA, Lars

I will take a look

-Vlad
Bug in FIFOCompactionPolicy pre-checks?
Hi,

See https://issues.apache.org/jira/browse/HBASE-14468

It adds this check to {{HMaster.checkCompactionPolicy()}}:

{code}
// 1. Check TTL
if (hcd.getTimeToLive() == HColumnDescriptor.DEFAULT_TTL) {
  message = "Default TTL is not supported for FIFO compaction";
  throw new IOException(message);
}

// 2. Check min versions
if (hcd.getMinVersions() > 0) {
  message = "MIN_VERSION > 0 is not supported for FIFO compaction";
  throw new IOException(message);
}

// 3. blocking file count
String sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
if (sbfc != null) {
  blockingFileCount = Integer.parseInt(sbfc);
}
if (blockingFileCount < 1000) {
  message =
    "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' "
      + blockingFileCount + " is below recommended minimum of 1000";
  throw new IOException(message);
}
{code}

Why does it only check the blocking file count on the HTD level, while the others are checked on the HCD level? For example, the following fails because of it:

{noformat}
hbase(main):008:0> create 'ttltable', { NAME => 'cf1', TTL => 300,
  CONFIGURATION => { 'hbase.hstore.defaultengine.compactionpolicy.class'
  => 'org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy',
  'hbase.hstore.blockingStoreFiles' => 2000 } }

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: blocking file
count 'hbase.hstore.blockingStoreFiles' 10 is below recommended
minimum of 1000 Set hbase.table.sanity.checks to false at conf or
table descriptor if you want to bypass sanity checks
  at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1782)
  at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1663)
  at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1545)
  at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
  at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58549)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.io.IOException: blocking file count 'hbase.hstore.blockingStoreFiles' 10 is below recommended minimum of 1000
  at org.apache.hadoop.hbase.master.HMaster.checkCompactionPolicy(HMaster.java:1773)
  at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1661)
  ... 7 more
{noformat}

That should work on the column family level, right? Shall I file a JIRA?

Cheers,
Lars
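The asymmetry described in the thread is that the check only calls htd.getConfigurationValue() for the blocking-file-count key, so a value set in a column family's CONFIGURATION map is invisible and the default (10) trips the sanity check. The snippet below is only a sketch of the lookup order a fix might use, with plain Maps standing in for the HColumnDescriptor/HTableDescriptor APIs; the class and method names are illustrative, not the actual patch.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: consult the column-family (HCD) configuration first, then fall
// back to the table-level (HTD) configuration, then to the default, when
// resolving hbase.hstore.blockingStoreFiles for the FIFO pre-check.
public class BlockingFilesLookup {
  static final String BLOCKING_STOREFILES_KEY = "hbase.hstore.blockingStoreFiles";
  static final int DEFAULT_BLOCKING_STOREFILES = 10; // HBase default

  static int blockingFileCount(Map<String, String> hcdConf, Map<String, String> htdConf) {
    String v = hcdConf.get(BLOCKING_STOREFILES_KEY); // CF-level setting wins
    if (v == null) {
      v = htdConf.get(BLOCKING_STOREFILES_KEY);      // fall back to table level
    }
    return v != null ? Integer.parseInt(v) : DEFAULT_BLOCKING_STOREFILES;
  }

  public static void main(String[] args) {
    // Mirrors the failing shell example: the key is set only in the
    // column family's CONFIGURATION block.
    Map<String, String> hcd = new HashMap<>();
    Map<String, String> htd = new HashMap<>();
    hcd.put(BLOCKING_STOREFILES_KEY, "2000");
    System.out.println(blockingFileCount(hcd, htd)); // prints 2000
  }
}
```

With this lookup order, the 2000 set at the column-family level would be seen by the check and the create would pass the >= 1000 requirement.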