[
https://issues.apache.org/jira/browse/HBASE-20662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16498459#comment-16498459
]
Nihal Jain edited comment on HBASE-20662 at 6/1/18 7:41 PM:
------------------------------------------------------------
{quote}Master performing enabling/disabling of the table seems to be a cleaner
solution instead of all the RSes trying to do the same.
{quote}
Right, but I think we should create another JIRA to handle that, since the root
cause of this issue is not multiple RSes submitting the enable request
concurrently; the problem is that the enable method does not check whether the
quota is still in violation before throwing the exception.
That said, I can handle this change as part of this JIRA too, if required.
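For illustration, a minimal sketch of the check described above, assuming a hypothetical {{getCurrentSpaceQuotaSnapshot()}} accessor on the master side; this is only the shape of the idea, not the actual HMaster code:
{code:java}
// Sketch only: before rejecting an enable request for a quota-disabled table,
// consult the master's latest view of the table's space quota state.
// getCurrentSpaceQuotaSnapshot() is a hypothetical accessor, not an existing method.
SpaceQuotaSnapshot snapshot = getCurrentSpaceQuotaSnapshot(tableName);
boolean stillInViolation =
    snapshot != null && snapshot.getQuotaStatus().isInViolation();
if (stillInViolation) {
  // The table really is still over quota: keep refusing the enable.
  throw new AccessDeniedException("Enabling the table '" + tableName
      + "' is disallowed due to a violated space quota.");
}
// Quota no longer in violation (e.g. the limit was raised): let the enable proceed.
{code}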
was (Author: nihaljain.cs):
{quote}Master performing enabling/disabling of the table seems to be a cleaner
solution instead of all the RSes trying to do the same.
{quote}
Right, but I think we should create another JIRA to handle that; as the root
cause of this issue is not multiple RSs submitting the enable request
concurrently, but the problem is enable method not checking whether quota is in
violation before throwing exception.
> Increasing space quota on a violated table does not remove SpaceViolationPolicy.DISABLE enforcement
> ---------------------------------------------------------------------------------------------------
>
> Key: HBASE-20662
> URL: https://issues.apache.org/jira/browse/HBASE-20662
> Project: HBase
> Issue Type: Bug
> Reporter: Nihal Jain
> Assignee: Nihal Jain
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20662.master.001.patch
>
>
> *Steps to reproduce*
> * Create a table and set a space quota with {{SpaceViolationPolicy.DISABLE}} and a limit of, say, 2 MB (a hedged sketch of the corresponding Admin API calls follows the test snippet below)
> * Put rows until the space quota is violated and the table gets disabled
> * Next, increase the space quota on the table to a higher limit, say 4 MB
> * Now try putting a row into the table
> {code:java}
> private void testSetQuotaThenViolateAndFinallyIncreaseQuota() throws Exception {
>   SpaceViolationPolicy policy = SpaceViolationPolicy.DISABLE;
>   Put put = new Put(Bytes.toBytes("to_reject"));
>   put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), Bytes.toBytes("to"),
>       Bytes.toBytes("reject"));
>   // Do puts until we violate space policy
>   final TableName tn = writeUntilViolationAndVerifyViolation(policy, put);
>   // Now, increase limit
>   setQuotaLimit(tn, policy, 4L);
>   // Put some row now: should not violate as quota limit increased
>   verifyNoViolation(policy, tn, put);
> }
> {code}
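> For context, a minimal sketch of how the quota in the steps above can be set and then raised through the public Admin API ({{QuotaSettingsFactory.limitTableSpace}} / {{Admin.setQuota}}); the test uses internal helpers such as {{setQuotaLimit}}, so the calls below are illustrative rather than the helper's implementation, and {{admin}}/{{tn}} are assumed to be available:
> {code:java}
> // Illustrative only: set a 2 MB DISABLE-policy space quota, then raise it to 4 MB.
> // 'admin' is an org.apache.hadoop.hbase.client.Admin, 'tn' is the TableName under test.
> QuotaSettings initial = QuotaSettingsFactory.limitTableSpace(
>     tn, 2L * 1024 * 1024, SpaceViolationPolicy.DISABLE);
> admin.setQuota(initial);
> // ... write until the quota is violated and the DISABLE policy disables the table ...
> QuotaSettings increased = QuotaSettingsFactory.limitTableSpace(
>     tn, 4L * 1024 * 1024, SpaceViolationPolicy.DISABLE);
> admin.setQuota(increased);
> {code}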
> *Expected*
> We should be able to put data as long as the newly set quota limit is not reached.
> *Actual*
> We fail to put any new row even after increasing the limit.
> *Root cause*
> Increasing the quota on a violated table triggers an attempt to re-enable the table, but since the table is still marked as being in violation, the master does not allow it to be enabled (presumably assuming that a user is trying to enable it manually).
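> A rough sketch of the failing path, matching the exception trace below; the surrounding plumbing is simplified and the variable names ({{connection}}, {{tableName}}) are assumptions: the region server's quota chore tries to lift the DISABLE policy by re-enabling the table, and the master rejects the request.
> {code:java}
> // Sketch only: what the region server effectively does when the refreshed quota
> // snapshot says the table is no longer in violation.
> try (Admin admin = connection.getAdmin()) {   // 'connection' assumed available
>   admin.enableTable(tableName);               // rejected by MasterRpcServices.enableTable
> } catch (AccessDeniedException e) {
>   // "Enabling the table '...' is disallowed due to a violated space quota."
>   // The table therefore stays disabled even though the limit was raised.
> }
> {code}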
> *Relevant exception trace*
> {noformat}
> 2018-05-31 00:34:27,563 INFO [regionserver/root1-ThinkPad-T440p:0.Chore.1] client.HBaseAdmin$14(844): Started enable of testSetQuotaAndThenIncreaseQuotaWithDisable0
> 2018-05-31 00:34:27,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=42525] ipc.CallRunner(142): callId: 11 service: MasterService methodName: EnableTable size: 104 connection: 127.0.0.1:38030 deadline: 1527707127568, exception=org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the table 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a violated space quota.
> 2018-05-31 00:34:27,571 ERROR [regionserver/root1-ThinkPad-T440p:0.Chore.1] quotas.RegionServerSpaceQuotaManager(210): Failed to disable space violation policy for testSetQuotaAndThenIncreaseQuotaWithDisable0. This table will remain in violation.
> org.apache.hadoop.hbase.security.AccessDeniedException: org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the table 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a violated space quota.
>     at org.apache.hadoop.hbase.master.HMaster$6.run(HMaster.java:2275)
>     at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
>     at org.apache.hadoop.hbase.master.HMaster.enableTable(HMaster.java:2258)
>     at org.apache.hadoop.hbase.master.MasterRpcServices.enableTable(MasterRpcServices.java:725)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>     at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
>     at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
>     at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:348)
>     at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
>     at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
>     at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3061)
>     at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3053)
>     at org.apache.hadoop.hbase.client.HBaseAdmin.enableTableAsync(HBaseAdmin.java:839)
>     at org.apache.hadoop.hbase.client.HBaseAdmin.enableTable(HBaseAdmin.java:833)
>     at org.apache.hadoop.hbase.quotas.policies.DisableTableViolationPolicyEnforcement.disable(DisableTableViolationPolicyEnforcement.java:62)
>     at org.apache.hadoop.hbase.quotas.RegionServerSpaceQuotaManager.disableViolationPolicyEnforcement(RegionServerSpaceQuotaManager.java:208)
>     at org.apache.hadoop.hbase.quotas.SpaceQuotaRefresherChore.chore(SpaceQuotaRefresherChore.java:110)
>     at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException): org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the table 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a violated space quota.
>     at org.apache.hadoop.hbase.master.HMaster$6.run(HMaster.java:2275)
>     at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
>     at org.apache.hadoop.hbase.master.HMaster.enableTable(HMaster.java:2258)
>     at org.apache.hadoop.hbase.master.MasterRpcServices.enableTable(MasterRpcServices.java:725)
>     at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
>     at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
>     at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
>     at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:161)
>     at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:191)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>     at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
>     at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>     at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>     at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>     at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>     at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
>     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:801)
>     at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404)
>     at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
>     at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
>     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
>     ... 1 more
> {noformat}