[jira] [Updated] (HBASE-27528) log duplication issues in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27528: Description: MasterRpcServices records audit logs in privileged operations (grant, revoke) and in vital APIs like "execMasterService". {code:java} public RevokeResponse revoke(RpcController controller, RevokeRequest request) throws ServiceException { try { .. server.cpHost.preRevoke(userPermission); // has audit log in AccessChecker .. // removeUserPermission User caller = RpcServer.getRequestUser().orElse(null); if (AUDITLOG.isTraceEnabled()) { // audit log should record all permission changes String remoteAddress = RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); AUDITLOG.trace("User {} (remote address: {}) revoked permission {}", caller, remoteAddress, userPermission); } .. } {code} But I found a path from *server.cpHost.preRevoke(userPermission);* to the {*}AccessChecker audit log{*}, which causes {*}log duplication{*}: *_+grant/revoke -> AccessController.preGrant/Revoke -> preGrantOrRevoke -> AccessChecker.requireGlobalPermission/... -> logResult+_* {code:java} public void requireGlobalPermission(User user, String request, Action perm, String namespace) throws IOException { AuthResult authResult; if (authManager.authorizeUserGlobal(user, perm)) { authResult = AuthResult.allow(request, "Global check allowed", user, perm, null); authResult.getParams().setNamespace(namespace); logResult(authResult); } else { authResult = AuthResult.deny(request, "Global check failed", user, perm, null); authResult.getParams().setNamespace(namespace); logResult(authResult); throw new AccessDeniedException( "Insufficient permissions for user '" + (user != null ? user.getShortName() : "null") + "' (global, action=" + perm.toString() + ")"); } } public static void logResult(AuthResult result) { if (AUDITLOG.isTraceEnabled()) { User user = result.getUser(); UserGroupInformation ugi = user != null ?
user.getUGI() : null; AUDITLOG.trace( "Access {} for user {}; reason: {}; remote address: {}; request: {}; context: {};" + "auth method: {}", (result.isAllowed() ? "allowed" : "denied"), (user != null ? user.getShortName() : "UNKNOWN"), result.getReason(), RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""), result.getRequest(), result.toContextString(), ugi != null ? ugi.getAuthenticationMethod() : "UNKNOWN"); } } {code} Since *AccessChecker* already integrates audit logging for permission checks, I'll delete the duplicate log in MasterRpcServices. There may be more problems like this; I'll check later and commit the code. -There are many "write" operations like "deleteTable" that may cause security problems and should also record an audit log.- was: MasterRpcServices record audit log in privileged operations (grant, revoke) and vital apis like "execMasterService". {code:java} public RevokeResponse revoke(RpcController controller, RevokeRequest request) throws ServiceException { try { .. server.cpHost.preRevoke(userPermission); // has audit log in AccessChecker .. // removeUserPermission User caller = RpcServer.getRequestUser().orElse(null); if (AUDITLOG.isTraceEnabled()) { // audit log should record all permission changes String remoteAddress = RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); AUDITLOG.trace("User {} (remote address: {}) revoked permission {}", caller, remoteAddress, userPermission); } .. } {code} but I found a path from *server.cpHost.preRevoke(userPermission);* to {*}AccessChecker audit log{*}, which caused {*}log duplication{*}.
{code:java} public void requireGlobalPermission(User user, String request, Action perm, String namespace) throws IOException { AuthResult authResult; if (authManager.authorizeUserGlobal(user, perm)) { authResult = AuthResult.allow(request, "Global check allowed", user, perm, null); authResult.getParams().setNamespace(namespace); logResult(authResult); } else { authResult = AuthResult.deny(request, "Global check failed", user, perm, null); authResult.getParams().setNamespace(namespace); logResult(authResult); throw new AccessDeniedException( "Insufficient permissions for user '" + (user != null ? user.getShortName() : "null") + "' (global, action=" + perm.toString() + ")"); } } public static void logResult(AuthResult result) { if (AUDITLOG.isTraceEnabled()) { User user = result.getUser(); UserGroupInformation ugi = user != null ? user.getUGI()
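The dedup fix described above can be sketched as a small simulation. This is illustrative only, not the real HBase API: AuditDedupSketch, preRevoke, and revoke below are stand-ins for the coprocessor pre-hook path (which already audits via AccessChecker.logResult) and for MasterRpcServices.revoke with its second AUDITLOG.trace call removed.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the dedup fix: the permission check inside the
// pre-hook is the single audit point, so revoke() itself no longer logs.
public class AuditDedupSketch {
    static final List<String> AUDITLOG = new ArrayList<>();

    // Stand-in for AccessChecker.requireGlobalPermission -> logResult
    static void preRevoke(String user, String permission) {
        AUDITLOG.add("Access allowed for user " + user
            + "; request: revoke; permission: " + permission);
    }

    // Stand-in for MasterRpcServices.revoke after the fix:
    // no second AUDITLOG.trace call here.
    static void revoke(String user, String permission) {
        preRevoke(user, permission); // the only audit point
        // ... the actual permission removal would happen here
    }

    public static void main(String[] args) {
        revoke("alice", "READ");
        // one revoke call -> exactly one audit entry
        System.out.println(AUDITLOG.size()); // prints 1
    }
}
```

With the duplicate trace removed, one revoke produces exactly one audit record instead of two.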
[jira] [Updated] (HBASE-27525) AccessChecker records auditlog at trace level, while RPCServices record allow at info, deny at warn.
[ https://issues.apache.org/jira/browse/HBASE-27525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27525: Summary: AccessChecker records auditlog at trace level, while RPCServices record allow at info, deny at warn. (was: AccessChecker records auditlog for AccessDeniedException while some classes are not.) > AccessChecker records auditlog at trace level, while RPCServices record allow > at info, deny at warn. > > > Key: HBASE-27525 > URL: https://issues.apache.org/jira/browse/HBASE-27525 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: thirdparty-4.1.3 >Reporter: Beibei Zhao >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
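One way to reconcile the level mismatch described in HBASE-27525 is to derive the audit level from the authorization outcome, so every call site records allow at the info level and deny at the warn level instead of AccessChecker's unconditional trace. The sketch below is hypothetical (the levelFor helper is not HBase code) and uses java.util.logging, whose analogue of WARN is WARNING:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: choose the audit level from the authorization outcome
// so allow/deny records are consistent across AccessChecker and RPCServices.
public class AuditLevelSketch {
    static final Logger AUDITLOG = Logger.getLogger("SecurityLogger.access");

    // allow -> INFO, deny -> WARNING (java.util.logging has no WARN literal)
    static Level levelFor(boolean allowed) {
        return allowed ? Level.INFO : Level.WARNING;
    }

    static void logResult(boolean allowed, String user, String request) {
        AUDITLOG.log(levelFor(allowed), "Access {0} for user {1}; request: {2}",
            new Object[] { allowed ? "allowed" : "denied", user, request });
    }

    public static void main(String[] args) {
        logResult(true, "alice", "grant");   // logged at INFO
        logResult(false, "bob", "revoke");   // logged at WARNING
    }
}
```

The design point is that the level is a function of the result, so no call site can accidentally audit a denial below the configured threshold.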
[jira] [Updated] (HBASE-27525) AccessChecker records auditlog at trace level, while RPCServices record allow at info, deny at warn.
[ https://issues.apache.org/jira/browse/HBASE-27525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27525: Issue Type: Improvement (was: Bug) > AccessChecker records auditlog at trace level, while RPCServices record allow > at info, deny at warn. > > > Key: HBASE-27525 > URL: https://issues.apache.org/jira/browse/HBASE-27525 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC >Affects Versions: thirdparty-4.1.3 >Reporter: Beibei Zhao >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27528) log duplication issues in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27528: Description: MasterRpcServices records audit logs in privileged operations (grant, revoke) and in vital APIs like "execMasterService". {code:java} public RevokeResponse revoke(RpcController controller, RevokeRequest request) throws ServiceException { try { .. server.cpHost.preRevoke(userPermission); // has audit log in AccessChecker .. // removeUserPermission User caller = RpcServer.getRequestUser().orElse(null); if (AUDITLOG.isTraceEnabled()) { // audit log should record all permission changes String remoteAddress = RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); AUDITLOG.trace("User {} (remote address: {}) revoked permission {}", caller, remoteAddress, userPermission); } .. } {code} But I found a path from *server.cpHost.preRevoke(userPermission);* to the {*}AccessChecker audit log{*}, which causes {*}log duplication{*}. {code:java} public void requireGlobalPermission(User user, String request, Action perm, String namespace) throws IOException { AuthResult authResult; if (authManager.authorizeUserGlobal(user, perm)) { authResult = AuthResult.allow(request, "Global check allowed", user, perm, null); authResult.getParams().setNamespace(namespace); logResult(authResult); } else { authResult = AuthResult.deny(request, "Global check failed", user, perm, null); authResult.getParams().setNamespace(namespace); logResult(authResult); throw new AccessDeniedException( "Insufficient permissions for user '" + (user != null ? user.getShortName() : "null") + "' (global, action=" + perm.toString() + ")"); } } public static void logResult(AuthResult result) { if (AUDITLOG.isTraceEnabled()) { User user = result.getUser(); UserGroupInformation ugi = user != null ?
user.getUGI() : null; AUDITLOG.trace( "Access {} for user {}; reason: {}; remote address: {}; request: {}; context: {};" + "auth method: {}", (result.isAllowed() ? "allowed" : "denied"), (user != null ? user.getShortName() : "UNKNOWN"), result.getReason(), RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""), result.getRequest(), result.toContextString(), ugi != null ? ugi.getAuthenticationMethod() : "UNKNOWN"); } } {code} Since *AccessChecker* already integrates audit logging for permission checks, I'll delete the duplicate log in MasterRpcServices. There may be more problems like this; I'll check later and commit the code. -There are many "write" operations like "deleteTable" that may cause security problems and should also record an audit log.- was: MasterRpcServices record audit log in privileged operations (grant, revoke) and vital apis like "execMasterService". {code:java} public ClientProtos.CoprocessorServiceResponse execMasterService(final RpcController controller, .. String remoteAddress = RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); User caller = RpcServer.getRequestUser().orElse(null); AUDITLOG.info("User {} (remote address: {}) master service request for {}.{}", caller, remoteAddress, serviceName, methodName); return CoprocessorRpcUtils.getResponse(execResult, HConstants.EMPTY_BYTE_ARRAY); } catch (IOException ie) { throw new ServiceException(ie); } } {code} There are many "write" operations like "deleteTable", which may cause security problems, should also record an audit log. {code:java} public DeleteTableResponse deleteTable(RpcController controller, DeleteTableRequest request) throws ServiceException { try { long procId = server.deleteTable(ProtobufUtil.toTableName(request.getTableName()), request.getNonceGroup(), request.getNonce()); // an audit log is required here.
return DeleteTableResponse.newBuilder().setProcId(procId).build(); } catch (IOException ioe) { throw new ServiceException(ioe); } } {code} > log duplication issues in MasterRpcServices > --- > > Key: HBASE-27528 > URL: https://issues.apache.org/jira/browse/HBASE-27528 > Project: HBase > Issue Type: Bug > Components: logging, master, rpc, security >Reporter: Beibei Zhao >Priority: Major > > MasterRpcServices record audit log in privileged operations (grant, revoke) > and vital apis like "execMasterService". > {code:java} > public RevokeResponse revoke(RpcController controller, RevokeRequest > request) > throws ServiceException { > try { > .. > server.cpHost.preRevoke(userPermission); //
[jira] [Updated] (HBASE-27526) NettyHBaseSaslRpcServerHandler.channelRead0 forget to record "AUTH_FAILED_FOR" auditlog for an exception.
[ https://issues.apache.org/jira/browse/HBASE-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27526: Issue Type: Brainstorming (was: Improvement) > NettyHBaseSaslRpcServerHandler.channelRead0 forget to record > "AUTH_FAILED_FOR" auditlog for an exception. > - > > Key: HBASE-27526 > URL: https://issues.apache.org/jira/browse/HBASE-27526 > Project: HBase > Issue Type: Brainstorming >Reporter: Beibei Zhao >Priority: Minor > > In other methods such as SimpleServerRpcConnection.saslReadAndProcess, they > always record "AUTH_FAILED_FOR" for an exception, and "AUTH_SUCCESSFUL_FOR" > after task is completed like this: > {code:java} > private void saslReadAndProcess(ByteBuff saslToken) throws IOException, > InterruptedException { > .. > } catch (IOException e) { > .. > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, > clientIP, > saslServer.getAttemptingUser()); > throw e; > } > .. > if (saslServer.isComplete()) { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > .. > } > } > } > {code} > but NettyHBaseSaslRpcServerHandler.channelRead0 only record > "AUTH_SUCCESSFUL_FOR" in finishSaslNegotiation, and just throw Exception > without record "AUTH_FAILED_FOR": > {code:java} > protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws > Exception { > .. > if (saslServer.isComplete()) { > conn.finishSaslNegotiation(); > .. > } > } > void finishSaslNegotiation() throws IOException { > .. 
> RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > } > {code} > So I think an exceptionCaught should be called here: > {code:java} > public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) > throws Exception { > LOG.error("Error when doing SASL handshade, provider={}", conn.provider, > cause); > Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); > doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), > sendToClient.getLocalizedMessage()); > rpcServer.metrics.authenticationFailure(); > String clientIP = this.toString(); > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, > conn.saslServer != null ? conn.saslServer.getAttemptingUser() : > "Unknown"); > NettyFutureUtils.safeClose(ctx); > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
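The pattern argued for in HBASE-27526 above, auditing both outcomes of the SASL negotiation, can be sketched without the Netty types. All names below are illustrative stand-ins, not the HBase classes: negotiate() plays the role of the channelRead0/exceptionCaught pair, with the success branch mirroring finishSaslNegotiation and the catch branch mirroring the proposed exceptionCaught audit.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in: both the success path (finishSaslNegotiation) and
// the failure path (exceptionCaught) emit an audit record.
public class SaslAuditSketch {
    static final List<String> AUDITLOG = new ArrayList<>();
    static final String AUTH_SUCCESSFUL_FOR = "Auth successful for ";
    static final String AUTH_FAILED_FOR = "Auth failed for ";

    // Stand-in for the channelRead0 / exceptionCaught pair
    static void negotiate(String clientIP, String attemptingUser, boolean tokenValid) {
        try {
            if (!tokenValid) {
                throw new IllegalStateException("SASL token rejected");
            }
            // success path: mirror of finishSaslNegotiation
            AUDITLOG.add(AUTH_SUCCESSFUL_FOR + attemptingUser);
        } catch (IllegalStateException e) {
            // failure path: mirror of the proposed exceptionCaught audit;
            // attempting user could be null, as in the Jira snippet
            AUDITLOG.add(AUTH_FAILED_FOR + clientIP + ": "
                + (attemptingUser != null ? attemptingUser : "Unknown"));
        }
    }

    public static void main(String[] args) {
        negotiate("10.0.0.1", "alice", true);
        negotiate("10.0.0.2", null, false);
        System.out.println(AUDITLOG); // both outcomes audited
    }
}
```

The point of the structure is that no exit from the negotiation, successful or not, leaves the audit log silent.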
[jira] [Updated] (HBASE-27528) log duplication issues in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27528: Issue Type: Bug (was: Improvement) > log duplication issues in MasterRpcServices > --- > > Key: HBASE-27528 > URL: https://issues.apache.org/jira/browse/HBASE-27528 > Project: HBase > Issue Type: Bug > Components: logging, master, rpc, security >Reporter: Beibei Zhao >Priority: Major > > MasterRpcServices record audit log in privileged operations (grant, revoke) > and vital apis like "execMasterService". > > {code:java} > public ClientProtos.CoprocessorServiceResponse execMasterService(final > RpcController controller, > .. > String remoteAddress = > RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); > User caller = RpcServer.getRequestUser().orElse(null); > AUDITLOG.info("User {} (remote address: {}) master service request for > {}.{}", caller, > remoteAddress, serviceName, methodName); > return CoprocessorRpcUtils.getResponse(execResult, > HConstants.EMPTY_BYTE_ARRAY); > } catch (IOException ie) { > throw new ServiceException(ie); > } > } > {code} > There are many "write" operations like "deleteTable", which may cause > security problems, should also record an audit log. > {code:java} > public DeleteTableResponse deleteTable(RpcController controller, > DeleteTableRequest request) > throws ServiceException { > try { > long procId = > server.deleteTable(ProtobufUtil.toTableName(request.getTableName()), > request.getNonceGroup(), request.getNonce()); > // an audit log is required here. > return DeleteTableResponse.newBuilder().setProcId(procId).build(); > } catch (IOException ioe) { > throw new ServiceException(ioe); > } > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27526) NettyHBaseSaslRpcServerHandler.channelRead0 forget to record "AUTH_FAILED_FOR" auditlog for an exception.
[ https://issues.apache.org/jira/browse/HBASE-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27526: Issue Type: Improvement (was: Bug) > NettyHBaseSaslRpcServerHandler.channelRead0 forget to record > "AUTH_FAILED_FOR" auditlog for an exception. > - > > Key: HBASE-27526 > URL: https://issues.apache.org/jira/browse/HBASE-27526 > Project: HBase > Issue Type: Improvement >Reporter: Beibei Zhao >Priority: Minor > > In other methods such as SimpleServerRpcConnection.saslReadAndProcess, they > always record "AUTH_FAILED_FOR" for an exception, and "AUTH_SUCCESSFUL_FOR" > after task is completed like this: > {code:java} > private void saslReadAndProcess(ByteBuff saslToken) throws IOException, > InterruptedException { > .. > } catch (IOException e) { > .. > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, > clientIP, > saslServer.getAttemptingUser()); > throw e; > } > .. > if (saslServer.isComplete()) { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > .. > } > } > } > {code} > but NettyHBaseSaslRpcServerHandler.channelRead0 only record > "AUTH_SUCCESSFUL_FOR" in finishSaslNegotiation, and just throw Exception > without record "AUTH_FAILED_FOR": > {code:java} > protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws > Exception { > .. > if (saslServer.isComplete()) { > conn.finishSaslNegotiation(); > .. > } > } > void finishSaslNegotiation() throws IOException { > .. 
> RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > } > {code} > So I think an exceptionCaught should be called here: > {code:java} > public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) > throws Exception { > LOG.error("Error when doing SASL handshade, provider={}", conn.provider, > cause); > Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); > doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), > sendToClient.getLocalizedMessage()); > rpcServer.metrics.authenticationFailure(); > String clientIP = this.toString(); > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, > conn.saslServer != null ? conn.saslServer.getAttemptingUser() : > "Unknown"); > NettyFutureUtils.safeClose(ctx); > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-27528) Add audit logs in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17655803#comment-17655803 ] Beibei Zhao edited comment on HBASE-27528 at 1/8/23 3:38 PM: - [~bbeaudreault] Thanks for your reply! You are right! I found a path from *revoke* to *AccessChecker* (log for deny or allow for a request). So there is a *log duplication* issue; I'll commit the code later. was (Author: JIRAUSER296385): [~bbeaudreault] Thanks for your reply! You are right! I found a path from *revoke* to *AccessChecker * (log for deny or allow for a request). So there is a log duplication issue, I' ll commit the code later. > Add audit logs in MasterRpcServices > --- > > Key: HBASE-27528 > URL: https://issues.apache.org/jira/browse/HBASE-27528 > Project: HBase > Issue Type: Improvement > Components: logging, master, rpc, security >Reporter: Beibei Zhao >Priority: Major > > MasterRpcServices record audit log in privileged operations (grant, revoke) > and vital apis like "execMasterService". > > {code:java} > public ClientProtos.CoprocessorServiceResponse execMasterService(final > RpcController controller, > .. > String remoteAddress = > RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); > User caller = RpcServer.getRequestUser().orElse(null); > AUDITLOG.info("User {} (remote address: {}) master service request for > {}.{}", caller, > remoteAddress, serviceName, methodName); > return CoprocessorRpcUtils.getResponse(execResult, > HConstants.EMPTY_BYTE_ARRAY); > } catch (IOException ie) { > throw new ServiceException(ie); > } > } > {code} > There are many "write" operations like "deleteTable", which may cause > security problems, should also record an audit log. 
> {code:java} > public DeleteTableResponse deleteTable(RpcController controller, > DeleteTableRequest request) > throws ServiceException { > try { > long procId = > server.deleteTable(ProtobufUtil.toTableName(request.getTableName()), > request.getNonceGroup(), request.getNonce()); > // an audit log is required here. > return DeleteTableResponse.newBuilder().setProcId(procId).build(); > } catch (IOException ioe) { > throw new ServiceException(ioe); > } > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27528) log duplication issues in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27528: Summary: log duplication issues in MasterRpcServices (was: Add audit logs in MasterRpcServices) > log duplication issues in MasterRpcServices > --- > > Key: HBASE-27528 > URL: https://issues.apache.org/jira/browse/HBASE-27528 > Project: HBase > Issue Type: Improvement > Components: logging, master, rpc, security >Reporter: Beibei Zhao >Priority: Major > > MasterRpcServices record audit log in privileged operations (grant, revoke) > and vital apis like "execMasterService". > > {code:java} > public ClientProtos.CoprocessorServiceResponse execMasterService(final > RpcController controller, > .. > String remoteAddress = > RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); > User caller = RpcServer.getRequestUser().orElse(null); > AUDITLOG.info("User {} (remote address: {}) master service request for > {}.{}", caller, > remoteAddress, serviceName, methodName); > return CoprocessorRpcUtils.getResponse(execResult, > HConstants.EMPTY_BYTE_ARRAY); > } catch (IOException ie) { > throw new ServiceException(ie); > } > } > {code} > There are many "write" operations like "deleteTable", which may cause > security problems, should also record an audit log. > {code:java} > public DeleteTableResponse deleteTable(RpcController controller, > DeleteTableRequest request) > throws ServiceException { > try { > long procId = > server.deleteTable(ProtobufUtil.toTableName(request.getTableName()), > request.getNonceGroup(), request.getNonce()); > // an audit log is required here. > return DeleteTableResponse.newBuilder().setProcId(procId).build(); > } catch (IOException ioe) { > throw new ServiceException(ioe); > } > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27528) Add audit logs in MasterRpcServices
[ https://issues.apache.org/jira/browse/HBASE-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17655803#comment-17655803 ] Beibei Zhao commented on HBASE-27528: - [~bbeaudreault] Thanks for your reply! You are right! I found a path from *revoke* to *AccessChecker* (log for deny or allow for a request). So there is a log duplication issue; I'll commit the code later. > Add audit logs in MasterRpcServices > --- > > Key: HBASE-27528 > URL: https://issues.apache.org/jira/browse/HBASE-27528 > Project: HBase > Issue Type: Improvement > Components: logging, master, rpc, security >Reporter: Beibei Zhao >Priority: Major > > MasterRpcServices record audit log in privileged operations (grant, revoke) > and vital apis like "execMasterService". > > {code:java} > public ClientProtos.CoprocessorServiceResponse execMasterService(final > RpcController controller, > .. > String remoteAddress = > RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); > User caller = RpcServer.getRequestUser().orElse(null); > AUDITLOG.info("User {} (remote address: {}) master service request for > {}.{}", caller, > remoteAddress, serviceName, methodName); > return CoprocessorRpcUtils.getResponse(execResult, > HConstants.EMPTY_BYTE_ARRAY); > } catch (IOException ie) { > throw new ServiceException(ie); > } > } > {code} > There are many "write" operations like "deleteTable", which may cause > security problems, should also record an audit log. > {code:java} > public DeleteTableResponse deleteTable(RpcController controller, > DeleteTableRequest request) > throws ServiceException { > try { > long procId = > server.deleteTable(ProtobufUtil.toTableName(request.getTableName()), > request.getNonceGroup(), request.getNonce()); > // an audit log is required here. > return DeleteTableResponse.newBuilder().setProcId(procId).build(); > } catch (IOException ioe) { > throw new ServiceException(ioe); > } > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27526) NettyHBaseSaslRpcServerHandler.channelRead0 forget to record "AUTH_FAILED_FOR" auditlog for an exception.
[ https://issues.apache.org/jira/browse/HBASE-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27526: Priority: Minor (was: Major) > NettyHBaseSaslRpcServerHandler.channelRead0 forget to record > "AUTH_FAILED_FOR" auditlog for an exception. > - > > Key: HBASE-27526 > URL: https://issues.apache.org/jira/browse/HBASE-27526 > Project: HBase > Issue Type: Bug >Reporter: Beibei Zhao >Priority: Minor > > In other methods such as SimpleServerRpcConnection.saslReadAndProcess, they > always record "AUTH_FAILED_FOR" for an exception, and "AUTH_SUCCESSFUL_FOR" > after task is completed like this: > {code:java} > private void saslReadAndProcess(ByteBuff saslToken) throws IOException, > InterruptedException { > .. > } catch (IOException e) { > .. > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, > clientIP, > saslServer.getAttemptingUser()); > throw e; > } > .. > if (saslServer.isComplete()) { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > .. > } > } > } > {code} > but NettyHBaseSaslRpcServerHandler.channelRead0 only record > "AUTH_SUCCESSFUL_FOR" in finishSaslNegotiation, and just throw Exception > without record "AUTH_FAILED_FOR": > {code:java} > protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws > Exception { > .. > if (saslServer.isComplete()) { > conn.finishSaslNegotiation(); > .. > } > } > void finishSaslNegotiation() throws IOException { > .. 
> RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > } > {code} > So I think an exceptionCaught should be called here: > {code:java} > public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) > throws Exception { > LOG.error("Error when doing SASL handshade, provider={}", conn.provider, > cause); > Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); > doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), > sendToClient.getLocalizedMessage()); > rpcServer.metrics.authenticationFailure(); > String clientIP = this.toString(); > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, > conn.saslServer != null ? conn.saslServer.getAttemptingUser() : > "Unknown"); > NettyFutureUtils.safeClose(ctx); > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27526) NettyHBaseSaslRpcServerHandler.channelRead0 forget to record "AUTH_FAILED_FOR" auditlog for an exception.
[ https://issues.apache.org/jira/browse/HBASE-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao resolved HBASE-27526. - Resolution: Not A Problem > NettyHBaseSaslRpcServerHandler.channelRead0 forget to record > "AUTH_FAILED_FOR" auditlog for an exception. > - > > Key: HBASE-27526 > URL: https://issues.apache.org/jira/browse/HBASE-27526 > Project: HBase > Issue Type: Bug >Reporter: Beibei Zhao >Priority: Major > > In other methods such as SimpleServerRpcConnection.saslReadAndProcess, they > always record "AUTH_FAILED_FOR" for an exception, and "AUTH_SUCCESSFUL_FOR" > after task is completed like this: > {code:java} > private void saslReadAndProcess(ByteBuff saslToken) throws IOException, > InterruptedException { > .. > } catch (IOException e) { > .. > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, > clientIP, > saslServer.getAttemptingUser()); > throw e; > } > .. > if (saslServer.isComplete()) { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > .. > } > } > } > {code} > but NettyHBaseSaslRpcServerHandler.channelRead0 only record > "AUTH_SUCCESSFUL_FOR" in finishSaslNegotiation, and just throw Exception > without record "AUTH_FAILED_FOR": > {code:java} > protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws > Exception { > .. > if (saslServer.isComplete()) { > conn.finishSaslNegotiation(); > .. > } > } > void finishSaslNegotiation() throws IOException { > .. 
> RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > } > {code} > So I think an exceptionCaught should be called here: > {code:java} > public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) > throws Exception { > LOG.error("Error when doing SASL handshade, provider={}", conn.provider, > cause); > Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); > doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), > sendToClient.getLocalizedMessage()); > rpcServer.metrics.authenticationFailure(); > String clientIP = this.toString(); > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, > conn.saslServer != null ? conn.saslServer.getAttemptingUser() : > "Unknown"); > NettyFutureUtils.safeClose(ctx); > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27528) Add audit logs in MasterRpcServices
Beibei Zhao created HBASE-27528: --- Summary: Add audit logs in MasterRpcServices Key: HBASE-27528 URL: https://issues.apache.org/jira/browse/HBASE-27528 Project: HBase Issue Type: Improvement Components: logging, master, rpc, security Affects Versions: thirdparty-4.1.3 Reporter: Beibei Zhao MasterRpcServices records audit logs in privileged operations (grant, revoke) and in vital APIs like "execMasterService". {code:java} public ClientProtos.CoprocessorServiceResponse execMasterService(final RpcController controller, .. String remoteAddress = RpcServer.getRemoteAddress().map(InetAddress::toString).orElse(""); User caller = RpcServer.getRequestUser().orElse(null); AUDITLOG.info("User {} (remote address: {}) master service request for {}.{}", caller, remoteAddress, serviceName, methodName); return CoprocessorRpcUtils.getResponse(execResult, HConstants.EMPTY_BYTE_ARRAY); } catch (IOException ie) { throw new ServiceException(ie); } } {code} There are many "write" operations like "deleteTable" that may cause security problems and should also record an audit log. {code:java} public DeleteTableResponse deleteTable(RpcController controller, DeleteTableRequest request) throws ServiceException { try { long procId = server.deleteTable(ProtobufUtil.toTableName(request.getTableName()), request.getNonceGroup(), request.getNonce()); // an audit log is required here. return DeleteTableResponse.newBuilder().setProcId(procId).build(); } catch (IOException ioe) { throw new ServiceException(ioe); } } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
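The deleteTable suggestion in the issue above could look roughly like the sketch below. It is a hedged stand-in, not HBase code: the auditWrite helper is hypothetical, and the local deleteTable only simulates the RPC method with the suggested audit point added before returning.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: record who performed a destructive master operation.
// auditWrite and this deleteTable are stand-ins, not the HBase implementation.
public class WriteAuditSketch {
    static final List<String> AUDITLOG = new ArrayList<>();

    // Hypothetical helper: one formatting point for all "write" audit records
    static void auditWrite(String caller, String remoteAddress, String op, String target) {
        AUDITLOG.add(String.format("User %s (remote address: %s) %s %s",
            caller, remoteAddress, op, target));
    }

    // Stand-in for MasterRpcServices.deleteTable with the suggested audit point
    static long deleteTable(String tableName, String caller, String remoteAddress) {
        long procId = 42L; // stand-in for server.deleteTable(...)
        auditWrite(caller, remoteAddress, "deleted table", tableName);
        return procId;
    }

    public static void main(String[] args) {
        deleteTable("t1", "alice", "10.0.0.1");
        System.out.println(AUDITLOG.get(0));
        // prints: User alice (remote address: 10.0.0.1) deleted table t1
    }
}
```

Centralizing the formatting in one helper keeps caller, remote address, and operation consistent across every audited "write" RPC.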
[jira] [Updated] (HBASE-27526) NettyHBaseSaslRpcServerHandler.channelRead0 forget to record "AUTH_FAILED_FOR" auditlog for an exception.
[ https://issues.apache.org/jira/browse/HBASE-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27526: Summary: NettyHBaseSaslRpcServerHandler.channelRead0 forget to record "AUTH_FAILED_FOR" auditlog for an exception. (was: NettyHBaseSaslRpcServerHandler.channelRead0 forget record "AUTH_FAILED_FOR" auditlog for an exception.) > NettyHBaseSaslRpcServerHandler.channelRead0 forget to record > "AUTH_FAILED_FOR" auditlog for an exception. > - > > Key: HBASE-27526 > URL: https://issues.apache.org/jira/browse/HBASE-27526 > Project: HBase > Issue Type: Bug >Reporter: Beibei Zhao >Priority: Major > > Other methods, such as SimpleServerRpcConnection.saslReadAndProcess, always > record "AUTH_FAILED_FOR" on an exception and "AUTH_SUCCESSFUL_FOR" after the > task completes, like this: > {code:java} > private void saslReadAndProcess(ByteBuff saslToken) throws IOException, > InterruptedException { > .. > } catch (IOException e) { > .. > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, > clientIP, > saslServer.getAttemptingUser()); > throw e; > } > .. > if (saslServer.isComplete()) { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > .. > } > } > } > {code} > but NettyHBaseSaslRpcServerHandler.channelRead0 only records > "AUTH_SUCCESSFUL_FOR" (in finishSaslNegotiation) and just rethrows the > exception without recording "AUTH_FAILED_FOR": > {code:java} > protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws > Exception { > .. > if (saslServer.isComplete()) { > conn.finishSaslNegotiation(); > .. > } > } > void finishSaslNegotiation() throws IOException { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > } > {code} > So I think exceptionCaught should be called here: > {code:java} > public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) > throws Exception { > LOG.error("Error when doing SASL handshade, provider={}", conn.provider, > cause); > Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); > doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), > sendToClient.getLocalizedMessage()); > rpcServer.metrics.authenticationFailure(); > String clientIP = this.toString(); > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, > conn.saslServer != null ? conn.saslServer.getAttemptingUser() : > "Unknown"); > NettyFutureUtils.safeClose(ctx); > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
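The symmetric success/failure pattern the report asks for can be modeled without any HBase or Netty dependencies. This is a toy sketch, not the actual handler: the constant values and the "Unknown" fallback mirror the snippets above, while the class, method names, and the boolean failure trigger are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class SaslAuditSketch {
    static final String AUTH_SUCCESSFUL_FOR = "Auth successful for ";
    static final String AUTH_FAILED_FOR = "Auth failed for ";

    final List<String> auditLog = new ArrayList<>();

    // attemptingUser may be null, mirroring saslServer.getAttemptingUser().
    void negotiate(String attemptingUser, boolean succeeds) {
        try {
            if (!succeeds) {
                throw new RuntimeException("SASL handshake error");
            }
            // Success path: record AUTH_SUCCESSFUL_FOR, as finishSaslNegotiation does.
            auditLog.add(AUTH_SUCCESSFUL_FOR + attemptingUser);
        } catch (RuntimeException e) {
            // Failure path must ALSO leave an audit record -- this is the
            // branch channelRead0 is missing according to the report.
            auditLog.add(AUTH_FAILED_FOR
                + (attemptingUser != null ? attemptingUser : "Unknown"));
        }
    }

    public static void main(String[] args) {
        SaslAuditSketch s = new SaslAuditSketch();
        s.negotiate("bob", true);
        s.negotiate(null, false);
        assert s.auditLog.get(0).equals("Auth successful for bob");
        assert s.auditLog.get(1).equals("Auth failed for Unknown");
        System.out.println("ok");
    }
}
```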
[jira] [Updated] (HBASE-27526) NettyHBaseSaslRpcServerHandler implements exceptionCaught method but not use.
[ https://issues.apache.org/jira/browse/HBASE-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27526: Description: In other methods such as SimpleServerRpcConnection.saslReadAndProcess, they always record "AUTH_FAILED_FOR" for an exception, and "AUTH_SUCCESSFUL_FOR" after task is completed like this: {code:java} private void saslReadAndProcess(ByteBuff saslToken) throws IOException, InterruptedException { .. } catch (IOException e) { .. // attempting user could be null RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, saslServer.getAttemptingUser()); throw e; } .. if (saslServer.isComplete()) { .. RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); .. } } } {code} but NettyHBaseSaslRpcServerHandler.channelRead0 only record "AUTH_SUCCESSFUL_FOR" in finishSaslNegotiation, and just throw Exception without record "AUTH_FAILED_FOR": {code:java} protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception { .. if (saslServer.isComplete()) { conn.finishSaslNegotiation(); .. } } void finishSaslNegotiation() throws IOException { .. RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); } {code} So I think an exceptionCaught should be called here: {code:java} public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { LOG.error("Error when doing SASL handshade, provider={}", conn.provider, cause); Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), sendToClient.getLocalizedMessage()); rpcServer.metrics.authenticationFailure(); String clientIP = this.toString(); // attempting user could be null RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, conn.saslServer != null ? 
conn.saslServer.getAttemptingUser() : "Unknown"); NettyFutureUtils.safeClose(ctx); } {code} > NettyHBaseSaslRpcServerHandler implements exceptionCaught method but not use. > - > > Key: HBASE-27526 > URL: https://issues.apache.org/jira/browse/HBASE-27526 > Project: HBase > Issue Type: Bug >Reporter: Beibei Zhao >Priority: Major > > In other methods such as SimpleServerRpcConnection.saslReadAndProcess, they > always record "AUTH_FAILED_FOR" for an exception, and "AUTH_SUCCESSFUL_FOR" > after task is completed like this: > {code:java} > private void saslReadAndProcess(ByteBuff saslToken) throws IOException, > InterruptedException { > .. > } catch (IOException e) { > .. > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, > clientIP, > saslServer.getAttemptingUser()); > throw e; > } > .. > if (saslServer.isComplete()) { > .. > RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > .. > } > } > } > {code} > but NettyHBaseSaslRpcServerHandler.channelRead0 only record > "AUTH_SUCCESSFUL_FOR" in finishSaslNegotiation, and just throw Exception > without record "AUTH_FAILED_FOR": > {code:java} > protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws > Exception { > .. > if (saslServer.isComplete()) { > conn.finishSaslNegotiation(); > .. > } > } > void finishSaslNegotiation() throws IOException { > .. 
> RpcServer.AUDITLOG.info(RpcServer.AUTH_SUCCESSFUL_FOR + ugi); > } > {code} > So I think an exceptionCaught should be called here: > {code:java} > public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) > throws Exception { > LOG.error("Error when doing SASL handshade, provider={}", conn.provider, > cause); > Throwable sendToClient = HBaseSaslRpcServer.unwrap(cause); > doResponse(ctx, SaslStatus.ERROR, null, sendToClient.getClass().getName(), > sendToClient.getLocalizedMessage()); > rpcServer.metrics.authenticationFailure(); > String clientIP = this.toString(); > // attempting user could be null > RpcServer.AUDITLOG.warn("{}{}: {}", RpcServer.AUTH_FAILED_FOR, clientIP, > conn.saslServer != null ? conn.saslServer.getAttemptingUser() : > "Unknown"); > NettyFutureUtils.safeClose(ctx); > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27526) NettyHBaseSaslRpcServerHandler implements exceptionCaught method but not use.
Beibei Zhao created HBASE-27526: --- Summary: NettyHBaseSaslRpcServerHandler implements exceptionCaught method but not use. Key: HBASE-27526 URL: https://issues.apache.org/jira/browse/HBASE-27526 Project: HBase Issue Type: Bug Reporter: Beibei Zhao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27525) AccessChecker records auditlog for AccessDeniedException while some classes are not.
[ https://issues.apache.org/jira/browse/HBASE-27525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Beibei Zhao updated HBASE-27525: Component/s: IPC/RPC Affects Version/s: thirdparty-4.1.3 > AccessChecker records auditlog for AccessDeniedException while some classes > are not. > > > Key: HBASE-27525 > URL: https://issues.apache.org/jira/browse/HBASE-27525 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: thirdparty-4.1.3 >Reporter: Beibei Zhao >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27525) AccessChecker records auditlog for AccessDeniedException while some classes are not.
Beibei Zhao created HBASE-27525: --- Summary: AccessChecker records auditlog for AccessDeniedException while some classes are not. Key: HBASE-27525 URL: https://issues.apache.org/jira/browse/HBASE-27525 Project: HBase Issue Type: Bug Reporter: Beibei Zhao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-25322) Redundant Reference file in bottom region of split
[ https://issues.apache.org/jira/browse/HBASE-25322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440161#comment-17440161 ] Baiqiang Zhao commented on HBASE-25322: --- Hi [~Xiaolin Ha], This issue breaks branch-2, because branch-2 does not have the HBaseTestingUtil class... > Redundant Reference file in bottom region of split > -- > > Key: HBASE-25322 > URL: https://issues.apache.org/jira/browse/HBASE-25322 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha-2 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Blocker > Fix For: 3.0.0-alpha-2, 2.4.9 > > > When we split a region that ranges over (,), the bottom region should contain > keys of (, split key), and the top region should contain keys of [split key, ). > Currently, if we do the following operations: > # put rowkeys 100,101,102,103,104,105 to a table, and flush the memstore to > make a hfile with rowkeys 100,101,102,103,104,105; > # put rowkeys 200,201,202,203,204,205 to the table, and flush the memstore > to make a hfile with rowkeys 200,201,202,203,204,205; > # split the table region, using split key 200; > # then the bottom region will have two Reference files, while the top region > only has one. > But we expect the bottom region to have only one Reference file, like the top > region. > That's because when generating Reference files in the child regions, the bottom > region used the `PrivateCellUtil.createLastOnRow(splitRow)` cell to compare > to first keys in the hfiles, while the top region used the > `PrivateCellUtil.createFirstOnRow(splitRow)` cell to compare to last keys in > the hfiles. > `LastOnRow(splitRow)` means the maximum row generated from the split row, while > `FirstOnRow(splitRow)` means the minimum row generated from the split row. The > split row should be in the top region, and we should use > `FirstOnRow(splitRow)` to compare to the hfile first and last keys in both the > bottom and top regions. 
> Though the redundant Reference file will not be read by the bottom region, > the compaction of the redundant Reference file will result in an empty file if > only this redundant Reference file participates in a compaction. -- This message was sent by Atlassian Jira (v8.20.1#820001)
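The off-by-one the description explains can be shown with plain integers standing in for row keys. This is a toy model, not the actual Cell comparison: a store file [firstKey, lastKey] belongs in the bottom daughter only if it holds keys strictly below the split key.

```java
public class SplitRefSketch {
    // Correct check, matching FirstOnRow(splitKey) semantics: the split
    // key itself is excluded from the bottom region, so a file starting
    // at the split key needs no Reference there.
    static boolean bottomNeedsReference(int firstKey, int splitKey) {
        return firstKey < splitKey;
    }

    // Buggy check: comparing against LastOnRow(splitKey) effectively lets
    // a file that starts exactly at the split key leak into the bottom
    // region, producing the redundant Reference file.
    static boolean buggyBottomNeedsReference(int firstKey, int splitKey) {
        return firstKey <= splitKey;
    }

    public static void main(String[] args) {
        int split = 200;
        // File A holds rows 100..105, file B holds rows 200..205.
        assert bottomNeedsReference(100, split);      // A: referenced, correct
        assert !bottomNeedsReference(200, split);     // B: excluded, correct
        assert buggyBottomNeedsReference(200, split); // B: the reported bug
        System.out.println("ok");
    }
}
```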
[jira] [Created] (HBASE-26056) Cell TTL set to -1 should mean never expire, the same as CF TTL
Baiqiang Zhao created HBASE-26056: - Summary: Cell TTL set to -1 should mean never expire, the same as CF TTL Key: HBASE-26056 URL: https://issues.apache.org/jira/browse/HBASE-26056 Project: HBase Issue Type: Bug Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao We met a user who took it for granted that setting the cell TTL to -1 is the same as the CF TTL, meaning that the cell never expires. However, if the cell TTL is set to -1, the cell expires immediately. In fact, not setting the cell TTL at all means it never expires, but the user set it anyway. We should unify the meaning of cell TTL and CF TTL for the value -1. -- This message was sent by Atlassian Jira (v8.3.4#803005)
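A minimal sketch of the proposed unified semantics (assumed behavior, not the current HBase implementation): -1 means "never expire" for the cell TTL, the same as for the CF TTL.

```java
public class TtlSketch {
    // Assumed sentinel, per the proposal: -1 means "never expire" for
    // cell TTL, matching the existing CF TTL meaning.
    static final long FOREVER = -1L;

    static boolean isExpired(long ttlMs, long ageMs) {
        if (ttlMs == FOREVER) {
            return false; // never expires, regardless of age
        }
        return ageMs > ttlMs;
    }

    public static void main(String[] args) {
        assert !isExpired(FOREVER, Long.MAX_VALUE); // -1: never expires
        assert isExpired(1000, 2000);               // past its TTL
        assert !isExpired(1000, 500);               // still fresh
        System.out.println("ok");
    }
}
```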
[jira] [Commented] (HBASE-25954) When graceful_stop a regionserver, RegionMover always retry to move the regions to other rsgoup's regionservers
[ https://issues.apache.org/jira/browse/HBASE-25954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17355507#comment-17355507 ] Baiqiang Zhao commented on HBASE-25954: --- HBASE-22740 was meant to solve this problem, but branch-2 did not solve it, it has some conflicts. If the regionservers in your rsgroup do not change frequently, you can try the designatedfile or excludefile parameters in graceful_stop.sh. > When graceful_stop a regionserver, RegionMover always retry to move the > regions to other rsgoup's regionservers > --- > > Key: HBASE-25954 > URL: https://issues.apache.org/jira/browse/HBASE-25954 > Project: HBase > Issue Type: Bug >Reporter: Zezhen Jia >Priority: Critical > > [had...@bigdata-hbase-master003.prod.com bin]$ ./graceful_stop.sh > bigdata-hbase-rs105.prod.com 2021-05-28T11:05:06 Disabling load balancer > SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in > [jar:file:/home/hadoop/cloud/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hadoop/cloud/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. SLF4J: Actual binding is of type > [org.slf4j.impl.Log4jLoggerFactory] 2021-05-28T11:05:12 Previous balancer > state was true 2021-05-28T11:05:12 Unloading bigdata-hbase-rs105.prod.com > region(s) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found > binding in > [jar:file:/home/hadoop/cloud/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hadoop/cloud/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. 
SLF4J: Actual binding is of type > [org.slf4j.impl.Log4jLoggerFactory] 2021-05-28 11:05:13,990 INFO > [ReadOnlyZKClient-bigdata-hbase-master003.prod.com:2181,bigdata-hbase-master004.prod.com:2181,bigdata-hbase-rs101.prod.com:2181,bigdata-hbase-rs102.prod.com:2181,bigdata-hbase-rs103.prod.com:2181@0x24aed80c] > zookeeper.ZooKeeper: Client > environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, > built on 03/23/2017 10:13 GMT 2021-05-28 11:05:13,991 INFO > [ReadOnlyZKClient-bigdata-hbase-master003.prod.com:2181,bigdata-hbase-master004.prod.com:2181,bigdata-hbase-rs101.prod.com:2181,bigdata-hbase-rs102.prod.com:2181,bigdata-hbase-rs103.prod.com:2181@0x24aed80c] > zookeeper.ZooKeeper: Client > environment:host.name=bigdata-hbase-master003.prod.com 2021-05-28 > 11:05:13,991 INFO > [ReadOnlyZKClient-bigdata-hbase-master003.prod.com:2181,bigdata-hbase-master004.prod.com:2181,bigdata-hbase-rs101.prod.com:2181,bigdata-hbase-rs102.prod.com:2181,bigdata-hbase-rs103.prod.com:2181@0x24aed80c] > zookeeper.ZooKeeper: Client environment:java.version=1.8.0_241 2021-05-28 > 11:05:13,991 INFO > [ReadOnlyZKClient-bigdata-hbase-master003.prod.com:2181,bigdata-hbase-master004.prod.com:2181,bigdata-hbase-rs101.prod.com:2181,bigdata-hbase-rs102.prod.com:2181,bigdata-hbase-rs103.prod.com:2181@0x24aed80c] > zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation > 2021-05-28 11:05:13,991 INFO > [ReadOnlyZKClient-bigdata-hbase-master003.prod.com:2181,bigdata-hbase-master004.prod.com:2181,bigdata-hbase-rs101.prod.com:2181,bigdata-hbase-rs102.prod.com:2181,bigdata-hbase-rs103.prod.com:2181@0x24aed80c] > zookeeper.ZooKeeper: Client > environment:java.home=/home/hadoop/cloud/jdk1.8.0_241/jre 2021-05-28 > 11:05:13,991 INFO > [ReadOnlyZKClient-bigdata-hbase-master003.prod.com:2181,bigdata-hbase-master004.prod.com:2181,bigdata-hbase-rs101.prod.com:2181,bigdata-hbase-rs102.prod.com:2181,bigdata-hbase-rs103.prod.com:2181@0x24aed80c] > zookeeper.ZooKeeper: >
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353650#comment-17353650 ] Baiqiang Zhao commented on HBASE-25861: --- Nothing in this issue needs to be changed except the UT. If we use Hadoop 3.3, this issue may not be necessary (not tested). But with versions of Hadoop before 3.3, we have to apply this patch. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > When I was solving HBASE-25745 > ([PR3139|https://github.com/apache/hbase/pull/3139]), I found that our use of > the Configuration#addDeprecation API was wrong. > > At present, we call Configuration#addDeprecation in a static block for > the deprecated configuration. But after testing, it turns out that this does > not provide full backward compatibility. When a user upgrades HBase and does not > change the deprecated configuration to the new configuration, he will find > that the deprecated configuration does not take effect, which may not be > consistent with expectations. The specific test results can be seen in the PR > above, and we can see that the calling order of Configuration#addDeprecation is > very important. > > Configuration#addDeprecation is a Hadoop API. Looking through the Hadoop > source code, we find that before creating the Configuration object, the > addDeprecatedKeys() method is called first: > [https://github.com/apache/hadoop/blob/b93e448f9aa66689f1ce5059f6cdce8add130457/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java#L34] > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
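The ordering dependency can be illustrated with a toy registry. This is a simplified model, not Hadoop's actual Configuration class: a deprecation registered after a value has already been resolved no longer remaps the old key, which is why Hadoop registers deprecations before constructing the Configuration.

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecationSketch {
    // Global deprecation table, consulted only at load time.
    static final Map<String, String> DEPRECATIONS = new HashMap<>();

    static void addDeprecation(String oldKey, String newKey) {
        DEPRECATIONS.put(oldKey, newKey);
    }

    final Map<String, String> values = new HashMap<>();

    // Loading remaps a deprecated key only if the deprecation is already known.
    void load(String key, String value) {
        values.put(DEPRECATIONS.getOrDefault(key, key), value);
    }

    String get(String key) {
        return values.get(key);
    }

    public static void main(String[] args) {
        // Deprecation registered too late: the old key was already resolved.
        DeprecationSketch lateConf = new DeprecationSketch();
        lateConf.load("old.key", "v");
        addDeprecation("old.key", "new.key");
        assert lateConf.get("new.key") == null;

        // Deprecation registered first, as Hadoop's addDeprecatedKeys() does.
        DEPRECATIONS.clear();
        addDeprecation("old.key", "new.key");
        DeprecationSketch earlyConf = new DeprecationSketch();
        earlyConf.load("old.key", "v");
        assert "v".equals(earlyConf.get("new.key"));
        System.out.println("ok");
    }
}
```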
[jira] [Comment Edited] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352305#comment-17352305 ] Baiqiang Zhao edited comment on HBASE-25861 at 5/27/21, 7:19 AM: - Thanks [~weichiu] . So after Hadoop-3.3, addDeprecation() can be called after the Configuration is created? The UT that passed before was because it was a bug before Hadoop-3.3? It seems that only need to modify the UT, let me fix it in HBASE-25928. was (Author: deanz): Thanks [~weichiu] . So after Hadoop3-.3, addDeprecation() can be called after the Configuration is created? The UT that passed before was because it was a bug before Hadoop-3.3? It seems that only need to modify the UT, let me fix it in HBASE-25928. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > When I was solving HBASE-25745 > ([PR3139|https://github.com/apache/hbase/pull/3139]), I found that our use of > Configuration#addDeprecation API was wrong. > > At present, we will call Configuration#addDeprecation in the static block for > the deprecated configuration. But after testing, it is found that this does > not complete backward compatibility. When user upgrades HBase and does not > change the deprecated configuration to the new configuration, he will find > that the deprecated configuration does not effect, which may not be > consistent with expectations. The specific test results can be seen in the PR > above, and we can found the calling order of Configuration#addDeprecation is > very important. 
> > Configuration#addDeprecation is a Hadoop API, looking through the Hadoop > source code, we will find that before creating the Configuration object, the > addDeprecatedKeys() method will be called first: > [https://github.com/apache/hadoop/blob/b93e448f9aa66689f1ce5059f6cdce8add130457/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java#L34] > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352305#comment-17352305 ] Baiqiang Zhao commented on HBASE-25861: --- Thanks [~weichiu] . So after Hadoop3-.3, addDeprecation() can be called after the Configuration is created? The UT that passed before was because it was a bug before Hadoop-3.3? It seems that only need to modify the UT, let me fix it in HBASE-25928. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > When I was solving HBASE-25745 > ([PR3139|https://github.com/apache/hbase/pull/3139]), I found that our use of > Configuration#addDeprecation API was wrong. > > At present, we will call Configuration#addDeprecation in the static block for > the deprecated configuration. But after testing, it is found that this does > not complete backward compatibility. When user upgrades HBase and does not > change the deprecated configuration to the new configuration, he will find > that the deprecated configuration does not effect, which may not be > consistent with expectations. The specific test results can be seen in the PR > above, and we can found the calling order of Configuration#addDeprecation is > very important. > > Configuration#addDeprecation is a Hadoop API, looking through the Hadoop > source code, we will find that before creating the Configuration object, the > addDeprecatedKeys() method will be called first: > [https://github.com/apache/hadoop/blob/b93e448f9aa66689f1ce5059f6cdce8add130457/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java#L34] > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-25928) TestHBaseConfiguration#testDeprecatedConfigurations is broken with Hadoop 3.3
[ https://issues.apache.org/jira/browse/HBASE-25928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao reassigned HBASE-25928: - Assignee: Baiqiang Zhao > TestHBaseConfiguration#testDeprecatedConfigurations is broken with Hadoop 3.3 > - > > Key: HBASE-25928 > URL: https://issues.apache.org/jira/browse/HBASE-25928 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Wei-Chiu Chuang >Assignee: Baiqiang Zhao >Priority: Major > > The test TestHBaseConfiguration#testDeprecatedConfigurations was added > recently by HBASE-25861 to address the usage of Hadoop Configuration > addDeprecations API. > However, the API's behavior was changed to fix a bug. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25745) Deprecate/Rename config `hbase.normalizer.min.region.count` to `hbase.normalizer.merge.min.region.count`
[ https://issues.apache.org/jira/browse/HBASE-25745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351469#comment-17351469 ] Baiqiang Zhao commented on HBASE-25745: --- Put up a PR for branch-2. > Deprecate/Rename config `hbase.normalizer.min.region.count` to > `hbase.normalizer.merge.min.region.count` > > > Key: HBASE-25745 > URL: https://issues.apache.org/jira/browse/HBASE-25745 > Project: HBase > Issue Type: Improvement > Components: master, Normalizer >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Nick Dimiduk >Assignee: Baiqiang Zhao >Priority: Minor > Fix For: 3.0.0-alpha-1 > > > After HBASE-24416, {{hbase.normalizer.min.region.count}} only applies to > merge plans. Let's deprecate/rename the configuration key so that it is clear > in what context it applies, and so that it matches the configuration > structure of related keys. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25534) Honor TableDescriptor settings earlier in normalization
[ https://issues.apache.org/jira/browse/HBASE-25534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17350784#comment-17350784 ] Baiqiang Zhao commented on HBASE-25534: --- Put up a PR for branch-2. > Honor TableDescriptor settings earlier in normalization > --- > > Key: HBASE-25534 > URL: https://issues.apache.org/jira/browse/HBASE-25534 > Project: HBase > Issue Type: Improvement > Components: Normalizer >Affects Versions: 3.0.0-alpha-1, 2.4.2 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.2 > > > -Table can be enabled and disabled merge/split in TableDescriptor, we should > judge before calculating the plan.- > The normalizer configuration can be set by table level. For example, > hbase.normalizer.min.region.count can be set by "alter ‘table’, > CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table > is not set, then use the global configuration which is set in hbase-site.xml. -- This message was sent by Atlassian Jira (v8.3.4#803005)
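The table-level-with-global-fallback lookup described above can be sketched as follows. The configuration key comes from the description; the map-based lookup is an illustrative assumption, not the actual normalizer code.

```java
import java.util.HashMap;
import java.util.Map;

public class NormalizerConfSketch {
    // Prefer the table-level CONFIGURATION override; fall back to the
    // global value from hbase-site.xml when the table does not set it.
    static int minRegionCount(Map<String, String> tableConf, int globalDefault) {
        String v = tableConf.get("hbase.normalizer.min.region.count");
        return v != null ? Integer.parseInt(v) : globalDefault;
    }

    public static void main(String[] args) {
        Map<String, String> withOverride = new HashMap<>();
        // alter 'table', CONFIGURATION => {'hbase.normalizer.min.region.count' => '5'}
        withOverride.put("hbase.normalizer.min.region.count", "5");
        assert minRegionCount(withOverride, 3) == 5; // table-level wins
        assert minRegionCount(new HashMap<>(), 3) == 3; // global fallback
        System.out.println("ok");
    }
}
```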
[jira] [Commented] (HBASE-25682) Add a new command to update the configuration of all RSs in a RSGroup
[ https://issues.apache.org/jira/browse/HBASE-25682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17349771#comment-17349771 ] Baiqiang Zhao commented on HBASE-25682: --- Thanks [~pankajkumar] ! > Add a new command to update the configuration of all RSs in a RSGroup > - > > Key: HBASE-25682 > URL: https://issues.apache.org/jira/browse/HBASE-25682 > Project: HBase > Issue Type: Improvement > Components: Admin, shell >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > Now we support hot update a subset of configuration on a server or all > server. Sometimes we may be necessary to hot update the configuration > according to a rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348095#comment-17348095 ] Baiqiang Zhao commented on HBASE-25861: --- I tested it locally many times and also did not reproduce this failure. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > When I was solving HBASE-25745 > ([PR3139|https://github.com/apache/hbase/pull/3139]), I found that our use of > Configuration#addDeprecation API was wrong. > > At present, we will call Configuration#addDeprecation in the static block for > the deprecated configuration. But after testing, it is found that this does > not complete backward compatibility. When user upgrades HBase and does not > change the deprecated configuration to the new configuration, he will find > that the deprecated configuration does not effect, which may not be > consistent with expectations. The specific test results can be seen in the PR > above, and we can found the calling order of Configuration#addDeprecation is > very important. > > Configuration#addDeprecation is a Hadoop API, looking through the Hadoop > source code, we will find that before creating the Configuration object, the > addDeprecatedKeys() method will be called first: > [https://github.com/apache/hadoop/blob/b93e448f9aa66689f1ce5059f6cdce8add130457/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java#L34] > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347293#comment-17347293 ] Baiqiang Zhao commented on HBASE-25861: --- Thanks for the review [~ndimiduk] ! > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341860#comment-17341860 ] Baiqiang Zhao commented on HBASE-25861: --- And ping [~reidchan], [~vjasani]. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341642#comment-17341642 ] Baiqiang Zhao commented on HBASE-25861: --- Put up a PR. Ping [~zhangduo], [~stack], [~ndimiduk], if you have time, can you take a look at this issue? > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17340541#comment-17340541 ] Baiqiang Zhao edited comment on HBASE-25861 at 5/7/21, 6:32 AM: Assign this issue to me temporarily, and I will try to fix it. Everyone is welcome to provide solutions and take away this issue. was (Author: deanz): Assign this issue to me temporarily. Everyone is welcome to provide solutions and take away this issue. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25861) Correct the usage of Configuration#addDeprecation
[ https://issues.apache.org/jira/browse/HBASE-25861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17340541#comment-17340541 ] Baiqiang Zhao commented on HBASE-25861: --- Assign this issue to me temporarily. Everyone is welcome to provide solutions and take away this issue. > Correct the usage of Configuration#addDeprecation > - > > Key: HBASE-25861 > URL: https://issues.apache.org/jira/browse/HBASE-25861 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25861) Correct the usage of Configuration#addDeprecation
Baiqiang Zhao created HBASE-25861: - Summary: Correct the usage of Configuration#addDeprecation Key: HBASE-25861 URL: https://issues.apache.org/jira/browse/HBASE-25861 Project: HBase Issue Type: Bug Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao When I was solving HBASE-25745 ([PR3139|https://github.com/apache/hbase/pull/3139]), I found that our use of the Configuration#addDeprecation API was wrong. At present, we call Configuration#addDeprecation in a static block for each deprecated configuration key. But testing shows that this does not provide full backward compatibility. When a user upgrades HBase and does not change the deprecated configuration to the new configuration, they will find that the deprecated configuration does not take effect, which may not be consistent with expectations. The specific test results can be seen in the PR above, and they show that the calling order of Configuration#addDeprecation is very important. Configuration#addDeprecation is a Hadoop API; looking through the Hadoop source code, we find that before the Configuration object is created, the addDeprecatedKeys() method is called first: [https://github.com/apache/hadoop/blob/b93e448f9aa66689f1ce5059f6cdce8add130457/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java#L34] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25798) typo in MetricsAssertHelper
[ https://issues.apache.org/jira/browse/HBASE-25798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327049#comment-17327049 ] Baiqiang Zhao commented on HBASE-25798: --- Thanks [~zhangduo] for review! > typo in MetricsAssertHelper > --- > > Key: HBASE-25798 > URL: https://issues.apache.org/jira/browse/HBASE-25798 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3, 2.3.6 > > > "boolean true id counter metric exists." -> "boolean true if counter metric > exists." -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25798) typo in MetricsAssertHelper
Baiqiang Zhao created HBASE-25798: - Summary: typo in MetricsAssertHelper Key: HBASE-25798 URL: https://issues.apache.org/jira/browse/HBASE-25798 Project: HBase Issue Type: Improvement Affects Versions: 3.0.0-alpha-1, 2.5.0 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao "boolean true id counter metric exists." -> "boolean true if counter metric exists." -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25682) Add a new command to update the configuration of all RSs in a RSGroup
[ https://issues.apache.org/jira/browse/HBASE-25682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317770#comment-17317770 ] Baiqiang Zhao commented on HBASE-25682: --- Thanks [~pankajkumar] for the review. Sir [~stack], do you want to review? > Add a new command to update the configuration of all RSs in a RSGroup > - > > Key: HBASE-25682 > URL: https://issues.apache.org/jira/browse/HBASE-25682 > Project: HBase > Issue Type: Improvement > Components: Admin, shell >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > Now we support hot update a subset of configuration on a server or all > server. Sometimes we may be necessary to hot update the configuration > according to a rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25745) Deprecate/Rename config `hbase.normalizer.min.region.count` to `hbase.normalizer.merge.min.region.count`
[ https://issues.apache.org/jira/browse/HBASE-25745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317752#comment-17317752 ] Baiqiang Zhao commented on HBASE-25745: --- Thanks [~ndimiduk], put up a PR for the master branch. By the way, I find that the following log always prints when BLOCKCACHE_BLOCKSIZE_KEY is configured but DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY is not configured in the conf file. {code:java} if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) { LOG.warn("The config key {} is deprecated now, instead please use {}. In future release " + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY); } {code} I think this does not meet expectations. The cause of this behavior is the following call: {code:java} Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY); {code} > Deprecate/Rename config `hbase.normalizer.min.region.count` to > `hbase.normalizer.merge.min.region.count` > > > Key: HBASE-25745 > URL: https://issues.apache.org/jira/browse/HBASE-25745 > Project: HBase > Issue Type: Improvement > Components: master, Normalizer >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Nick Dimiduk >Assignee: Baiqiang Zhao >Priority: Minor > > After HBASE-24416, {{hbase.normalizer.min.region.count}} only applies to > merge plans. Let's deprecate/rename the configuration key so that it is clear > in what context it applies, and so that it matches the configuration > structure of related keys. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-25745) Deprecate/Rename config `hbase.normalizer.min.region.count` to `hbase.normalizer.merge.min.region.count`
[ https://issues.apache.org/jira/browse/HBASE-25745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao reassigned HBASE-25745: - Assignee: Baiqiang Zhao > Deprecate/Rename config `hbase.normalizer.min.region.count` to > `hbase.normalizer.merge.min.region.count` > > > Key: HBASE-25745 > URL: https://issues.apache.org/jira/browse/HBASE-25745 > Project: HBase > Issue Type: Improvement > Components: master, Normalizer >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Nick Dimiduk >Assignee: Baiqiang Zhao >Priority: Minor > > After HBASE-24416, {{hbase.normalizer.min.region.count}} only applies to > merge plans. Let's deprecate/rename the configuration key so that it is clear > in what context it applies, and so that it matches the configuration > structure of related keys. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25745) Deprecate/Rename config `hbase.normalizer.min.region.count` to `hbase.normalizer.merge.min.region.count`
[ https://issues.apache.org/jira/browse/HBASE-25745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17316906#comment-17316906 ] Baiqiang Zhao commented on HBASE-25745: --- Hi [~ndimiduk], are you doing this work? If not, can I do it? > Deprecate/Rename config `hbase.normalizer.min.region.count` to > `hbase.normalizer.merge.min.region.count` > > > Key: HBASE-25745 > URL: https://issues.apache.org/jira/browse/HBASE-25745 > Project: HBase > Issue Type: Improvement > Components: master, Normalizer >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Nick Dimiduk >Priority: Minor > > After HBASE-24416, {{hbase.normalizer.min.region.count}} only applies to > merge plans. Let's deprecate/rename the configuration key so that it is clear > in what context it applies, and so that it matches the configuration > structure of related keys. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25687) Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1
[ https://issues.apache.org/jira/browse/HBASE-25687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17315309#comment-17315309 ] Baiqiang Zhao commented on HBASE-25687: --- Hi [~stack] , the rebuild for branch-2 seems good now. > Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 > and branch-1 > > > Key: HBASE-25687 > URL: https://issues.apache.org/jira/browse/HBASE-25687 > Project: HBase > Issue Type: Sub-task >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25687) Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1
[ https://issues.apache.org/jira/browse/HBASE-25687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1731#comment-1731 ] Baiqiang Zhao commented on HBASE-25687: --- Thanks [~stack] for review. > Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 > and branch-1 > > > Key: HBASE-25687 > URL: https://issues.apache.org/jira/browse/HBASE-25687 > Project: HBase > Issue Type: Sub-task >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25682) Add a new command to update the configuration of all RSs in a RSGroup
[ https://issues.apache.org/jira/browse/HBASE-25682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17311109#comment-17311109 ] Baiqiang Zhao commented on HBASE-25682: --- Thank you [~stack]. My plan is branch-2 and master, is this ok? The PR for branch-2 will be put up soon. > Add a new command to update the configuration of all RSs in a RSGroup > - > > Key: HBASE-25682 > URL: https://issues.apache.org/jira/browse/HBASE-25682 > Project: HBase > Issue Type: Improvement > Components: Admin, shell >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > Now we support hot update a subset of configuration on a server or all > server. Sometimes we may be necessary to hot update the configuration > according to a rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25682) Add a new command to update the configuration of all RSs in a RSGroup
[ https://issues.apache.org/jira/browse/HBASE-25682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25682: -- Affects Version/s: 2.5.0 3.0.0-alpha-1 > Add a new command to update the configuration of all RSs in a RSGroup > - > > Key: HBASE-25682 > URL: https://issues.apache.org/jira/browse/HBASE-25682 > Project: HBase > Issue Type: Improvement > Components: Admin, shell >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > Now we support hot update a subset of configuration on a server or all > server. Sometimes we may be necessary to hot update the configuration > according to a rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25693) NPE getting metrics from standby masters (MetricsMasterWrapperImpl.getMergePlanCount)
[ https://issues.apache.org/jira/browse/HBASE-25693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308297#comment-17308297 ] Baiqiang Zhao commented on HBASE-25693: --- This is duplicate of HBASE-25480. So should HBASE-25480 be closed? > NPE getting metrics from standby masters > (MetricsMasterWrapperImpl.getMergePlanCount) > - > > Key: HBASE-25693 > URL: https://issues.apache.org/jira/browse/HBASE-25693 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.4.2 >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > 2021-03-24 23:26:15,828 ERROR [Timer for 'HBase' metrics system] > impl.MetricsSourceAdapter: Error getting metrics from source Master,sub=Server > java.lang.NullPointerException > at > org.apache.hadoop.hbase.master.MetricsMasterWrapperImpl.getMergePlanCount(MetricsMasterWrapperImpl.java:58) > at > org.apache.hadoop.hbase.master.MetricsMasterSourceImpl.getMetrics(MetricsMasterSourceImpl.java:90) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368) > at java.base/java.util.TimerThread.mainLoop(Timer.java:556) > at java.base/java.util.TimerThread.run(Timer.java:506) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25687) Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1
[ https://issues.apache.org/jira/browse/HBASE-25687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17305313#comment-17305313 ] Baiqiang Zhao commented on HBASE-25687: --- Put up a PR for branch-2. branch-1 will wait until HBASE-25677 is backport to branch-1. > Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 > and branch-1 > > > Key: HBASE-25687 > URL: https://issues.apache.org/jira/browse/HBASE-25687 > Project: HBase > Issue Type: Sub-task >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25681) Add a switch for server/table queryMeter
[ https://issues.apache.org/jira/browse/HBASE-25681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17305309#comment-17305309 ] Baiqiang Zhao commented on HBASE-25681: --- Thanks for the review and suggestion [~stack] . Create a subtask to do backport. > Add a switch for server/table queryMeter > > > Key: HBASE-25681 > URL: https://issues.apache.org/jira/browse/HBASE-25681 > Project: HBase > Issue Type: Sub-task >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25687) Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1
Baiqiang Zhao created HBASE-25687: - Summary: Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1 Key: HBASE-25687 URL: https://issues.apache.org/jira/browse/HBASE-25687 Project: HBase Issue Type: Sub-task Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25682) Add a new command to update the configuration of all RSs in a RSGroup
Baiqiang Zhao created HBASE-25682: - Summary: Add a new command to update the configuration of all RSs in a RSGroup Key: HBASE-25682 URL: https://issues.apache.org/jira/browse/HBASE-25682 Project: HBase Issue Type: Improvement Components: Admin, shell Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Now we support hot-updating a subset of the configuration on a single server or on all servers. Sometimes it may be necessary to hot-update the configuration for a whole rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22975) Add read and write QPS metrics at server level and table level
[ https://issues.apache.org/jira/browse/HBASE-22975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304636#comment-17304636 ] Baiqiang Zhao commented on HBASE-22975: --- Hi [~huaxiangsun], created a subtask to add a switch. > Add read and write QPS metrics at server level and table level > -- > > Key: HBASE-22975 > URL: https://issues.apache.org/jira/browse/HBASE-22975 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 2.2.0, 1.4.10 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Minor > Fix For: 3.0.0-alpha-1, 1.5.0, 1.4.11, 2.1.7, 2.2.2 > > Attachments: HBASE-22975.branch-2.0001.patch, readQPS.png, > writeQPS.png > > > Use HBase‘s existing class DropwizardMeter to collect read and write QPS. The > collected location is the same as metrics readRequestsCount and > writeRequestsCount. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25681) Add a switch for server/table queryMeter
Baiqiang Zhao created HBASE-25681: - Summary: Add a switch for server/table queryMeter Key: HBASE-25681 URL: https://issues.apache.org/jira/browse/HBASE-25681 Project: HBase Issue Type: Sub-task Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25597) Add row info in Exception when cell size exceeds maxCellSize
[ https://issues.apache.org/jira/browse/HBASE-25597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17302166#comment-17302166 ] Baiqiang Zhao commented on HBASE-25597: --- Thank you [~stack] ! > Add row info in Exception when cell size exceeds maxCellSize > > > Key: HBASE-25597 > URL: https://issues.apache.org/jira/browse/HBASE-25597 > Project: HBase > Issue Type: Improvement >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.3 > > > When cell size exceeds maxCellSize(default is 10M), client will get a > DoNotRetryIOException, code as below: > {code:java} > private void checkCellSizeLimit(final HRegion r, final Mutation m) throws > IOException { > if (r.maxCellSize > 0) { > CellScanner cells = m.cellScanner(); > while (cells.advance()) { > int size = PrivateCellUtil.estimatedSerializedSizeOf(cells.current()); > if (size > r.maxCellSize) { > String msg = "Cell with size " + size + " exceeds limit of " + > r.maxCellSize + " bytes"; > LOG.debug(msg); > throw new DoNotRetryIOException(msg); > } > } > } > } > {code} > There is no row related information, which makes troubleshooting difficult. -- This message was sent by Atlassian Jira (v8.3.4#803005)
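One way to add the row information the issue asks for is to render the row key into the exception message, escaping non-printable bytes the way HBase's `Bytes.toStringBinary` does. The sketch below is a self-contained toy (the helper names `toStringBinary` and `cellSizeMessage` are hypothetical here, not the committed patch):

```java
public class CellSizeMessageDemo {
    // Render a byte[] row key like Bytes.toStringBinary: printable ASCII
    // stays as-is, everything else becomes a \xNN escape, so binary row
    // keys remain readable in logs and exception messages.
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) {
            int ch = x & 0xFF;
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append(String.format("\\x%02X", ch));
            }
        }
        return sb.toString();
    }

    // The improved message: same size/limit info as before, plus the row,
    // which is what makes troubleshooting the offending mutation possible.
    static String cellSizeMessage(int size, int limit, byte[] row) {
        return "Cell[row=" + toStringBinary(row) + "] with size " + size
            + " exceeds limit of " + limit + " bytes";
    }

    public static void main(String[] args) {
        byte[] row = {'r', 'k', 0x01};
        System.out.println(cellSizeMessage(11534336, 10485760, row));
    }
}
```

In `checkCellSizeLimit` itself, the row would come from `cells.current()` via the cell's row accessor; the point is only that the message should identify the row, not just the size.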
[jira] [Commented] (HBASE-25655) Add a new option in PE to indicate the current number of rows in the test table
[ https://issues.apache.org/jira/browse/HBASE-25655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17300136#comment-17300136 ] Baiqiang Zhao commented on HBASE-25655: --- Hi [~anoop.hbase], I have put up a PR. Can you have a look if you have time? If you have any doubts, please let me know, thanks. > Add a new option in PE to indicate the current number of rows in the test > table > --- > > Key: HBASE-25655 > URL: https://issues.apache.org/jira/browse/HBASE-25655 > Project: HBase > Issue Type: Improvement > Components: PE >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When we have written 10 rows in TestTable with 10 preSplits. Then we want > to test asyncRandomRead, with 10 threads, per thread read 1000 rows. But the > range of all read keys is in [0, 1], all in the first region. It may > cause hotspot problem, and the result is not accurate. > We can use --size and --rows in randomRead and randomSeekScan at the same > time, so that the range of reading reaches the entire data set. But > asyncRandomRead, scanRangeXX, sequentialRead and all the write still have hot > spot problem. > This issue add a new option "initRows" to solve this problem. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25655) Add a new option in PE to indicate the current number of rows in the test table
[ https://issues.apache.org/jira/browse/HBASE-25655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25655: -- Description: When we have written 10 rows in TestTable with 10 preSplits. Then we want to test asyncRandomRead, with 10 threads, per thread read 1000 rows. But the range of all read keys is in [0, 1], all in the first region. It may cause hotspot problem, and the result is not accurate. We can use --size and --rows in randomRead and randomSeekScan at the same time, so that the range of reading reaches the entire data set. But asyncRandomRead, scanRangeXX, sequentialRead and all the write still have hot spot problem. This issue add a new option "initRows" to solve this problem. was: When we have written 10 rows in TestTable with 10 preSplits. Then we want to test randomRead with 10 threads, per thread read 1000 rows. But the range of all read keys is in [0, 1], all in the first region. It may cause hotspot problem, and the result is not accurate. This issue add a new option "initRows" to solve this problem. > Add a new option in PE to indicate the current number of rows in the test > table > --- > > Key: HBASE-25655 > URL: https://issues.apache.org/jira/browse/HBASE-25655 > Project: HBase > Issue Type: Improvement > Components: PE >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When we have written 10 rows in TestTable with 10 preSplits. Then we want > to test asyncRandomRead, with 10 threads, per thread read 1000 rows. But the > range of all read keys is in [0, 1], all in the first region. It may > cause hotspot problem, and the result is not accurate. > We can use --size and --rows in randomRead and randomSeekScan at the same > time, so that the range of reading reaches the entire data set. But > asyncRandomRead, scanRangeXX, sequentialRead and all the write still have hot > spot problem. > This issue add a new option "initRows" to solve this problem. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25655) Add a new option in PE to indicate the current number of rows in the test table
[ https://issues.apache.org/jira/browse/HBASE-25655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298631#comment-17298631 ] Baiqiang Zhao commented on HBASE-25655: --- After adding the initRows option, rows and size can be completely mutually exclusive. Only one of rows and size can be selected to control the perClientRunRows. > Add a new option in PE to indicate the current number of rows in the test > table > --- > > Key: HBASE-25655 > URL: https://issues.apache.org/jira/browse/HBASE-25655 > Project: HBase > Issue Type: Improvement > Components: PE >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When we have written 10 rows in TestTable with 10 preSplits. Then we want > to test randomRead with 10 threads, per thread read 1000 rows. But the range > of all read keys is in [0, 1], all in the first region. It may cause > hotspot problem, and the result is not accurate. > This issue add a new option "initRows" to solve this problem. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HBASE-25655) Add a new option in PE to indicate the current number of rows in the test table
[ https://issues.apache.org/jira/browse/HBASE-25655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298627#comment-17298627 ] Baiqiang Zhao edited comment on HBASE-25655 at 3/10/21, 8:19 AM: - Thanks [~anoop.hbase]. Yes, use --size can not hot spot one or few regions for randomRead and randomSeekScan. My example may be a bit special. But what about asyncRandomRead, scanRangeXX etc. was (Author: deanz): Thanks [~anoop.hbase]. Yes, use --size can not hot spot one or few regions for randomRead and randomSeekScan. My example may be a bit special. But what about asyncRandomRead, append, checkAndPut, scanRangeXX etc. > Add a new option in PE to indicate the current number of rows in the test > table > --- > > Key: HBASE-25655 > URL: https://issues.apache.org/jira/browse/HBASE-25655 > Project: HBase > Issue Type: Improvement > Components: PE >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When we have written 10 rows in TestTable with 10 preSplits. Then we want > to test randomRead with 10 threads, per thread read 1000 rows. But the range > of all read keys is in [0, 1], all in the first region. It may cause > hotspot problem, and the result is not accurate. > This issue add a new option "initRows" to solve this problem. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25655) Add a new option in PE to indicate the current number of rows in the test table
[ https://issues.apache.org/jira/browse/HBASE-25655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298627#comment-17298627 ] Baiqiang Zhao commented on HBASE-25655: --- Thanks [~anoop.hbase]. Yes, use --size can not hot spot one or few regions for randomRead and randomSeekScan. My example may be a bit special. But what about asyncRandomRead, append, checkAndPut, scanRangeXX etc. > Add a new option in PE to indicate the current number of rows in the test > table > --- > > Key: HBASE-25655 > URL: https://issues.apache.org/jira/browse/HBASE-25655 > Project: HBase > Issue Type: Improvement > Components: PE >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When we have written 10 rows in TestTable with 10 preSplits. Then we want > to test randomRead with 10 threads, per thread read 1000 rows. But the range > of all read keys is in [0, 1], all in the first region. It may cause > hotspot problem, and the result is not accurate. > This issue add a new option "initRows" to solve this problem. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25655) Add a new option in PE to indicate the current number of rows in the test table
Baiqiang Zhao created HBASE-25655: - Summary: Add a new option in PE to indicate the current number of rows in the test table Key: HBASE-25655 URL: https://issues.apache.org/jira/browse/HBASE-25655 Project: HBase Issue Type: Improvement Components: PE Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Suppose we have written 10 rows in TestTable with 10 preSplits, and then want to test randomRead with 10 threads, each thread reading 1000 rows. The range of all read keys is [0, 1], all in the first region. This may cause a hotspot problem, and the result is not accurate. This issue adds a new option "initRows" to solve this problem. -- This message was sent by Atlassian Jira (v8.3.4#803005)
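The hotspot described above can be sketched with a simplified model (illustrative Java, not actual PerformanceEvaluation code; the row counts and region count are assumptions): when each client draws random keys only in [0, perClientRunRows) but the presplit table already holds far more rows, every sampled key maps to the first region.

```java
import java.util.Random;

// Simplified model of why random reads hotspot the first region when the
// key range defaults to perClientRunRows instead of the table's real size.
public class KeyRangeSketch {
    // Region index for a key, assuming totalRows split evenly across regions.
    static int regionFor(long key, long totalRows, int regions) {
        return (int) (key * regions / totalRows);
    }

    public static void main(String[] args) {
        long totalRows = 10_000_000L;  // rows already in the table (assumed)
        int regions = 10;              // presplit regions (assumed)
        long perClientRunRows = 1000L; // keys drawn in [0, perClientRunRows)
        Random rnd = new Random(42);
        for (int i = 0; i < 1000; i++) {
            long key = (long) (rnd.nextDouble() * perClientRunRows);
            // Every sampled key lands in region 0: a hotspot.
            assert regionFor(key, totalRows, regions) == 0;
        }
        System.out.println("all sampled keys hit region 0");
    }
}
```

An initRows-style option would let the key range cover the table's actual row count, spreading reads across all regions.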
[jira] [Created] (HBASE-25597) Add row info in Exception when cell size exceeds maxCellSize
Baiqiang Zhao created HBASE-25597: - Summary: Add row info in Exception when cell size exceeds maxCellSize Key: HBASE-25597 URL: https://issues.apache.org/jira/browse/HBASE-25597 Project: HBase Issue Type: Improvement Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao When a cell's size exceeds maxCellSize (default is 10 MB), the client will get a DoNotRetryIOException; the relevant code is below: {code:java} private void checkCellSizeLimit(final HRegion r, final Mutation m) throws IOException { if (r.maxCellSize > 0) { CellScanner cells = m.cellScanner(); while (cells.advance()) { int size = PrivateCellUtil.estimatedSerializedSizeOf(cells.current()); if (size > r.maxCellSize) { String msg = "Cell with size " + size + " exceeds limit of " + r.maxCellSize + " bytes"; LOG.debug(msg); throw new DoNotRetryIOException(msg); } } } } {code} The exception contains no row-related information, which makes troubleshooting difficult. -- This message was sent by Atlassian Jira (v8.3.4#803005)
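A minimal sketch of the improvement this issue asks for, with HBase types simplified away (the method and message format here are illustrative, not the committed patch): include the offending cell's row in the exception message so the mutation can be located.

```java
import java.nio.charset.StandardCharsets;

// Standalone sketch: build the rejection message with the cell's row
// included, so the offending mutation can be found during troubleshooting.
public class CellSizeCheckSketch {
    static String buildMessage(int size, int maxCellSize, byte[] row) {
        return "Cell[row=" + new String(row, StandardCharsets.UTF_8)
            + "] with size " + size + " exceeds limit of " + maxCellSize + " bytes";
    }

    public static void main(String[] args) {
        byte[] row = "user#123".getBytes(StandardCharsets.UTF_8); // hypothetical row key
        System.out.println(buildMessage(11 << 20, 10 << 20, row));
    }
}
```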
[jira] [Commented] (HBASE-25519) BLOCKSIZE needs to support pretty print
[ https://issues.apache.org/jira/browse/HBASE-25519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17281833#comment-17281833 ] Baiqiang Zhao commented on HBASE-25519: --- Thanks for review [~stack]. PR for branch-2 has been put up. > BLOCKSIZE needs to support pretty print > --- > > Key: HBASE-25519 > URL: https://issues.apache.org/jira/browse/HBASE-25519 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1 > > > HBASE-25439 added a new unit in PrettyPrint.Unit, the BLOCKSIZE in CF should > also support this feature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25554) NPE when init RegionMover
[ https://issues.apache.org/jira/browse/HBASE-25554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17279430#comment-17279430 ] Baiqiang Zhao commented on HBASE-25554: --- Ping [~vjasani] > NPE when init RegionMover > - > > Key: HBASE-25554 > URL: https://issues.apache.org/jira/browse/HBASE-25554 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When use graceful_stop.sh or command "${HBASE_HOME}/bin/hbase > org.apache.hadoop.hbase.util.RegionMover -h", we can get NPE: > {code:java} > Exception in thread "main" java.lang.NullPointerExceptionException in thread > "main" java.lang.NullPointerException at > org.apache.hadoop.hbase.master.RackManager.(RackManager.java:46) at > org.apache.hadoop.hbase.util.RegionMover.(RegionMover.java:128) at > org.apache.hadoop.hbase.util.RegionMover.main(RegionMover.java:792) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25554) NPE when init RegionMover
[ https://issues.apache.org/jira/browse/HBASE-25554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17279461#comment-17279461 ] Baiqiang Zhao commented on HBASE-25554: --- Thanks for the super quick review [~vjasani] > NPE when init RegionMover > - > > Key: HBASE-25554 > URL: https://issues.apache.org/jira/browse/HBASE-25554 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When use graceful_stop.sh or command "${HBASE_HOME}/bin/hbase > org.apache.hadoop.hbase.util.RegionMover -h", we can get NPE: > {code:java} > Exception in thread "main" java.lang.NullPointerExceptionException in thread > "main" java.lang.NullPointerException at > org.apache.hadoop.hbase.master.RackManager.(RackManager.java:46) at > org.apache.hadoop.hbase.util.RegionMover.(RegionMover.java:128) at > org.apache.hadoop.hbase.util.RegionMover.main(RegionMover.java:792) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25554) NPE when init RegionMover
[ https://issues.apache.org/jira/browse/HBASE-25554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25554: -- Summary: NPE when init RegionMover (was: NPE when RegionMover) > NPE when init RegionMover > - > > Key: HBASE-25554 > URL: https://issues.apache.org/jira/browse/HBASE-25554 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > When use graceful_stop.sh or command "${HBASE_HOME}/bin/hbase > org.apache.hadoop.hbase.util.RegionMover -h", we can get NPE: > {code:java} > Exception in thread "main" java.lang.NullPointerExceptionException in thread > "main" java.lang.NullPointerException at > org.apache.hadoop.hbase.master.RackManager.(RackManager.java:46) at > org.apache.hadoop.hbase.util.RegionMover.(RegionMover.java:128) at > org.apache.hadoop.hbase.util.RegionMover.main(RegionMover.java:792) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25554) NPE when RegionMover
Baiqiang Zhao created HBASE-25554: - Summary: NPE when RegionMover Key: HBASE-25554 URL: https://issues.apache.org/jira/browse/HBASE-25554 Project: HBase Issue Type: Bug Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao When using graceful_stop.sh or the command "${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.util.RegionMover -h", we get an NPE: {code:java} Exception in thread "main" java.lang.NullPointerException at org.apache.hadoop.hbase.master.RackManager.<init>(RackManager.java:46) at org.apache.hadoop.hbase.util.RegionMover.<init>(RegionMover.java:128) at org.apache.hadoop.hbase.util.RegionMover.main(RegionMover.java:792) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25534) Improve the configuration of Normalizer
[ https://issues.apache.org/jira/browse/HBASE-25534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25534: -- Description: -Table can be enabled and disabled merge/split in TableDescriptor, we should judge before calculating the plan.- The normalizer configuration can be set by table level. For example, hbase.normalizer.min.region.count can be set by "alter ‘table’, CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table is not set, then use the global configuration which is set in hbase-site.xml. was: Table can be enabled and disabled merge/split in TableDescriptor, we should judge before calculating the plan. Part of the configuration can be set by table level. For example, hbase.normalizer.min.region.count can be set by "alter ‘table’, CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table is not set, then use the global configuration which is set in hbase-site.xml. > Improve the configuration of Normalizer > > > Key: HBASE-25534 > URL: https://issues.apache.org/jira/browse/HBASE-25534 > Project: HBase > Issue Type: Improvement > Components: Normalizer >Affects Versions: 3.0.0-alpha-1, 2.4.2 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > -Table can be enabled and disabled merge/split in TableDescriptor, we should > judge before calculating the plan.- > The normalizer configuration can be set by table level. For example, > hbase.normalizer.min.region.count can be set by "alter ‘table’, > CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table > is not set, then use the global configuration which is set in hbase-site.xml. -- This message was sent by Atlassian Jira (v8.3.4#803005)
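The table-level override described above can be sketched as a simple lookup order (plain maps stand in for the TableDescriptor CONFIGURATION values and hbase-site.xml; the key name is the real one, the helper is illustrative): the table-level value wins, then the site value, then a hard default.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed lookup order: table-level CONFIGURATION value
// overrides the global hbase-site.xml setting; otherwise fall back.
public class NormalizerConfSketch {
    static int minRegionCount(Map<String, String> tableConf,
                              Map<String, String> siteConf, int hardDefault) {
        String key = "hbase.normalizer.min.region.count";
        String v = tableConf.getOrDefault(key, siteConf.get(key));
        return v == null ? hardDefault : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> site = new HashMap<>();
        site.put("hbase.normalizer.min.region.count", "3");
        Map<String, String> table = new HashMap<>();
        table.put("hbase.normalizer.min.region.count", "5"); // set via alter 'table'
        System.out.println(minRegionCount(table, site, 3));           // table-level wins
        System.out.println(minRegionCount(new HashMap<>(), site, 3)); // falls back to site
    }
}
```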
[jira] [Commented] (HBASE-25534) Improve the configuration of Normalizer
[ https://issues.apache.org/jira/browse/HBASE-25534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17276262#comment-17276262 ] Baiqiang Zhao commented on HBASE-25534: --- Hi [~ndimiduk], can you have a look if you have time? > Improve the configuration of Normalizer > > > Key: HBASE-25534 > URL: https://issues.apache.org/jira/browse/HBASE-25534 > Project: HBase > Issue Type: Improvement > Components: Normalizer >Affects Versions: 3.0.0-alpha-1, 2.4.2 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > Table can be enabled and disabled merge/split in TableDescriptor, we should > judge before calculating the plan. > Part of the configuration can be set by table level. For example, > hbase.normalizer.min.region.count can be set by "alter ‘table’, > CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table > is not set, then use the global configuration which is set in hbase-site.xml. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25533) The metadata of the table and family should not be an empty string
[ https://issues.apache.org/jira/browse/HBASE-25533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17274109#comment-17274109 ] Baiqiang Zhao commented on HBASE-25533: --- Will raise RPs for other branches, thanks [~vjasani] > The metadata of the table and family should not be an empty string > --- > > Key: HBASE-25533 > URL: https://issues.apache.org/jira/browse/HBASE-25533 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > If the metadata of the table is set to null, the metadata will be removed. > The code is: > [https://github.com/apache/hbase/blob/b07549febb462b072792659051c64bb54d122771/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java#L721] > But if set metadata as empty string, serious errors may occur. Some metadata > is number, > it will throw a NumberFormatException when converting empty string to a > number. If the exception is thrown when the region is initialized, all > regions of the table will be in RIT. > The following command can reproduced this issue. *Note: Please execute in the > test environment.* > {code:java} > alter 'test_table', CONFIGURATION => > {'hbase.rs.cachecompactedblocksonwrite.threshold' => ''} > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25534) Improve the configuration of Normalizer
Baiqiang Zhao created HBASE-25534: - Summary: Improve the configuration of Normalizer Key: HBASE-25534 URL: https://issues.apache.org/jira/browse/HBASE-25534 Project: HBase Issue Type: Improvement Components: Normalizer Affects Versions: 3.0.0-alpha-1, 2.4.2 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Table can be enabled and disabled merge/split in TableDescriptor, we should judge before calculating the plan. Part of the configuration can be set by table level. For example, hbase.normalizer.min.region.count can be set by "alter ‘table’, CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table is not set, then use the global configuration which is set in hbase-site.xml. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25533) The metadata of the table and family should not be an empty string
[ https://issues.apache.org/jira/browse/HBASE-25533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272535#comment-17272535 ] Baiqiang Zhao commented on HBASE-25533: --- My current idea is to remove the metadata if it is set to an empty string, just like null. Because empty string is meaningless. And maybe HBCK should support to repair this extreme situation. Ping [~stack] [~zhangduo] [~vjasani] > The metadata of the table and family should not be an empty string > --- > > Key: HBASE-25533 > URL: https://issues.apache.org/jira/browse/HBASE-25533 > Project: HBase > Issue Type: Bug >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > If the metadata of the table is set to null, the metadata will be removed. > The code is: > [https://github.com/apache/hbase/blob/b07549febb462b072792659051c64bb54d122771/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java#L721] > But if set metadata as empty string, serious errors may occur. Some metadata > is number, > it will throw a NumberFormatException when converting empty string to a > number. If the exception is thrown when the region is initialized, all > regions of the table will be in RIT. > The following command can reproduced this issue. *Note: Please execute in the > test environment.* > {code:java} > alter 'test_table', CONFIGURATION => > {'hbase.rs.cachecompactedblocksonwrite.threshold' => ''} > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25533) The metadata of the table and family should not be an empty string
Baiqiang Zhao created HBASE-25533: - Summary: The metadata of the table and family should not be an empty string Key: HBASE-25533 URL: https://issues.apache.org/jira/browse/HBASE-25533 Project: HBase Issue Type: Bug Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao If the metadata of the table is set to null, the metadata will be removed. The code is: [https://github.com/apache/hbase/blob/b07549febb462b072792659051c64bb54d122771/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java#L721] But if the metadata is set to an empty string, serious errors may occur. Some metadata values are numbers, and converting an empty string to a number throws a NumberFormatException. If the exception is thrown when a region is initialized, all regions of the table will be in RIT. The following command can reproduce this issue. *Note: Please execute in a test environment.* {code:java} alter 'test_table', CONFIGURATION => {'hbase.rs.cachecompactedblocksonwrite.threshold' => ''} {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
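The failure mode and the proposed handling can be sketched in standalone Java (`parseOrUnset` is a hypothetical helper, not HBase code): an empty string fed to Integer.parseInt throws NumberFormatException, so treating empty like null, i.e. removing the key, avoids the RIT scenario.

```java
public class EmptyMetadataSketch {
    // Proposed handling: treat an empty string like null, i.e. the key is
    // removed and the caller falls back to its default.
    static Integer parseOrUnset(String value) {
        if (value == null || value.isEmpty()) {
            return null;
        }
        return Integer.parseInt(value);
    }

    public static void main(String[] args) {
        try {
            Integer.parseInt(""); // what the server effectively does today
        } catch (NumberFormatException e) {
            System.out.println("empty string -> NumberFormatException");
        }
        System.out.println(parseOrUnset("") == null); // treated as unset
    }
}
```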
[jira] [Commented] (HBASE-25513) When the table is turned on normalize, the first region may not be merged even the size is 0
[ https://issues.apache.org/jira/browse/HBASE-25513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269226#comment-17269226 ] Baiqiang Zhao commented on HBASE-25513: --- Thanks [~ndimiduk]. Normalizer is a nice feature, and we backport this feature to our production cluster. And in my view, there may be two points that can be optimized: 1. Table can be enabled and disabled merge/split in TableDescriptor, we should judge before calculating the plan 2. Part of the configuration can be set by table level. For example, hbase.normalizer.min.region.count can be set by "alter ‘table’, CONFIGURATION=>\{'hbase.normalizer.min.region.count' => '5'}". If the table is not set, then use the global configuration which is set in hbase-site.xml. How do you feel? > When the table is turned on normalize, the first region may not be merged > even the size is 0 > > > Key: HBASE-25513 > URL: https://issues.apache.org/jira/browse/HBASE-25513 > Project: HBase > Issue Type: Bug > Components: Normalizer >Affects Versions: 3.0.0-alpha-1, 2.4.1 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.2 > > > Suppose a table has 8 regions, the sizes are [0, 10, 1, 0, 9, 0, 12, 0], the > average region size is 4, and split is disabled. > The current Normalizer can only get three merge plans (use size to represent > region): > [1, 0], [9, 0],[12, 0] > It can not merge the first region, even it's size is 0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25522) Remove deprecated methods in ReplicationPeerConfig
Baiqiang Zhao created HBASE-25522: - Summary: Remove deprecated methods in ReplicationPeerConfig Key: HBASE-25522 URL: https://issues.apache.org/jira/browse/HBASE-25522 Project: HBase Issue Type: Sub-task Affects Versions: 3.0.0-alpha-1 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Fix For: 3.0.0-alpha-1 HBASE-19576 introduce ReplicationPeerConfigBuilder to set the value of ReplicationPeerConfig. The deprecated methods in ReplicationPeerConfig should be removed in 3.x. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25519) BLOCKSIZE needs to support pretty print
[ https://issues.apache.org/jira/browse/HBASE-25519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17267811#comment-17267811 ] Baiqiang Zhao commented on HBASE-25519: --- Ping [~stack] > BLOCKSIZE needs to support pretty print > --- > > Key: HBASE-25519 > URL: https://issues.apache.org/jira/browse/HBASE-25519 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > HBASE-25439 added a new unit in PrettyPrint.Unit, the BLOCKSIZE in CF should > also support this feature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25519) BLOCKSIZE needs to support pretty print
Baiqiang Zhao created HBASE-25519: - Summary: BLOCKSIZE needs to support pretty print Key: HBASE-25519 URL: https://issues.apache.org/jira/browse/HBASE-25519 Project: HBase Issue Type: Improvement Affects Versions: 3.0.0-alpha-1, 2.5.0 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao HBASE-25439 added a new unit in PrettyPrint.Unit, the BLOCKSIZE in CF should also support this feature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25513) When the table is turned on normalize, the first region may not be merged even the size is 0
Baiqiang Zhao created HBASE-25513: - Summary: When the table is turned on normalize, the first region may not be merged even the size is 0 Key: HBASE-25513 URL: https://issues.apache.org/jira/browse/HBASE-25513 Project: HBase Issue Type: Bug Components: Normalizer Affects Versions: 2.4.1, 3.0.0-alpha-1 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Suppose a table has 8 regions, the sizes are [0, 10, 1, 0, 9, 0, 12, 0], the average region size is 4, and split is disabled. The current Normalizer can only produce three merge plans (using size to represent a region): [1, 0], [9, 0], [12, 0] It cannot merge the first region, even though its size is 0. -- This message was sent by Atlassian Jira (v8.3.4#803005)
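One plausible simplified model of the pre-fix behavior (illustrative only, not the actual SimpleRegionNormalizer code): if an empty region is only ever merged with its *preceding* neighbor, an empty first region can never enter any plan, which reproduces the three plans listed above.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model: an empty region is merged with its preceding neighbor,
// so the first region (which has no predecessor) is never part of a plan.
public class FirstRegionMergeSketch {
    static List<int[]> mergePlans(long[] sizes) {
        List<int[]> plans = new ArrayList<>();
        for (int i = 1; i < sizes.length; i++) {
            if (sizes[i] == 0) {
                plans.add(new int[] { i - 1, i }); // merge previous + empty
                i++; // a region joins at most one plan
            }
        }
        return plans;
    }

    public static void main(String[] args) {
        long[] sizes = { 0, 10, 1, 0, 9, 0, 12, 0 };
        for (int[] p : mergePlans(sizes)) {
            System.out.println("merge [" + sizes[p[0]] + ", " + sizes[p[1]] + "]");
        }
        // Yields the three plans [1, 0], [9, 0], [12, 0]; region 0 is untouched.
    }
}
```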
[jira] [Commented] (HBASE-25482) Improve SimpleRegionNormalizer#getAverageRegionSizeMb
[ https://issues.apache.org/jira/browse/HBASE-25482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17262493#comment-17262493 ] Baiqiang Zhao commented on HBASE-25482: --- Ping [~ndimiduk]. I found NormalizerTargetRegionCount and NormalizerTargetRegionSize can only be used when log level is debug. Can you review this? > Improve SimpleRegionNormalizer#getAverageRegionSizeMb > - > > Key: HBASE-25482 > URL: https://issues.apache.org/jira/browse/HBASE-25482 > Project: HBase > Issue Type: Improvement > Components: Normalizer >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Minor > > If the table is set NormalizerTargetRegionSize, we take > NormalizerTargetRegionSize as avgRegionSize and return it. So the totalSizeMb > of table is not always calculated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25439) Add BYTE unit in PrettyPrinter.Unit
[ https://issues.apache.org/jira/browse/HBASE-25439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17262492#comment-17262492 ] Baiqiang Zhao commented on HBASE-25439: --- Ping [~stack]. > Add BYTE unit in PrettyPrinter.Unit > > > Key: HBASE-25439 > URL: https://issues.apache.org/jira/browse/HBASE-25439 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1 > > Attachments: image-2020-12-23-16-12-42-210.png > > > Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human > readable: > !image-2020-12-23-16-12-42-210.png! > This issue add a new unit 'BYTE' to pretty print value of size, such as > MAX_FILESIZE, MEMSTORE_FLUSHSIZE. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25482) Improve SimpleRegionNormalizer#getAverageRegionSizeMb
Baiqiang Zhao created HBASE-25482: - Summary: Improve SimpleRegionNormalizer#getAverageRegionSizeMb Key: HBASE-25482 URL: https://issues.apache.org/jira/browse/HBASE-25482 Project: HBase Issue Type: Improvement Components: Normalizer Affects Versions: 2.4.0, 3.0.0-alpha-1 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao If the table is set NormalizerTargetRegionSize, we take NormalizerTargetRegionSize as avgRegionSize and return it. So the totalSizeMb of table is not always calculated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25439) Add BYTE unit in PrettyPrinter.Unit
[ https://issues.apache.org/jira/browse/HBASE-25439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17258088#comment-17258088 ] Baiqiang Zhao commented on HBASE-25439: --- When backport to branch-2, I found MEMSTORE_FLUSHSIZE is missing in TableDescriptorBuilder.getUnit method. PR#2841 is addendum, please review [~stack] > Add BYTE unit in PrettyPrinter.Unit > > > Key: HBASE-25439 > URL: https://issues.apache.org/jira/browse/HBASE-25439 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1 > > Attachments: image-2020-12-23-16-12-42-210.png > > > Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human > readable: > !image-2020-12-23-16-12-42-210.png! > This issue add a new unit 'BYTE' to pretty print value of size, such as > MAX_FILESIZE, MEMSTORE_FLUSHSIZE. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25439) Add BYTE unit in PrettyPrinter.Unit
[ https://issues.apache.org/jira/browse/HBASE-25439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17257913#comment-17257913 ] Baiqiang Zhao commented on HBASE-25439: --- Thanks for review [~stack]. Will give a PR for branch-2. And create a new issue for pretty print BlockSize > Add BYTE unit in PrettyPrinter.Unit > > > Key: HBASE-25439 > URL: https://issues.apache.org/jira/browse/HBASE-25439 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1 > > Attachments: image-2020-12-23-16-12-42-210.png > > > Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human > readable: > !image-2020-12-23-16-12-42-210.png! > This issue add a new unit 'BYTE' to pretty print value of size, such as > MAX_FILESIZE, MEMSTORE_FLUSHSIZE. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25431) MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number
[ https://issues.apache.org/jira/browse/HBASE-25431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25431: -- Description: Before create or alter table, we will do sanityCheck in TableDescriptorChecker. If MAX_FILESIZE or MEMSTORE_FLUSHSIZE < 0, use maxFileSizeLowerLimit or flushSizeLowerLimit instead to pass check. But the real value in TableDescriptor is still < 0, and we can see negative values on the UI. However in flush and split logic, MAX_FILESIZE and MEMSTORE_FLUSHSIZE will judge whether it's value <= 0 , if true, change to default value. This does not affect flush and split. !image-2020-12-22-11-46-38-967.png! was: MAX_FILESIZE and MEMSTORE_FLUSHSIZE will judge whether it's value <= 0 in flush and split logic, if true, change to default value. We should check this in TableDescriptorChecker !image-2020-12-22-11-46-38-967.png! > MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number > - > > Key: HBASE-25431 > URL: https://issues.apache.org/jira/browse/HBASE-25431 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Attachments: image-2020-12-22-11-46-38-967.png > > > Before create or alter table, we will do sanityCheck in > TableDescriptorChecker. If MAX_FILESIZE or MEMSTORE_FLUSHSIZE < 0, use > maxFileSizeLowerLimit or flushSizeLowerLimit instead to pass check. But the > real value in TableDescriptor is still < 0, and we can see negative values > on the UI. > However in flush and split logic, MAX_FILESIZE and MEMSTORE_FLUSHSIZE will > judge whether it's value <= 0 , if true, change to default value. This does > not affect flush and split. > !image-2020-12-22-11-46-38-967.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
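A sketch of the stricter check this issue argues for (hypothetical method, not the TableDescriptorChecker patch itself): reject a negative value at create/alter time instead of silently substituting the lower limit while the descriptor, and therefore the UI, keeps the negative number.

```java
public class SanityCheckSketch {
    // Fail fast on a negative MAX_FILESIZE instead of silently substituting
    // the floor, so the descriptor (and the UI) never shows a negative value.
    static long checkMaxFileSize(long requestedBytes, long lowerLimitBytes) {
        if (requestedBytes < 0) {
            throw new IllegalArgumentException(
                "MAX_FILESIZE must be >= 0, got " + requestedBytes);
        }
        return Math.max(requestedBytes, lowerLimitBytes);
    }

    public static void main(String[] args) {
        long twoMb = 2L * 1024 * 1024; // assumed lower limit
        System.out.println(checkMaxFileSize(10L * 1024 * 1024, twoMb)); // accepted
        try {
            checkMaxFileSize(-1, twoMb);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```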
[jira] [Commented] (HBASE-25431) MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number
[ https://issues.apache.org/jira/browse/HBASE-25431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17257906#comment-17257906 ] Baiqiang Zhao commented on HBASE-25431: --- Yes, the original logic is use maxFileSizeLowerLimit (2MB) instead if < 0. But the real value in TableDescriptor is still < 0, and we can see negative values on the UI. > MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number > - > > Key: HBASE-25431 > URL: https://issues.apache.org/jira/browse/HBASE-25431 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Attachments: image-2020-12-22-11-46-38-967.png > > > Before create or alter table, we will do sanityCheck in > TableDescriptorChecker. If MAX_FILESIZE or MEMSTORE_FLUSHSIZE < 0, use > maxFileSizeLowerLimit or flushSizeLowerLimit instead to pass check. But the > real value in TableDescriptor is still < 0, and we can see negative values > on the UI. > However in flush and split logic, MAX_FILESIZE and MEMSTORE_FLUSHSIZE will > judge whether it's value <= 0 , if true, change to default value. This does > not affect flush and split. > !image-2020-12-22-11-46-38-967.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25439) Add BYTE unit in PrettyPrinter.Unit
[ https://issues.apache.org/jira/browse/HBASE-25439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25439: -- Affects Version/s: 3.0.0-alpha-1 > Add BYTE unit in PrettyPrinter.Unit > > > Key: HBASE-25439 > URL: https://issues.apache.org/jira/browse/HBASE-25439 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Attachments: image-2020-12-23-16-12-42-210.png > > > Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human > readable: > !image-2020-12-23-16-12-42-210.png! > This issue add a new unit 'BYTE' to pretty print value of size, such as > MAX_FILESIZE, MEMSTORE_FLUSHSIZE. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25439) Add BYTE unit in PrettyPrinter.Unit
[ https://issues.apache.org/jira/browse/HBASE-25439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25439: -- Description: Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human readable: !image-2020-12-23-16-12-42-210.png! This issue add a new unit 'BYTE' to pretty print value of size, such as MAX_FILESIZE, MEMSTORE_FLUSHSIZE. was: Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human readable: !image-2020-12-23-16-12-42-210.png! This issue add a new unit 'BYTE' to print value of size, such as MAX_FILESIZE, MEMSTORE_FLUSHSIZE. > Add BYTE unit in PrettyPrinter.Unit > > > Key: HBASE-25439 > URL: https://issues.apache.org/jira/browse/HBASE-25439 > Project: HBase > Issue Type: Improvement >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Attachments: image-2020-12-23-16-12-42-210.png > > > Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human > readable: > !image-2020-12-23-16-12-42-210.png! > This issue add a new unit 'BYTE' to pretty print value of size, such as > MAX_FILESIZE, MEMSTORE_FLUSHSIZE. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25439) Add BYTE unit in PrettyPrinter.Unit
Baiqiang Zhao created HBASE-25439: - Summary: Add BYTE unit in PrettyPrinter.Unit Key: HBASE-25439 URL: https://issues.apache.org/jira/browse/HBASE-25439 Project: HBase Issue Type: Improvement Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Attachments: image-2020-12-23-16-12-42-210.png Currently only TTL supports PrettyPrinter, and MAX_FILESIZE is not human readable: !image-2020-12-23-16-12-42-210.png! This issue adds a new unit 'BYTE' to pretty-print size values, such as MAX_FILESIZE and MEMSTORE_FLUSHSIZE. -- This message was sent by Atlassian Jira (v8.3.4#803005)
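The intended effect can be sketched with a standalone formatter (illustrative only; the real implementation lives in HBase's PrettyPrinter): a raw byte count such as MAX_FILESIZE is rendered in a human-readable unit.

```java
// Standalone sketch of human-readable byte formatting, similar in spirit
// to the proposed PrettyPrinter BYTE unit (not the HBase implementation).
public class ByteUnitSketch {
    static String human(long bytes) {
        String[] units = { "B", "KB", "MB", "GB", "TB", "PB" };
        double v = bytes;
        int u = 0;
        while (v >= 1024 && u < units.length - 1) {
            v /= 1024;
            u++;
        }
        // Drop the fraction when the value is exact (e.g. "10 GB", not "10.00 GB").
        return (v == Math.floor(v) ? String.valueOf((long) v)
                                   : String.format("%.2f", v)) + " " + units[u];
    }

    public static void main(String[] args) {
        System.out.println(human(10737418240L)); // a 10 GB MAX_FILESIZE
        System.out.println(human(134217728L));   // a 128 MB MEMSTORE_FLUSHSIZE
    }
}
```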
[jira] [Created] (HBASE-25431) MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number
Baiqiang Zhao created HBASE-25431: - Summary: MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number Key: HBASE-25431 URL: https://issues.apache.org/jira/browse/HBASE-25431 Project: HBase Issue Type: Improvement Affects Versions: 2.4.0, 3.0.0-alpha-1 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Attachments: image-2020-12-22-11-46-38-967.png Flush and split logic checks whether the value of MAX_FILESIZE or MEMSTORE_FLUSHSIZE is <= 0 and, if so, falls back to the default value. We should check this in TableDescriptorChecker. !image-2020-12-22-11-46-38-967.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25431) MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number
[ https://issues.apache.org/jira/browse/HBASE-25431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-25431: -- Description: MAX_FILESIZE and MEMSTORE_FLUSHSIZE will judge whether it's value <= 0 in flush and split logic, if true, change to default value. We should check this in TableDescriptorChecker !image-2020-12-22-11-46-38-967.png! was: MAX_FILESIZE and MEMSTORE_FLUSHSIZE will judge whether it's value <= 0, if true, change to default value. We should check this in TableDescriptorChecker !image-2020-12-22-11-46-38-967.png! > MAX_FILESIZE and MEMSTORE_FLUSHSIZE should not be set negative number > - > > Key: HBASE-25431 > URL: https://issues.apache.org/jira/browse/HBASE-25431 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Attachments: image-2020-12-22-11-46-38-967.png > > > MAX_FILESIZE and MEMSTORE_FLUSHSIZE will judge whether it's value <= 0 in > flush and split logic, if true, change to default value. We should check this > in TableDescriptorChecker > !image-2020-12-22-11-46-38-967.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25365) The log in move_servers_rsgroup is incorrect
[ https://issues.apache.org/jira/browse/HBASE-25365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250737#comment-17250737 ] Baiqiang Zhao commented on HBASE-25365: --- Thanks [~stack] > The log in move_servers_rsgroup is incorrect > > > Key: HBASE-25365 > URL: https://issues.apache.org/jira/browse/HBASE-25365 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.1 > > > Assuming that server1 belongs to the default group, execute the command: > {code:java} > move_servers_rsgroup 'test',['server1:16020'] > {code} > The log shows: > {code:java} > Moving 10 region(s) to group test, current retry=0 > .. > All regions from server(s) [server1,16020,1607067542905] moved to target > group test. > {code} > The log should show the source group, not the target group: test -> default. > https://github.com/apache/hbase/blob/7d0a687e5798a2f4ca3190b409169f7e17a75b34/hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1027
[jira] [Commented] (HBASE-25365) The log in move_servers_rsgroup is incorrect
[ https://issues.apache.org/jira/browse/HBASE-25365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250119#comment-17250119 ] Baiqiang Zhao commented on HBASE-25365: --- The PR for branch-2 has been uploaded, please review [~stack] > The log in move_servers_rsgroup is incorrect > > > Key: HBASE-25365 > URL: https://issues.apache.org/jira/browse/HBASE-25365 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Minor > Fix For: 3.0.0-alpha-1 > > > Assuming that server1 belongs to the default group, execute the command: > {code:java} > move_servers_rsgroup 'test',['server1:16020'] > {code} > The log shows: > {code:java} > Moving 10 region(s) to group test, current retry=0 > .. > All regions from server(s) [server1,16020,1607067542905] moved to target > group test. > {code} > The log should show the source group, not the target group: test -> default. > https://github.com/apache/hbase/blob/7d0a687e5798a2f4ca3190b409169f7e17a75b34/hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1027
[jira] [Created] (HBASE-25365) The log in move_servers_rsgroup is incorrect
Baiqiang Zhao created HBASE-25365: - Summary: The log in move_servers_rsgroup is incorrect Key: HBASE-25365 URL: https://issues.apache.org/jira/browse/HBASE-25365 Project: HBase Issue Type: Bug Affects Versions: 3.0.0-alpha-1, 2.4.0 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Assuming that server1 belongs to the default group, execute the command: {code:java} move_servers_rsgroup 'test',['server1:16020'] {code} The log shows: {code:java} Moving 10 region(s) to group test, current retry=0 .. All regions from server(s) [server1,16020,1607067542905] moved to target group test. {code} The log should show the source group, not the target group: test -> default. https://github.com/apache/hbase/blob/7d0a687e5798a2f4ca3190b409169f7e17a75b34/hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java#L1027
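The shape of the fix can be sketched as a corrected log message builder. This is a hypothetical, simplified sketch (the class and method names are illustrative, not the actual RSGroupInfoManagerImpl code): the completion message should name the group the servers came from, not the group they moved to.

```java
// Hypothetical sketch: the completion log should report the SOURCE group the
// servers left (e.g. "default"), not the target group they joined ("test").
public class MoveServersLogFix {

  // Builds the corrected message using the source group name.
  public static String completionLog(String servers, String srcGroup) {
    return "All regions from server(s) [" + servers + "] moved from group "
        + srcGroup + ".";
  }
}
```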
[jira] [Commented] (HBASE-25330) RSGroupInfoManagerImpl#moveServers return is not set of servers moved
[ https://issues.apache.org/jira/browse/HBASE-25330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17240434#comment-17240434 ] Baiqiang Zhao commented on HBASE-25330: --- Hi [~zhangduo], PRs for the other branches have been created. > RSGroupInfoManagerImpl#moveServers return is not set of servers moved > - > > Key: HBASE-25330 > URL: https://issues.apache.org/jira/browse/HBASE-25330 > Project: HBase > Issue Type: Bug > Components: rsgroup >Affects Versions: 3.0.0-alpha-1, 2.4.0, 2.2.6, 2.3.2 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > The method is used by move_servers_rsgroup. It actually returns all of the destination rsgroup's servers; it should return only the servers that were moved.
[jira] [Created] (HBASE-25330) RSGroupInfoManagerImpl#moveServers return is not set of servers moved
Baiqiang Zhao created HBASE-25330: - Summary: RSGroupInfoManagerImpl#moveServers return is not set of servers moved Key: HBASE-25330 URL: https://issues.apache.org/jira/browse/HBASE-25330 Project: HBase Issue Type: Bug Components: rsgroup Affects Versions: 2.3.2, 2.2.6, 3.0.0-alpha-1, 2.4.0 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao The method is used by move_servers_rsgroup. It actually returns all of the destination rsgroup's servers; it should return only the servers that were moved.
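The intended behavior can be sketched with a simplified, hypothetical version of the method (plain string sets stand in for HBase's Address/RSGroupInfo types): collect and return only the servers that were actually added to the destination group, rather than the destination group's full membership.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified sketch of the fix: return the set of servers that
// were actually moved, not the destination group's full server list.
public class MoveServersFix {

  public static Set<String> moveServers(Set<String> serversToMove,
      Set<String> destGroupServers) {
    Set<String> moved = new HashSet<>();
    for (String server : serversToMove) {
      // Set.add returns true only when the server was not already a member,
      // i.e. when this call actually moved it into the destination group.
      if (destGroupServers.add(server)) {
        moved.add(server);
      }
    }
    return moved; // the buggy version effectively returned destGroupServers
  }
}
```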
[jira] [Commented] (HBASE-25298) hbase.rsgroup.fallback.enable should support dynamic configuration
[ https://issues.apache.org/jira/browse/HBASE-25298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17237788#comment-17237788 ] Baiqiang Zhao commented on HBASE-25298: --- The PR has also been merged to branch-2 [~apurtell]. > hbase.rsgroup.fallback.enable should support dynamic configuration > --- > > Key: HBASE-25298 > URL: https://issues.apache.org/jira/browse/HBASE-25298 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1 > > > Use the update_config command to toggle RSGroup fallback.
[jira] [Created] (HBASE-25306) The log in SimpleLoadBalancer#onConfigurationChange is wrong
Baiqiang Zhao created HBASE-25306: - Summary: The log in SimpleLoadBalancer#onConfigurationChange is wrong Key: HBASE-25306 URL: https://issues.apache.org/jira/browse/HBASE-25306 Project: HBase Issue Type: Bug Affects Versions: 3.0.0-alpha-1, 2.4.0, 2.2.7 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao [https://github.com/apache/hbase/blob/8c1e4763b3e11d4553e5a59e620ab30e3b2047e9/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java#L139] In the log message at this line, the current name should be "overallSlop".
[jira] [Commented] (HBASE-25298) hbase.rsgroup.fallback.enable should support dynamic configuration
[ https://issues.apache.org/jira/browse/HBASE-25298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233512#comment-17233512 ] Baiqiang Zhao commented on HBASE-25298: --- Ping [~zghao] [~Ddupg] > hbase.rsgroup.fallback.enable should support dynamic configuration > --- > > Key: HBASE-25298 > URL: https://issues.apache.org/jira/browse/HBASE-25298 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > > Use the update_config command to toggle RSGroup fallback.
[jira] [Created] (HBASE-25298) hbase.rsgroup.fallback.enable should support dynamic configuration
Baiqiang Zhao created HBASE-25298: - Summary: hbase.rsgroup.fallback.enable should support dynamic configuration Key: HBASE-25298 URL: https://issues.apache.org/jira/browse/HBASE-25298 Project: HBase Issue Type: Improvement Affects Versions: 3.0.0-alpha-1, 2.4.0 Reporter: Baiqiang Zhao Assignee: Baiqiang Zhao Use the update_config command to toggle RSGroup fallback without restarting the master.
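The mechanism behind update_config is HBase's configuration-observer pattern: a component re-reads the properties it cares about when the configuration is reloaded. A hypothetical, simplified sketch (a plain Map stands in for Hadoop's Configuration, and the class name is illustrative) of making the fallback switch dynamic:

```java
import java.util.Map;

// Hypothetical sketch of a ConfigurationObserver-style hook: re-read
// hbase.rsgroup.fallback.enable whenever update_config reloads configuration,
// so the switch takes effect without a restart.
public class FallbackSwitch {

  public static final String FALLBACK_KEY = "hbase.rsgroup.fallback.enable";

  // volatile so balancer threads see the updated value immediately.
  private volatile boolean fallbackEnabled;

  public void onConfigurationChange(Map<String, String> conf) {
    this.fallbackEnabled =
        Boolean.parseBoolean(conf.getOrDefault(FALLBACK_KEY, "false"));
  }

  public boolean isFallbackEnabled() {
    return fallbackEnabled;
  }
}
```

Reading the flag through a volatile field on each use, rather than caching it at startup, is what lets update_config flip the behavior at runtime.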