[jira] [Commented] (HBASE-28331) Client integration test fails after upgrading hadoop3 version to 3.3.x

2024-02-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813881#comment-17813881
 ] 

Hudson commented on HBASE-28331:


Results for branch HBASE-28331-jenkins
[build #8 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/HBASE-28331-jenkins/8/]:
 (/) *{color:green}+1 overall{color}*

details (if available):
(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Client integration test fails after upgrading hadoop3 version to 3.3.x
> --
>
> Key: HBASE-28331
> URL: https://issues.apache.org/jira/browse/HBASE-28331
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, jenkins
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-beta-2
>
>
> Saw this error when starting HBase cluster
> {noformat}
> 2024-01-25T11:25:01,838 ERROR 
> [master/jenkins-hbase21:16000:becomeActiveMaster] master.HMaster: Failed to 
> become active master
> java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetSafeModeRequestProto
>  cannot be cast to com.google.protobuf.Message
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:247)
>  ~[hadoop-common-3.3.5.jar:?]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:132)
>  ~[hadoop-common-3.3.5.jar:?]
>   at com.sun.proxy.$Proxy32.setSafeMode(Unknown Source) ~[?:?]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:847)
>  ~[hadoop-hdfs-client-3.3.5.jar:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_362]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
>  ~[hadoop-common-3.3.5.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
>  ~[hadoop-common-3.3.5.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
>  ~[hadoop-common-3.3.5.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
>  ~[hadoop-common-3.3.5.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
>  ~[hadoop-common-3.3.5.jar:?]
>   at com.sun.proxy.$Proxy33.setSafeMode(Unknown Source) ~[?:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_362]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:363) 
> ~[hbase-server-4.0.0-alpha-1-SNAPSHOT.jar:4.0.0-alpha-1-SNAPSHOT]
>   at com.sun.proxy.$Proxy34.setSafeMode(Unknown Source) ~[?:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_362]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:363) 
> ~[hbase-server-4.0.0-alpha-1-SNAPSHOT.jar:4.0.0-alpha-1-SNAPSHOT]
>   at com.sun.proxy.$Proxy34.setSafeMode(Unknown Source) ~[?:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_362]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_362]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:363) 
> ~[hbase-server-4.0.0-alpha-1-SNAPSHOT.jar:4.0.0-alpha-1-SNAPSHOT]
>   at com.sun.proxy.$Proxy34.setSafeMode(Unknown Source) ~[?:?]
>   at s

[jira] [Commented] (HBASE-14775) Replication can't authenticate with peer Zookeeper with different server principal

2024-02-02 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813880#comment-17813880
 ] 

Duo Zhang commented on HBASE-14775:
---

Is this still a problem since ZOOKEEPER-2139 is done?

> Replication can't authenticate with peer Zookeeper with different server 
> principal
> --
>
> Key: HBASE-14775
> URL: https://issues.apache.org/jira/browse/HBASE-14775
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Reporter: Gary Helmling
>Priority: Major
>
> When replication is setup with security, where the local ZK cluster and peer 
> ZK cluster use different server principals, the source HBase cluster is 
> unable to authenticate with the peer ZK cluster.
> When ZK is configured for SASL authentication and a server principal other 
> than the default ("zookeeper") is used, the correct server principal must be 
> specified on the client as a system property -- the confusingly named 
> {{zookeeper.sasl.client.username}}.  However, since this is given as a system 
> property, authentication with the peer cluster breaks when it uses a 
> different ZK server principal than the local cluster.
> We need a way of tying this setting to the replication peer config and then 
> setting the property when the peer's ZooKeeperWatcher is created.
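To make the idea concrete, here is a minimal, stdlib-only Java sketch of per-peer principal resolution: prefer a peer-level override, then the JVM-wide system property, then ZooKeeper's default. The class, method, and the peer-config map are hypothetical illustrations, not HBase API; only the `zookeeper.sasl.client.username` property name comes from the issue.

```java
import java.util.HashMap;
import java.util.Map;

public class PeerSaslPrincipal {
  // The confusingly named client-side property mentioned in the issue.
  static final String ZK_CLIENT_USERNAME = "zookeeper.sasl.client.username";

  // Resolve the ZK server principal for one replication peer: prefer the
  // (hypothetical) peer-level override, then the JVM-wide system property,
  // then ZooKeeper's default server principal "zookeeper".
  static String resolvePrincipal(Map<String, String> peerConfig) {
    String fromPeer = peerConfig.get(ZK_CLIENT_USERNAME);
    if (fromPeer != null) {
      return fromPeer;
    }
    return System.getProperty(ZK_CLIENT_USERNAME, "zookeeper");
  }

  public static void main(String[] args) {
    Map<String, String> localPeer = new HashMap<>();
    Map<String, String> remotePeer = new HashMap<>();
    remotePeer.put(ZK_CLIENT_USERNAME, "zk-remote");

    // A peer without an override falls back to the default principal.
    System.out.println(resolvePrincipal(localPeer));   // zookeeper
    // A peer with an override authenticates against its own principal.
    System.out.println(resolvePrincipal(remotePeer));  // zk-remote
  }
}
```

With per-peer resolution like this, the value would be applied when the peer's ZooKeeperWatcher is created, instead of relying on one global system property.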



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28339) HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to reconnect

2024-02-02 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813879#comment-17813879
 ] 

Duo Zhang commented on HBASE-28339:
---

Thanks [~andor]!

> HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to 
> reconnect
> --
>
> Key: HBASE-28339
> URL: https://issues.apache.org/jira/browse/HBASE-28339
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 2.4.17, 3.0.0-beta-1, 2.5.7, 2.7.0
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Abstract base class {{HBaseReplicationEndpoint}}, and therefore 
> {{HBaseInterClusterReplicationEndpoint}}, creates a new ZooKeeper client 
> instance every time an error occurs in communication and it tries to 
> reconnect. This was not a problem with ZooKeeper 3.4.x versions, because the 
> TGT Login thread was a static reference and only created once for all clients 
> in the same JVM. With the upgrade to ZooKeeper 3.5.x the login thread is 
> dedicated to the client instance, hence we have a new login thread every time 
> the replication endpoint reconnects.
> {code:java}
> /**
>  * A private method used to re-establish a zookeeper session with a peer cluster.
>  */
> protected void reconnect(KeeperException ke) {
>   if (
>     ke instanceof ConnectionLossException || ke instanceof SessionExpiredException
>       || ke instanceof AuthFailedException
>   ) {
>     String clusterKey = ctx.getPeerConfig().getClusterKey();
>     LOG.warn("Lost the ZooKeeper connection for peer " + clusterKey, ke);
>     try {
>       reloadZkWatcher();
>     } catch (IOException io) {
>       LOG.warn("Creation of ZookeeperWatcher failed for peer " + clusterKey, io);
>     }
>   }
> }{code}
> {code:java}
> /**
>  * Closes the current ZKW (if not null) and creates a new one
>  * @throws IOException If anything goes wrong connecting
>  */
> synchronized void reloadZkWatcher() throws IOException {
>   if (zkw != null) zkw.close();
>   zkw = new ZKWatcher(ctx.getConfiguration(), "connection to cluster: " + ctx.getPeerId(), this);
>   getZkw().registerListener(new PeerRegionServerListener(this));
> } {code}
> If the target cluster of replication is unavailable for some reason, the 
> replication endpoint keeps trying to reconnect to ZooKeeper, destroying and 
> creating new login threads constantly, which will carpet-bomb the KDC host 
> with login requests.
>  
> I'm not sure how to fix this yet, trying to create a unit test first.
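The issue leaves the fix open. One possible mitigation (purely illustrative, not the patch that landed, and all names below are hypothetical) is to rate-limit watcher recreation, so a burst of connection errors collapses into at most one new client, and therefore one new login thread, per interval:

```java
public class ReconnectGuard {
  private final long minIntervalMs;
  private long lastAttemptMs;
  private int recreations = 0;

  ReconnectGuard(long minIntervalMs) {
    this.minIntervalMs = minIntervalMs;
    // Allow the very first reconnect immediately.
    this.lastAttemptMs = -minIntervalMs;
  }

  // Returns true only when enough time has passed since the last recreation,
  // so a burst of errors produces one new client (and one KDC login) per
  // interval; otherwise the caller keeps the existing watcher.
  synchronized boolean tryReconnect(long nowMs) {
    if (nowMs - lastAttemptMs < minIntervalMs) {
      return false;
    }
    lastAttemptMs = nowMs;
    recreations++;
    return true;
  }

  int recreations() {
    return recreations;
  }

  public static void main(String[] args) {
    ReconnectGuard guard = new ReconnectGuard(1000);
    int allowed = 0;
    // 50 errors within 100 ms: only the first triggers a recreation.
    for (long t = 0; t < 100; t += 2) {
      if (guard.tryReconnect(t)) {
        allowed++;
      }
    }
    System.out.println(allowed); // 1
  }
}
```

A real fix might instead reuse a single watcher or share the login context, but a guard like this already avoids hammering the KDC while the peer cluster is down.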





Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on PR #5631:
URL: https://github.com/apache/hbase/pull/5631#issuecomment-1925201778

   Thanks for the review. I could see if I can split the 
ConnectionRegistryRpcStubHolder too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477009693


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryRpcStubHolder.java:
##
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import org.apache.hadoop.hbase.ipc.RpcClient;
+import org.apache.hadoop.hbase.ipc.RpcClientFactory;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.IOExceptionSupplier;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.ClientMetaService;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.ConnectionRegistryService;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.GetConnectionRegistryRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.GetConnectionRegistryResponse;
+
+/**
+ * A class for creating {@link RpcClient} and related stubs used by
+ * {@link AbstractRpcBasedConnectionRegistry}. We need to connect to bootstrap nodes to get the
+ * cluster id first, before creating the final {@link RpcClient} and related stubs.
+ * <p>
+ * See HBASE-25051 for more details.
+ */
+@InterfaceAudience.Private
+class ConnectionRegistryRpcStubHolder implements Closeable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ConnectionRegistryRpcStubHolder.class);
+
+  private final Configuration conf;
+
+  // used for getting cluster id
+  private final Configuration noAuthConf;
+
+  private final User user;
+
+  private final RpcControllerFactory rpcControllerFactory;
+
+  private final Set<ServerName> bootstrapNodes;
+
+  private final int rpcTimeoutMs;
+
+  private volatile ImmutableMap<ServerName, ClientMetaService.Stub> addr2Stub;
+
+  private volatile RpcClient rpcClient;
+
+  private CompletableFuture<ImmutableMap<ServerName, ClientMetaService.Stub>> addr2StubFuture;
+
+  ConnectionRegistryRpcStubHolder(Configuration conf, User user,
+    RpcControllerFactory rpcControllerFactory, Set<ServerName> bootstrapNodes) {
+    this.conf = conf;
+    if (User.isHBaseSecurityEnabled(conf)) {
+      this.noAuthConf = new Configuration(conf);
+      this.noAuthConf.set(User.HBASE_SECURITY_CONF_KEY, "simple");
+    } else {
+      this.noAuthConf = conf;
+    }
+    this.user = user;
+    this.rpcControllerFactory = rpcControllerFactory;
+    this.bootstrapNodes = Collections.unmodifiableSet(bootstrapNodes);
+    this.rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE,
+      conf.getLong(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT));
+  }
+
+  private ImmutableMap<ServerName, ClientMetaService.Stub> createStubs(RpcClient rpcClient,
+    Collection<ServerName> addrs) {
+    LOG.debug("Going to use new servers to create stubs: {}", addrs);
+    Preconditions.checkNotNull(addrs);
+    ImmutableMap.Builder<ServerName, ClientMetaService.Stub> builder =
+      ImmutableMap.builderWithExpectedSize(addrs.size());
+    for (ServerName masterAddr : addrs) {
+      builder.put(masterAddr,
+        ClientMetaService.newStub(rpcClient.createRpcChannel(masterAddr, user, rpcTimeoutMs)));
+    }
+    return builder.build();
+  }
+
+  private void createStubsAndComplete(String clusterId,
+    CompletableFuture<ImmutableMap<ServerName, ClientMetaService.Stub>> future) {
+    RpcClient c = RpcClientFactory.createClient(conf, clusterId);
+    ImmutableMap<ServerName, ClientMetaService.Stub> m = createStubs(c, bootstrapNodes);
+    rpcClient = c;
+    addr2Stub = m;
+    futu
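The quoted class bootstraps in two phases: an unauthenticated request (via the conf copy with security forced to "simple") fetches the cluster id, after which the real RpcClient and per-server stubs are created and memoized in volatile fields. A minimal stdlib-only sketch of that shape follows; `Stub`, `fetchClusterIdNoAuth`, and `getStubs` are hypothetical stand-ins, not the real generated types or HBase API.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class TwoPhaseBootstrap {
  // Hypothetical stand-in for the generated ClientMetaService stub type.
  static final class Stub {
    final String server;
    final String clusterId;

    Stub(String server, String clusterId) {
      this.server = server;
      this.clusterId = clusterId;
    }
  }

  // Memoized stub map, mirroring the volatile addr2Stub field in the quote.
  private volatile Map<String, Stub> addr2Stub;

  // Phase 1: in the real class this is an unauthenticated RPC made with the
  // "simple"-security copy of the conf; here it just returns a fixed id.
  private String fetchClusterIdNoAuth() {
    return "test-cluster-id";
  }

  // Phase 2: once the cluster id is known, build stubs for every bootstrap
  // node and cache them so later callers skip both phases entirely.
  CompletableFuture<Map<String, Stub>> getStubs(List<String> bootstrapNodes) {
    Map<String, Stub> cached = addr2Stub;
    if (cached != null) {
      return CompletableFuture.completedFuture(cached);
    }
    String clusterId = fetchClusterIdNoAuth();
    Map<String, Stub> m = new LinkedHashMap<>();
    for (String node : bootstrapNodes) {
      m.put(node, new Stub(node, clusterId));
    }
    addr2Stub = m;
    return CompletableFuture.completedFuture(m);
  }

  public static void main(String[] args) {
    TwoPhaseBootstrap holder = new TwoPhaseBootstrap();
    Map<String, Stub> stubs = holder.getStubs(List.of("master1", "master2")).join();
    System.out.println(stubs.size()); // 2
  }
}
```

The two-phase split is what lets DIGEST-based auth work: the cluster id needed to select credentials is obtained before the authenticated client is built.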

Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477007924


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryRpcStubHolder.java:
##

Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477006844


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryRpcStubHolder.java:
##
@@ -0,0 +1,212 @@
+  ConnectionRegistryRpcStubHolder(Configuration conf, User user,

Review Comment:
   I could have a try but it is not easy. Whether we should make the rpc call 
depends on volatile fields here...






Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477006736


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryRpcStubHolder.java:
##

Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477006618


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryRpcStubHolder.java:
##
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import org.apache.hadoop.hbase.ipc.RpcClient;
+import org.apache.hadoop.hbase.ipc.RpcClientFactory;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.IOExceptionSupplier;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel;
+
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.ClientMetaService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.ConnectionRegistryService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.GetConnectionRegistryRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.GetConnectionRegistryResponse;
+
+/**
+ * A class for creating {@link RpcClient} and related stubs used by
+ * {@link AbstractRpcBasedConnectionRegistry}. We need to connect to bootstrap 
nodes to get the
+ * cluster id first, before creating the final {@link RpcClient} and related 
stubs.
+ * 
+ * See HBASE-25051 for more details.
+ */
+@InterfaceAudience.Private
+class ConnectionRegistryRpcStubHolder implements Closeable {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(ConnectionRegistryRpcStubHolder.class);
+
+  private final Configuration conf;
+
+  // used for getting cluster id
+  private final Configuration noAuthConf;
+
+  private final User user;
+
+  private final RpcControllerFactory rpcControllerFactory;
+
+  private final Set<ServerName> bootstrapNodes;
+
+  private final int rpcTimeoutMs;
+
+  private volatile ImmutableMap<ServerName, ClientMetaService.Interface> addr2Stub;
+
+  private volatile RpcClient rpcClient;
+
+  private CompletableFuture<ImmutableMap<ServerName, ClientMetaService.Interface>> addr2StubFuture;
+
+  ConnectionRegistryRpcStubHolder(Configuration conf, User user,
+    RpcControllerFactory rpcControllerFactory, Set<ServerName> bootstrapNodes) {
+this.conf = conf;
+if (User.isHBaseSecurityEnabled(conf)) {
+  this.noAuthConf = new Configuration(conf);
+  this.noAuthConf.set(User.HBASE_SECURITY_CONF_KEY, "simple");
+} else {
+  this.noAuthConf = conf;
+}
+this.user = user;
+this.rpcControllerFactory = rpcControllerFactory;
+this.bootstrapNodes = Collections.unmodifiableSet(bootstrapNodes);
+    this.rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE,
+      conf.getLong(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT));
+  }
+
+  private ImmutableMap<ServerName, ClientMetaService.Interface> createStubs(RpcClient rpcClient,
+    Collection<ServerName> addrs) {
+LOG.debug("Going to use new servers to create stubs: {}", addrs);
+Preconditions.checkNotNull(addrs);
+    ImmutableMap.Builder<ServerName, ClientMetaService.Interface> builder =
+      ImmutableMap.builderWithExpectedSize(addrs.size());
+    for (ServerName masterAddr : addrs) {
+      builder.put(masterAddr,
+        ClientMetaService.newStub(rpcClient.createRpcChannel(masterAddr, user, rpcTimeoutMs)));
+    }
+return builder.build();
+  }
+
+  private void createStubsAndComplete(String clusterId,
+    CompletableFuture<ImmutableMap<ServerName, ClientMetaService.Interface>> future) {
+    RpcClient c = RpcClientFactory.createClient(conf, clusterId);
+    ImmutableMap<ServerName, ClientMetaService.Interface> m = createStubs(c, bootstrapNodes);
+    rpcClient = c;
+    addr2Stub = m;
+    future.complete(m);
+  }
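The two-phase flow in the quoted diff (resolve the cluster id first, then build the real client and cache it) can be sketched generically. Everything below is a simplified stand-in for illustration, not the HBase API; the class and field names are hypothetical:

```java
import java.util.concurrent.CompletableFuture;

// Simplified stand-in for the holder's flow: first resolve the cluster id
// asynchronously (in HBase, over a no-auth connection to the bootstrap nodes),
// then create the real client bound to that cluster id and cache it.
class TwoPhaseBootstrap {
  static class Client {
    final String clusterId;

    Client(String clusterId) {
      this.clusterId = clusterId;
    }
  }

  private volatile Client client; // cached after the first successful bootstrap

  CompletableFuture<Client> getClient(CompletableFuture<String> clusterIdFuture) {
    Client c = client;
    if (c != null) {
      // Fast path: the real client was already built, reuse it.
      return CompletableFuture.completedFuture(c);
    }
    return clusterIdFuture.thenApply(id -> {
      Client created = new Client(id);
      client = created;
      return created;
    });
  }
}
```

The real implementation must also guard against concurrent callers racing to create the client; the quoted diff does that with a pending `addr2StubFuture`.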

Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477006153


##
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java:
##
@@ -688,41 +691,75 @@ private void doBadPreambleHandling(String msg) throws IOException {
   }
 
   private void doBadPreambleHandling(String msg, Exception e) throws IOException {
-RpcServer.LOG.warn(msg);
+RpcServer.LOG.warn(msg, e);
 doRespond(getErrorResponse(msg, e));
   }
 
+  private boolean doConnectionRegistryResponse() throws IOException {
+if (!(rpcServer.server instanceof ConnectionRegistryEndpoint)) {
+  // should be in tests or some scenarios where we should not reach here
+  return false;
+}
+// on backup masters, this request may be blocked since we need to fetch it from filesystem,
+// but since it is just backup master, it is not a critical problem
+String clusterId = ((ConnectionRegistryEndpoint) rpcServer.server).getClusterId();

Review Comment:
   It is the request that is blocked, not the backup masters themselves.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477005895


##
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcConnection.java:
##
@@ -303,12 +309,16 @@ protected void initChannel(Channel ch) throws Exception {
   .addListener(new ChannelFutureListener() {
 
 private void succeed(Channel ch) throws IOException {
-  ch.writeAndFlush(connectionHeaderPreamble.retainedDuplicate());
+  if (connectionRegistryCall != null) {
+getConnectionRegistry(ch);
+return;
+  }
+  NettyFutureUtils.safeWriteAndFlush(ch, connectionHeaderPreamble.retainedDuplicate());
   if (useSasl) {
 saslNegotiate(ch);
   } else {
 // send the connection header to server
-ch.write(connectionHeaderWithLength.retainedDuplicate());
+NettyFutureUtils.safeWrite(ch, connectionHeaderWithLength.retainedDuplicate());

Review Comment:
   Just an error-prone warning about 'future return value ignored'; not critical.






Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


Apache9 commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1477005847


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java:
##
@@ -663,4 +664,13 @@ static void setCoprocessorError(RpcController controller, 
Throwable error) {
   controller.setFailed(error.toString());
 }
   }
+
+  static boolean isUnexpectedPreambleHeaderException(IOException e) {
+if (!(e instanceof RemoteException)) {
+  return false;
+}
+RemoteException re = (RemoteException) e;
+return FatalConnectionException.class.getName().equals(re.getClassName())

Review Comment:
   I would also like to check for a specific exception, but the exception is actually thrown from old code...
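The pattern in the quoted diff, matching a remote exception by its transported class name rather than by type, can be illustrated with a toy stand-in. The classes below are hypothetical simplifications, not Hadoop's `RemoteException` or HBase's `FatalConnectionException`:

```java
import java.io.IOException;

// Toy illustration: a remote exception carries the server-side exception class
// as a *string*, so the client matches it by name, not with instanceof on the
// concrete type (which cannot be recovered on this side of the wire).
class RemoteExceptionMatch {
  static class RemoteException extends IOException {
    private final String className;

    RemoteException(String className, String msg) {
      super(msg);
      this.className = className;
    }

    String getClassName() {
      return className;
    }
  }

  static class FatalConnectionException extends IOException {
  }

  static boolean isUnexpectedPreambleHeaderException(IOException e) {
    if (!(e instanceof RemoteException)) {
      return false;
    }
    RemoteException re = (RemoteException) e;
    // Compare against the fully qualified name the server serialized.
    return FatalConnectionException.class.getName().equals(re.getClassName());
  }
}
```

The downside the reviewer alludes to is that a class-name match catches every `FatalConnectionException`, not just the specific preamble failure, because the old server code does not throw a more specific subclass.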






Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


Apache-HBase commented on PR #5654:
URL: https://github.com/apache/hbase/pull/5654#issuecomment-1925186204

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 26s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  0s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 232m 42s |  hbase-server in the patch failed.  |
   |  |   | 254m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5654 |
   | JIRA Issue | HBASE-27687 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a4321290416b 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 
11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c4e332a93b |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/testReport/
 |
   | Max. process+thread count | 5298 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


Apache-HBase commented on PR #5654:
URL: https://github.com/apache/hbase/pull/5654#issuecomment-1925185732

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 229m 18s |  hbase-server in the patch failed.  |
   |  |   | 251m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5654 |
   | JIRA Issue | HBASE-27687 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c78f5e356d1f 5.4.0-169-generic #187-Ubuntu SMP Thu Nov 23 
14:52:28 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c4e332a93b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/testReport/
 |
   | Max. process+thread count | 4701 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


Apache-HBase commented on PR #5654:
URL: https://github.com/apache/hbase/pull/5654#issuecomment-1925039117

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 25s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 41s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 46s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  26m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5654 |
   | JIRA Issue | HBASE-27687 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 2a22aeb75b03 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 
11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c4e332a93b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 81 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/6/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


Apache-HBase commented on PR #5654:
URL: https://github.com/apache/hbase/pull/5654#issuecomment-1925032423

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m  9s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  2s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 234m 53s |  hbase-server in the patch failed.  |
   |  |   | 259m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5654 |
   | JIRA Issue | HBASE-27687 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 70215ae2ce60 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 
11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c4e332a93b |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/testReport/
 |
   | Max. process+thread count | 5485 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


Apache-HBase commented on PR #5654:
URL: https://github.com/apache/hbase/pull/5654#issuecomment-1925031492

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 20s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  4s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 229m 54s |  hbase-server in the patch passed.  
|
   |  |   | 255m 40s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5654 |
   | JIRA Issue | HBASE-27687 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 55c9fc37be44 5.4.0-169-generic #187-Ubuntu SMP Thu Nov 23 
14:52:28 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c4e332a93b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/testReport/
 |
   | Max. process+thread count | 4671 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


Apache-HBase commented on PR #5654:
URL: https://github.com/apache/hbase/pull/5654#issuecomment-1924880958

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 32s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 43s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 49s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  28m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5654 |
   | JIRA Issue | HBASE-27687 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 4bbf90c56de8 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 
11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c4e332a93b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 78 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5654/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5654:
URL: https://github.com/apache/hbase/pull/5654#discussion_r1476808209


##
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/DefaultOperationQuota.java:
##
@@ -139,9 +156,16 @@ public void addMutation(final Mutation mutation) {
* @param numScans  the number of scan requests
*/
   protected void updateEstimateConsumeQuota(int numWrites, int numReads, int numScans) {
-writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100);
-readConsumed = estimateConsume(OperationType.GET, numReads, 100);
-readConsumed += estimateConsume(OperationType.SCAN, numScans, 1000);
+if (useBlockBytesScanned) {
+  writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100);

Review Comment:
   this is a nit, but can you move the writeConsumed out of the if blocks so we 
don't duplicate?
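The suggested refactor can be sketched as follows. This is a hypothetical simplification of `DefaultOperationQuota`, with a stand-in `estimateConsume` and made-up per-mode constants; only the shape of the change (hoisting `writeConsumed` out of the branch) reflects the review comment:

```java
// Hypothetical simplification of updateEstimateConsumeQuota, illustrating the
// reviewer's nit: writeConsumed does not depend on useBlockBytesScanned, so it
// is computed once outside the if/else instead of being duplicated in both arms.
class QuotaEstimateSketch {
  enum OperationType { MUTATE, GET, SCAN }

  private final boolean useBlockBytesScanned;
  long writeConsumed;
  long readConsumed;

  QuotaEstimateSketch(boolean useBlockBytesScanned) {
    this.useBlockBytesScanned = useBlockBytesScanned;
  }

  // Stand-in for the real estimateConsume: request count times an average size.
  private long estimateConsume(OperationType type, int numReqs, long avgSize) {
    return numReqs > 0 ? avgSize * numReqs : 0;
  }

  void updateEstimateConsumeQuota(int numWrites, int numReads, int numScans) {
    // Common to both modes, so computed once.
    writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100);
    if (useBlockBytesScanned) {
      // Reads are charged later from actual block bytes scanned, so only a
      // minimal up-front estimate is taken here (constants are illustrative).
      readConsumed = estimateConsume(OperationType.GET, numReads, 1);
      readConsumed += estimateConsume(OperationType.SCAN, numScans, 1);
    } else {
      readConsumed = estimateConsume(OperationType.GET, numReads, 100);
      readConsumed += estimateConsume(OperationType.SCAN, numScans, 1000);
    }
  }
}
```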






[jira] [Commented] (HBASE-28339) HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to reconnect

2024-02-02 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813841#comment-17813841
 ] 

Viraj Jasani commented on HBASE-28339:
--

Thank you [~andor]!!

> HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to 
> reconnect
> --
>
> Key: HBASE-28339
> URL: https://issues.apache.org/jira/browse/HBASE-28339
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 2.4.17, 3.0.0-beta-1, 2.5.7, 2.7.0
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Abstract base class {{HBaseReplicationEndpoint}}, and therefore 
> {{HBaseInterClusterReplicationEndpoint}}, creates a new ZooKeeper client 
> instance every time an error occurs in communication and it tries to 
> reconnect. This was not a problem with ZooKeeper 3.4.x, because the 
> TGT login thread was a static reference, created only once for all clients 
> in the same JVM. With the upgrade to ZooKeeper 3.5.x the login thread is 
> dedicated to the client instance, so we get a new login thread every time 
> the replication endpoint reconnects.
> {code:java}
> /**
>  * A private method used to re-establish a zookeeper session with a peer 
> cluster.
>  */
> protected void reconnect(KeeperException ke) {
>   if (
> ke instanceof ConnectionLossException || ke instanceof 
> SessionExpiredException
>   || ke instanceof AuthFailedException
>   ) {
> String clusterKey = ctx.getPeerConfig().getClusterKey();
> LOG.warn("Lost the ZooKeeper connection for peer " + clusterKey, ke);
> try {
>   reloadZkWatcher();
> } catch (IOException io) {
>   LOG.warn("Creation of ZookeeperWatcher failed for peer " + clusterKey, 
> io);
> }
>   }
> }{code}
> {code:java}
> /**
>  * Closes the current ZKW (if not null) and creates a new one
>  * @throws IOException If anything goes wrong connecting
>  */
> synchronized void reloadZkWatcher() throws IOException {
>   if (zkw != null) zkw.close();
>   zkw = new ZKWatcher(ctx.getConfiguration(), "connection to cluster: " + 
> ctx.getPeerId(), this);
>   getZkw().registerListener(new PeerRegionServerListener(this));
> } {code}
> If the target cluster of replication is unavailable for some reason, the 
> replication endpoint keeps trying to reconnect to ZooKeeper, constantly 
> destroying and creating login threads, which carpet-bombs the KDC host 
> with login requests.
>  
> I'm not sure how to fix this yet, trying to create a unit test first.
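The issue above has no fix yet; purely as an illustration of one possible direction (a hypothetical sketch, not the eventual HBase patch), a reconnect gate that enforces a minimum interval between client re-creations would keep repeated errors from spawning a fresh login thread on every attempt:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: rate-limit expensive client re-creation so that
// back-to-back reconnect attempts reuse the existing client (and its login
// thread) until a minimum interval has elapsed.
class ReconnectGate {
  private final long minIntervalMs;
  private final AtomicLong lastAttemptMs = new AtomicLong(Long.MIN_VALUE);

  ReconnectGate(long minIntervalMs) {
    this.minIntervalMs = minIntervalMs;
  }

  /** Returns true if a new client may be created now; false to keep the old one. */
  boolean tryAcquire(long nowMs) {
    long last = lastAttemptMs.get();
    if (last != Long.MIN_VALUE && nowMs - last < minIntervalMs) {
      return false; // too soon: skip creating a new client and login thread
    }
    return lastAttemptMs.compareAndSet(last, nowMs);
  }
}
```

A real fix might instead reuse the ZooKeeper client object itself across reconnects, but either way the goal is the same: bound the rate of fresh logins hitting the KDC.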



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5654:
URL: https://github.com/apache/hbase/pull/5654#discussion_r1476729752


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java:
##
@@ -2841,6 +2845,9 @@ public MultiResponse multi(final RpcController rpcc, 
final MultiRequest request)
 spaceQuotaEnforcement);
 }
   } finally {
+if (context != null) {

Review Comment:
   Hmm, actually it may be better to keep this here and instead handle avoiding 
double-counting append/increment, e.g. by passing in an isMulti boolean or 
something






Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5654:
URL: https://github.com/apache/hbase/pull/5654#discussion_r1476610558


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java:
##
@@ -2841,6 +2845,9 @@ public MultiResponse multi(final RpcController rpcc, 
final MultiRequest request)
 spaceQuotaEnforcement);
 }
   } finally {
+if (context != null) {

Review Comment:
   I don't think this works out so well, because of how some of the methods are 
abstracted. For example, `increment` can be called from the single `mutate` 
method and also from `doNonAtomicRegionMutation` (which is called from 
`multi`). So for increments, we'd double-count them.
   
   I might recommend only adding calls to addBlockBytesScanned wherever there 
are existing calls to getBlockBytesScanned() for metrics. As I mentioned in 
another comment, you'll want to move the getBlockBytesScanned() outside the if 
(metrics != null) checks, but otherwise those are probably the best places to 
add the quota calls if possible.






Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5654:
URL: https://github.com/apache/hbase/pull/5654#discussion_r1476513780


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java:
##
@@ -690,6 +690,7 @@ private Result increment(final HRegion region, final 
OperationQuota quota,
 if (metricsRegionServer != null) {
   long blockBytesScanned =
 context != null ? context.getBlockBytesScanned() - 
blockBytesScannedBefore : 0;
+  quota.addBlockBytesScanned(blockBytesScanned);

Review Comment:
   Also, you missed adding this to `append`.






Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5654:
URL: https://github.com/apache/hbase/pull/5654#discussion_r1476496415


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java:
##
@@ -3041,6 +3048,7 @@ private CheckAndMutateResult checkAndMutate(HRegion 
region, OperationQuota quota
   long after = EnvironmentEdgeManager.currentTime();
   long blockBytesScanned =
 context != null ? context.getBlockBytesScanned() - 
blockBytesScannedBefore : 0;
+  quota.addBlockBytesScanned(blockBytesScanned);

Review Comment:
   same here






Re: [PR] HBASE-27687: support consumption of `block bytes scanned` in operation quota [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5654:
URL: https://github.com/apache/hbase/pull/5654#discussion_r1476495524


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java:
##
@@ -690,6 +690,7 @@ private Result increment(final HRegion region, final 
OperationQuota quota,
 if (metricsRegionServer != null) {
   long blockBytesScanned =
 context != null ? context.getBlockBytesScanned() - 
blockBytesScannedBefore : 0;
+  quota.addBlockBytesScanned(blockBytesScanned);

Review Comment:
   This may not be a huge issue, but I think this shouldn't be within the 
`metricsRegionServer != null` check. Maybe move this and the line above it 
outside the check.
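The placement concern can be illustrated with a self-contained sketch. `Quota` and `Metrics` are hypothetical stand-ins for `OperationQuota` and `MetricsRegionServer`; the point is that if the quota call lives inside the metrics null-check, disabling metrics silently disables quota accounting too:

```java
// Hypothetical, simplified stand-ins for OperationQuota / MetricsRegionServer;
// this illustrates the placement issue, it is not HBase code.
public class QuotaPlacementDemo {

    static final class Quota {
        long blockBytesScanned;
        void addBlockBytesScanned(long n) { blockBytesScanned += n; }
    }

    static final class Metrics {
        long updates;
        void updateIncrement() { updates++; }
    }

    /** Problematic placement: quota is only charged when metrics are enabled. */
    static void incrementInsideCheck(Quota quota, Metrics metrics, long scanned) {
        if (metrics != null) {
            quota.addBlockBytesScanned(scanned);
            metrics.updateIncrement();
        }
    }

    /** Suggested placement: quota charged unconditionally; metrics stay optional. */
    static void incrementOutsideCheck(Quota quota, Metrics metrics, long scanned) {
        quota.addBlockBytesScanned(scanned);
        if (metrics != null) {
            metrics.updateIncrement();
        }
    }

    public static void main(String[] args) {
        Quota q1 = new Quota();
        incrementInsideCheck(q1, null, 100);   // metrics disabled: quota missed
        Quota q2 = new Quota();
        incrementOutsideCheck(q2, null, 100);  // quota still charged
        System.out.println(q1.blockBytesScanned + " " + q2.blockBytesScanned); // 0 100
    }
}
```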






[jira] [Commented] (HBASE-25051) DIGEST based auth broken for rpc based ConnectionRegistry

2024-02-02 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813781#comment-17813781
 ] 

Bryan Beaudreault commented on HBASE-25051:
---

[~bharathv] if you are still around, do you have time to help with review of 
the PR?

> DIGEST based auth broken for rpc based ConnectionRegistry
> -
>
> Key: HBASE-25051
> URL: https://issues.apache.org/jira/browse/HBASE-25051
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, security
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Duo Zhang
>Priority: Minor
>  Labels: pull-request-available
>
> DIGEST-MD5 based sasl auth depends on cluster-ID to obtain tokens. With 
> master registry, we have a circular dependency here because master registry 
> needs an rpcClient to talk to masters (and to get cluster ID) and rpc-Client 
> needs a clusterId if DIGEST based auth is configured. Earlier, there was a ZK 
> client that has its own authentication mechanism to fetch the cluster ID.
> HBASE-23330, I think doesn't fully fix the problem. It depends on an active 
> connection to fetch delegation tokens for the MR job and that inherently 
> assumes that the active connection does not use a DIGEST auth.
> It is not clear to me how common it is to use DIGEST based auth in 
> connections.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-25051 DIGEST based auth broken for rpc based ConnectionRegistry [hbase]

2024-02-02 Thread via GitHub


bbeaudreault commented on code in PR #5631:
URL: https://github.com/apache/hbase/pull/5631#discussion_r1476392904


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryRpcStubHolder.java:
##
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import org.apache.hadoop.hbase.ipc.RpcClient;
+import org.apache.hadoop.hbase.ipc.RpcClientFactory;
+import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.IOExceptionSupplier;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel;
+
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.ClientMetaService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.ConnectionRegistryService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.GetConnectionRegistryRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegistryProtos.GetConnectionRegistryResponse;
+
+/**
+ * A class for creating {@link RpcClient} and related stubs used by
+ * {@link AbstractRpcBasedConnectionRegistry}. We need to connect to bootstrap nodes to get the
+ * cluster id first, before creating the final {@link RpcClient} and related stubs.
+ *
+ * See HBASE-25051 for more details.
+ */
+@InterfaceAudience.Private
+class ConnectionRegistryRpcStubHolder implements Closeable {
+
+  private static final Logger LOG =
+    LoggerFactory.getLogger(ConnectionRegistryRpcStubHolder.class);
+
+  private final Configuration conf;
+
+  // used for getting cluster id
+  private final Configuration noAuthConf;
+
+  private final User user;
+
+  private final RpcControllerFactory rpcControllerFactory;
+
+  private final Set<ServerName> bootstrapNodes;
+
+  private final int rpcTimeoutMs;
+
+  private volatile ImmutableMap<ServerName, ClientMetaService.Interface> addr2Stub;
+
+  private volatile RpcClient rpcClient;
+
+  private CompletableFuture<ImmutableMap<ServerName, ClientMetaService.Interface>> addr2StubFuture;
+
+  ConnectionRegistryRpcStubHolder(Configuration conf, User user,
+    RpcControllerFactory rpcControllerFactory, Set<ServerName> bootstrapNodes) {
+    this.conf = conf;
+    if (User.isHBaseSecurityEnabled(conf)) {
+      this.noAuthConf = new Configuration(conf);
+      this.noAuthConf.set(User.HBASE_SECURITY_CONF_KEY, "simple");
+    } else {
+      this.noAuthConf = conf;
+    }
+    this.user = user;
+    this.rpcControllerFactory = rpcControllerFactory;
+    this.bootstrapNodes = Collections.unmodifiableSet(bootstrapNodes);
+    this.rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE,
+      conf.getLong(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT));
+  }
+
+  private ImmutableMap<ServerName, ClientMetaService.Interface> createStubs(RpcClient rpcClient,
+    Collection<ServerName> addrs) {
+    LOG.debug("Going to use new servers to create stubs: {}", addrs);
+    Preconditions.checkNotNull(addrs);
+    ImmutableMap.Builder<ServerName, ClientMetaService.Interface> builder =
+      ImmutableMap.builderWithExpectedSize(addrs.size());
+    for (ServerName masterAddr : addrs) {
+      builder.put(masterAddr,
+        ClientMetaService.newStub(rpcClient.createRpcChannel(masterAddr, user, rpcTimeoutMs)));
+    }
+    return builder.build();
+  }
+
+  private void createStubsAndComplete(String clusterId,
+    CompletableFuture<ImmutableMap<ServerName, ClientMetaService.Interface>> future) {
+    RpcClient c = RpcClientFactory.createClient(conf, clusterId);
+    ImmutableMap<ServerName, ClientMetaService.Interface> m = createStubs(c, bootstrapNodes);
+    rpcClient = c;
+    addr2Stub = m;
+   

[jira] [Commented] (HBASE-28339) HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to reconnect

2024-02-02 Thread Andor Molnar (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813745#comment-17813745
 ] 

Andor Molnar commented on HBASE-28339:
--

It turned out that this is an issue in ZooKeeper, not in HBase. All of HBase's 
ZooKeeper connections are affected, not just replication.

ZK introduced the Login-thread-per-client feature in ZOOKEEPER-2139, but the 
implementation shuts down and creates a new ZooKeeperSaslClient (and Login 
thread) every time it tries to connect. HBase will wait for the answer 
indefinitely, because the ZK request timeout is set to 0 by default.

ClientCnxn.startConnect():
{code:java}
if (clientConfig.isSaslClientEnabled()) {
    try {
        if (zooKeeperSaslClient != null) {
            zooKeeperSaslClient.shutdown();
        }
        zooKeeperSaslClient = new ZooKeeperSaslClient(
            SaslServerPrincipal.getServerPrincipal(addr, clientConfig), clientConfig);
... {code}
I'll close this ticket and upload a fix for ZooKeeper soon.

cc [~zhangduo] 
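The churn described above can be modeled in a few lines. `Login`, `SaslClient`, and `startConnect` are hypothetical stand-ins for ZooKeeper's Login thread, `ZooKeeperSaslClient`, and `ClientCnxn.startConnect()`; no real authentication happens here:

```java
// Simplified model of the reconnect path quoted above: every attempt discards
// the SASL client and builds a new one, so every attempt costs a fresh login.
public class SaslChurnDemo {

    static int loginsCreated = 0;

    static final class Login {
        Login() { loginsCreated++; } // the real Login thread authenticates against the KDC
    }

    static final class SaslClient {
        final Login login = new Login();
        void shutdown() { /* the real client stops its Login thread here */ }
    }

    static SaslClient saslClient;

    /** Mirrors the quoted startConnect(): old client discarded, new one created. */
    static void startConnect() {
        if (saslClient != null) {
            saslClient.shutdown();
        }
        saslClient = new SaslClient();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            startConnect(); // five reconnect attempts
        }
        System.out.println(loginsCreated); // 5: one login (KDC round-trip) per attempt
    }
}
```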

> HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to 
> reconnect
> --
>
> Key: HBASE-28339
> URL: https://issues.apache.org/jira/browse/HBASE-28339
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 2.4.17, 3.0.0-beta-1, 2.5.7, 2.7.0
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Abstract base class {{HBaseReplicationEndpoint}}, and therefore 
> {{HBaseInterClusterReplicationEndpoint}}, creates a new ZooKeeper client 
> instance every time an error occurs in communication and it tries to 
> reconnect. This was not a problem with ZooKeeper 3.4.x versions, because the 
> TGT Login thread was a static reference and only created once for all clients 
> in the same JVM. With the upgrade to ZooKeeper 3.5.x the login thread is 
> dedicated to the client instance, hence we have a new login thread every time 
> the replication endpoint reconnects.
> {code:java}
> /**
>  * A private method used to re-establish a zookeeper session with a peer 
> cluster.
>  */
> protected void reconnect(KeeperException ke) {
>   if (
> ke instanceof ConnectionLossException || ke instanceof 
> SessionExpiredException
>   || ke instanceof AuthFailedException
>   ) {
> String clusterKey = ctx.getPeerConfig().getClusterKey();
> LOG.warn("Lost the ZooKeeper connection for peer " + clusterKey, ke);
> try {
>   reloadZkWatcher();
> } catch (IOException io) {
>   LOG.warn("Creation of ZookeeperWatcher failed for peer " + clusterKey, 
> io);
> }
>   }
> }{code}
> {code:java}
> /**
>  * Closes the current ZKW (if not null) and creates a new one
>  * @throws IOException If anything goes wrong connecting
>  */
> synchronized void reloadZkWatcher() throws IOException {
>   if (zkw != null) zkw.close();
>   zkw = new ZKWatcher(ctx.getConfiguration(), "connection to cluster: " + 
> ctx.getPeerId(), this);
>   getZkw().registerListener(new PeerRegionServerListener(this));
> } {code}
> If the target cluster of replication is unavailable for some reason, the 
> replication endpoint keeps trying to reconnect to ZooKeeper destroying and 
> creating new Login threads constantly which will carpet bomb the KDC host 
> with login requests.
>  
> I'm not sure how to fix this yet, trying to create a unit test first.





[jira] [Resolved] (HBASE-28339) HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to reconnect

2024-02-02 Thread Andor Molnar (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andor Molnar resolved HBASE-28339.
--
Resolution: Invalid

> HBaseReplicationEndpoint creates new ZooKeeper client every time it tries to 
> reconnect
> --
>
> Key: HBASE-28339
> URL: https://issues.apache.org/jira/browse/HBASE-28339
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 2.4.17, 3.0.0-beta-1, 2.5.7, 2.7.0
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Abstract base class {{HBaseReplicationEndpoint}}, and therefore 
> {{HBaseInterClusterReplicationEndpoint}}, creates a new ZooKeeper client 
> instance every time an error occurs in communication and it tries to 
> reconnect. This was not a problem with ZooKeeper 3.4.x versions, because the 
> TGT Login thread was a static reference and only created once for all clients 
> in the same JVM. With the upgrade to ZooKeeper 3.5.x the login thread is 
> dedicated to the client instance, hence we have a new login thread every time 
> the replication endpoint reconnects.
> {code:java}
> /**
>  * A private method used to re-establish a zookeeper session with a peer 
> cluster.
>  */
> protected void reconnect(KeeperException ke) {
>   if (
> ke instanceof ConnectionLossException || ke instanceof 
> SessionExpiredException
>   || ke instanceof AuthFailedException
>   ) {
> String clusterKey = ctx.getPeerConfig().getClusterKey();
> LOG.warn("Lost the ZooKeeper connection for peer " + clusterKey, ke);
> try {
>   reloadZkWatcher();
> } catch (IOException io) {
>   LOG.warn("Creation of ZookeeperWatcher failed for peer " + clusterKey, 
> io);
> }
>   }
> }{code}
> {code:java}
> /**
>  * Closes the current ZKW (if not null) and creates a new one
>  * @throws IOException If anything goes wrong connecting
>  */
> synchronized void reloadZkWatcher() throws IOException {
>   if (zkw != null) zkw.close();
>   zkw = new ZKWatcher(ctx.getConfiguration(), "connection to cluster: " + 
> ctx.getPeerId(), this);
>   getZkw().registerListener(new PeerRegionServerListener(this));
> } {code}
> If the target cluster of replication is unavailable for some reason, the 
> replication endpoint keeps trying to reconnect to ZooKeeper destroying and 
> creating new Login threads constantly which will carpet bomb the KDC host 
> with login requests.
>  
> I'm not sure how to fix this yet, trying to create a unit test first.





[jira] [Commented] (HBASE-28340) Add trust/key store type to ZK TLS settings handled by HBase

2024-02-02 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813686#comment-17813686
 ] 

Duo Zhang commented on HBASE-28340:
---

How does the wildcard match work? Could you please provide something like a POC 
to show it?

I checked the PR for HBASE-28038

https://github.com/apache/hbase/pull/5370/files

Seems we have to add the configs one by one?

Thanks.

> Add trust/key store type to ZK TLS settings handled by HBase
> 
>
> Key: HBASE-28340
> URL: https://issues.apache.org/jira/browse/HBASE-28340
> Project: HBase
>  Issue Type: Sub-task
>  Components: Zookeeper
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Let's add the following settings as well. Last time we missed them.
> {noformat}
> zookeeper.ssl.keyStore.type
> zookeeper.ssl.trustStore.type{noformat}
> Handle them in hbase-site.xml as:
> {noformat}
> hbase.zookeeper.property.ssl.keyStore.type
> hbase.zookeeper.property.ssl.trustStore.type{noformat}





[jira] [Commented] (HBASE-28340) Add trust/key store type to ZK TLS settings handled by HBase

2024-02-02 Thread Andor Molnar (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813643#comment-17813643
 ] 

Andor Molnar commented on HBASE-28340:
--

[~zhangduo] [~bbeaudreault] 

Shall I add a wildcard match for ZooKeeper TLS params like {{zookeeper.ssl.*}} 
instead of adding them one by one?

Any concerns from your side?
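One possible shape for the wildcard, as a sketch only (not existing HBase behavior): instead of whitelisting each `zookeeper.ssl.*` key, forward every entry under the `hbase.zookeeper.property.` prefix (the prefix already used by the keys quoted in this issue) with the prefix stripped. The class and method names below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the wildcard idea: forward every hbase.zookeeper.property.* entry
// to the ZooKeeper client with the prefix stripped, so new ZK settings such as
// ssl.keyStore.type need no per-key handling.
public class ZkPropForwardDemo {

    static final String PREFIX = "hbase.zookeeper.property.";

    static Map<String, String> forward(Map<String, String> hbaseSite) {
        Map<String, String> zkProps = new TreeMap<>();
        for (Map.Entry<String, String> e : hbaseSite.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                // e.g. hbase.zookeeper.property.ssl.keyStore.type -> ssl.keyStore.type
                zkProps.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return zkProps;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hbase.zookeeper.property.ssl.keyStore.type", "PKCS12");
        conf.put("hbase.zookeeper.property.ssl.trustStore.type", "JKS");
        conf.put("hbase.regionserver.handler.count", "30"); // unrelated, not forwarded
        System.out.println(forward(conf)); // {ssl.keyStore.type=PKCS12, ssl.trustStore.type=JKS}
    }
}
```

A trade-off reviewers may weigh: a blanket prefix forwards misspelled or unknown keys silently, whereas the one-by-one approach rejects them by omission.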

> Add trust/key store type to ZK TLS settings handled by HBase
> 
>
> Key: HBASE-28340
> URL: https://issues.apache.org/jira/browse/HBASE-28340
> Project: HBase
>  Issue Type: Sub-task
>  Components: Zookeeper
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Let's add the following settings as well. Last time we missed them.
> {noformat}
> zookeeper.ssl.keyStore.type
> zookeeper.ssl.trustStore.type{noformat}
> Handle them in hbase-site.xml as:
> {noformat}
> hbase.zookeeper.property.ssl.keyStore.type
> hbase.zookeeper.property.ssl.trustStore.type{noformat}


