[jira] [Commented] (HBASE-27778) Incorrect ReplicationSourceWALReader.totalBufferUsed may cause replication hang up

2023-04-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709599#comment-17709599
 ] 

Hudson commented on HBASE-27778:


Results for branch branch-2
[build #786 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Incorrect ReplicationSourceWALReader.totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3, 2.4.16, 2.5.4
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we increase 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of the new entry in 
> {{ReplicationSourceWALReader.addEntryToBatch}}. However, the whole 
> {{WALEntryBatch}} may never be put onto the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of an exception (e.g. an 
> exception thrown by {{WALEntryFilter.filter}} for a following WAL entry), and 
> {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this case. 
> Because {{ReplicationSourceWALReader.totalBufferUsed}} is effectively scoped to 
> the {{ReplicationSourceManager}}, after a long run, replication to all peers 
> may hang up.
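
A minimal, self-contained sketch of the accounting problem (the class and method names below are illustrative stand-ins, not the actual HBASE-27778 patch): the per-entry quota has to be given back whenever the batch never reaches the queue.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: stand-ins for ReplicationSourceWALReader internals.
class WalReaderSketch {
  // shared quota, conceptually scoped to the ReplicationSourceManager
  static final AtomicLong totalBufferUsed = new AtomicLong();

  final BlockingQueue<List<byte[]>> entryBatchQueue = new LinkedBlockingQueue<>();

  void readWalEntries(List<byte[]> entries) throws InterruptedException {
    List<byte[]> batch = new ArrayList<>();
    long acquired = 0;
    try {
      for (byte[] entry : entries) {
        totalBufferUsed.addAndGet(entry.length); // quota taken per entry (addEntryToBatch)
        acquired += entry.length;
        batch.add(filter(entry));                // a filter failure here abandons the batch
      }
      entryBatchQueue.put(batch);                // hand-off succeeded; the shipper releases the quota later
      acquired = 0;
    } finally {
      if (acquired > 0) {
        // the missing step in the bug: release the quota when the batch never
        // reaches entryBatchQueue, otherwise totalBufferUsed only ever grows
        // and eventually stalls replication for every peer
        totalBufferUsed.addAndGet(-acquired);
      }
    }
  }

  private byte[] filter(byte[] entry) {
    return entry; // placeholder for WALEntryFilter.filter
  }
}
{code}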



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27765) Add biggest cell related info into web ui

2023-04-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709598#comment-17709598
 ] 

Hudson commented on HBASE-27765:


Results for branch branch-2
[build #786 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/786/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add biggest cell related info into web ui
> -
>
> Key: HBASE-27765
> URL: https://issues.apache.org/jira/browse/HBASE-27765
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, UI
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> Large cells have some disadvantages, such as not being cacheable or causing 
> memory fragmentation, but currently users cannot easily find them.
> My proposal is to save the length and key of the biggest cell into the 
> fileinfo of each hfile and show them on the web UI in two places:
> 1: Add "Len Of Biggest Cell" to the main page of the regionServer, where we 
> can find out which regions have large cells by sorting.
> 2: Add "Len Of Biggest Cell" and "Key Of Biggest Cell" to the region page, 
> where we can find the exact key and the hfile.
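
A rough sketch of the bookkeeping this proposal implies (the record type and file-info keys are illustrative, not the actual HBASE-27765 implementation): track the largest cell while cells are written, then persist its length and key when the file is closed so the web UI can read them back.
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: SimpleCell and the string keys stand in for HBase's Cell
// and the hfile's file-info entries.
class BiggestCellTracker {
  record SimpleCell(String key, byte[] value) {}

  private long lenOfBiggestCell = 0;
  private String keyOfBiggestCell = "";

  void onCellWritten(SimpleCell cell) {
    // HBase would use the serialized cell length; key + value length keeps the sketch simple
    long len = cell.key().length() + cell.value().length;
    if (len > lenOfBiggestCell) {
      lenOfBiggestCell = len;
      keyOfBiggestCell = cell.key();
    }
  }

  // written once when the hfile is closed, then surfaced on the regionServer/region pages
  Map<String, String> toFileInfo() {
    Map<String, String> fileInfo = new LinkedHashMap<>();
    fileInfo.put("LEN_OF_BIGGEST_CELL", Long.toString(lenOfBiggestCell));
    fileInfo.put("KEY_OF_BIGGEST_CELL", keyOfBiggestCell);
    return fileInfo;
  }

  public static void main(String[] args) {
    BiggestCellTracker tracker = new BiggestCellTracker();
    for (SimpleCell c : List.of(
        new SimpleCell("row1/cf:q1", "small".getBytes(StandardCharsets.UTF_8)),
        new SimpleCell("row2/cf:q1", new byte[5 * 1024 * 1024]))) { // a 5 MB cell
      tracker.onCellWritten(c);
    }
    System.out.println(tracker.toFileInfo());
  }
}
{code}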



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-22978) Online slow response log

2023-04-06 Thread Liangjun He (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liangjun He updated HBASE-22978:

Attachment: Alluxio 开源AI和大数据存储编排平台.pdf
Flink Table Store 流计算存储.pptx

> Online slow response log
> 
>
> Key: HBASE-22978
> URL: https://issues.apache.org/jira/browse/HBASE-22978
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Operability, regionserver, shell
>Affects Versions: 3.0.0-alpha-1, 2.3.0
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: Alluxio 开源AI和大数据存储编排平台.pdf, Flink Table Store 
> 流计算存储.pptx, 
> NamedQueue_Framework_Design_HBASE-24528_HBASE-22978_HBASE-24718.pdf, Screen 
> Shot 2019-10-19 at 2.31.59 AM.png, Screen Shot 2019-10-19 at 2.32.54 AM.png, 
> Screen Shot 2019-10-19 at 2.34.11 AM.png, Screen Shot 2019-10-19 at 2.36.14 
> AM.png
>
>
> Today when an individual RPC exceeds a configurable time bound we log a 
> complaint by way of the logging subsystem. These log lines look like:
> {noformat}
> 2019-08-30 22:10:36,195 WARN [,queue=15,port=60020] ipc.RpcServer - 
> (responseTooSlow):
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)",
> "starttimems":1567203007549,
> "responsesize":6819737,
> "method":"Scan",
> "param":"region { type: REGION_NAME value: 
> \"tsdb,\\000\\000\\215\\f)o\\024\\302\\220\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000\\000\\006\\000\\000\\000\\000\\000\\005\\000\\000",
> "processingtimems":28646,
> "client":"10.253.196.215:41116",
> "queuetimems":22453,
> "class":"HRegionServer"}
> {noformat}
> Unfortunately we often truncate the request parameters, like in the above 
> example. We do this because the human readable representation is verbose, the 
> rate of too slow warnings may be high, and the combination of these things 
> can overwhelm the log capture system. The truncation is unfortunate because 
> it eliminates much of the utility of the warnings. For example, the region 
> name, the start and end keys, and the filter hierarchy are all important 
> clues for debugging performance problems caused by moderate to low 
> selectivity queries or queries made at a high rate.
> We can maintain an in-memory ring buffer of requests that were judged to be 
> too slow in addition to the responseTooSlow logging. The in-memory 
> representation can be complete and compressed. A new admin API and shell 
> command can provide access to the ring buffer for online performance 
> debugging. A modest sizing of the ring buffer will prevent excessive memory 
> utilization for a minor performance debugging feature by limiting the total 
> number of retained records. There is some chance a high rate of requests will 
> cause information on other interesting requests to be overwritten before it 
> can be read. This is the nature of a ring buffer and an acceptable trade off.
> The write request types do not require us to retain all information submitted 
> in the request. We don't need to retain all key-values in the mutation, which 
> may be too large to comfortably retain. We only need a unique set of row 
> keys, or even a min/max range, and total counts.
> The consumers of this information will be debugging tools. We can afford to 
> apply fast compression to ring buffer entries (if codec support is 
> available), something like snappy or zstandard, and decompress on the fly 
> when servicing the retrieval API request. This will minimize the impact of 
> retaining more information about slow requests than we do today.
> This proposal is for retention of request information only, the same 
> information provided by responseTooSlow warnings. Total size of response 
> serialization, possibly also total cell or row counts, should be sufficient 
> to characterize the response.
> Optionally persist new entries added to the ring buffer into one or more 
> files in HDFS in a write-behind manner. If the HDFS writer blocks or falls 
> behind and we are unable to persist an entry before it is overwritten, that 
> is fine. Response too slow logging is best effort. If we can detect this make 
> a note of it in the log file. Provide a tool for parsing, dumping, filtering, 
> and pretty printing the slow logs written to HDFS. The tool and the shell can 
> share and reuse some utility classes and methods for accomplishing that. 
> —
> New shell commands:
> {{get_slow_responses [ <server1>, ... , <serverN> ] [ , \{ <filter params> 
> } ]}}
> Retrieve, decode, and pretty print the contents of the too slow response ring 
> buffer maintained by the given list of servers; or all servers in the cluster 
> if no list is provided. Optionally provide a map of parameters for filtering 
> as additional argument. The TABLE filter, which expects a string 
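
A compact sketch of the in-memory ring buffer described above (the record fields and sizing are illustrative; the shipped feature was built on the NamedQueue framework): fixed capacity, the newest entry overwrites the oldest, and a snapshot is served to the admin API or shell.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a bounded, overwrite-on-full buffer for slow-response records.
class SlowLogRingBuffer {
  record SlowLogRecord(long startTimeMs, long processingTimeMs, String client, String param) {}

  private final SlowLogRecord[] ring;
  private long writeIndex = 0; // monotonically increasing; oldest entries get overwritten

  SlowLogRingBuffer(int capacity) {
    this.ring = new SlowLogRecord[capacity];
  }

  synchronized void add(SlowLogRecord rec) {
    ring[(int) (writeIndex++ % ring.length)] = rec; // newest replaces oldest, no unbounded growth
  }

  // newest-first snapshot for get_slow_responses / the admin retrieval API
  synchronized List<SlowLogRecord> snapshot() {
    List<SlowLogRecord> out = new ArrayList<>();
    long count = Math.min(writeIndex, ring.length);
    for (long i = writeIndex - 1; i >= writeIndex - count; i--) {
      out.add(ring[(int) (i % ring.length)]);
    }
    return out;
  }
}
{code}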

[GitHub] [hbase] 2005hithlj commented on a diff in pull request #5157: HBASE-27775 Use a separate WAL provider for hbase:replication table

2023-04-06 Thread via GitHub


2005hithlj commented on code in PR #5157:
URL: https://github.com/apache/hbase/pull/5157#discussion_r1160384532


##
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/LazyInitializedWALProvider.java:
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.wal;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicReference;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.regionserver.wal.MetricsWAL;
+import org.apache.hadoop.hbase.wal.WALFactory.Providers;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * A lazy initialized WAL provider for holding the WALProvider for some special tables, such as
+ * hbase:meta, hbase:replication, etc.
+ */
+@InterfaceAudience.Private
+class LazyInitializedWALProvider implements Closeable {
+
+  private final WALFactory factory;
+
+  private final String providerId;
+
+  private final String providerConfigName;
+
+  private final Abortable abortable;
+
+  private final AtomicReference<WALProvider> holder = new AtomicReference<>();
+
+  LazyInitializedWALProvider(WALFactory factory, String providerId, String providerConfigName,
+    Abortable abortable) {
+    this.factory = factory;
+    this.providerId = providerId;
+    this.providerConfigName = providerConfigName;
+    this.abortable = abortable;
+  }
+
+  WALProvider getProvider() throws IOException {
+    Configuration conf = factory.getConf();
+    for (;;) {
+      WALProvider provider = this.holder.get();
+      if (provider != null) {
+        return provider;
+      }
+      Class<? extends WALProvider> clz = null;
+      if (conf.get(providerConfigName) == null) {
+        try {
+          clz = conf.getClass(WALFactory.WAL_PROVIDER, Providers.defaultProvider.clazz,
+            WALProvider.class);
+        } catch (Throwable t) {
+          // the WAL provider should be an enum. Proceed
+        }
+      }
+      if (clz == null) {
+        clz = factory.getProviderClass(providerConfigName,
+          conf.get(WALFactory.WAL_PROVIDER, WALFactory.DEFAULT_WAL_PROVIDER));
+      }
+      provider = WALFactory.createProvider(clz);
+      provider.init(factory, conf, providerId, this.abortable);
+      provider.addWALActionsListener(new MetricsWAL());
+      if (this.holder.compareAndSet(null, provider)) {
+        return provider;
+      } else {
+        // someone is ahead of us, close and try again.
+        provider.close();

Review Comment:
   @Apache9 sir.
   After closing here, could the provider retrieved from this.holder.get() (the 
57th line of code) be one that has already been closed?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] 2005hithlj commented on pull request #5157: HBASE-27775 Use a separate WAL provider for hbase:replication table

2023-04-06 Thread via GitHub


2005hithlj commented on PR #5157:
URL: https://github.com/apache/hbase/pull/5157#issuecomment-1499857265

   @Apache9  sir.
   Also, do we need to add a new UT to cover our newly added code?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27760) Release 2.5.4

2023-04-06 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709539#comment-17709539
 ] 

Andrew Kyle Purtell commented on HBASE-27760:
-

RC0 complete, testing almost finished. Sending vote email out tonight. 

> Release 2.5.4
> -
>
> Key: HBASE-27760
> URL: https://issues.apache.org/jira/browse/HBASE-27760
> Project: HBase
>  Issue Type: Umbrella
>  Components: community
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5155: HBASE-27536: add Scan to slow log payload

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5155:
URL: https://github.com/apache/hbase/pull/5155#issuecomment-1499689573

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  4s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 12s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 39s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 14s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 22s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 219m  4s |  hbase-server in the patch failed.  |
   |  |   | 251m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5155 |
   | JIRA Issue | HBASE-27536 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7eaf2b263331 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a370099aaa |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/testReport/
 |
   | Max. process+thread count | 2468 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client hbase-server 
U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5155: HBASE-27536: add Scan to slow log payload

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5155:
URL: https://github.com/apache/hbase/pull/5155#issuecomment-1499679050

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 22s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 24s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 42s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  9s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 212m 18s |  hbase-server in the patch passed.  
|
   |  |   | 240m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5155 |
   | JIRA Issue | HBASE-27536 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c226a79d5fdf 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 
10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a370099aaa |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/testReport/
 |
   | Max. process+thread count | 2636 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client hbase-server 
U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27770) Complete Release 2.4.17

2023-04-06 Thread Tak-Lon (Stephen) Wu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709499#comment-17709499
 ] 

Tak-Lon (Stephen) Wu commented on HBASE-27770:
--

Done

- tag rel/2.4.17 pushed
- JIRA board for release 2.4.17 closed.
- moved dist-dev to dist-release, https://downloads.apache.org/hbase/2.4.17/
- updated release data on https://reporter.apache.org/addrelease.html?hbase

Pending
- waiting for reviewer on HBASE-27772
- wait for/trigger the build for the download page.
- then send the announcement email

> Complete Release 2.4.17
> ---
>
> Key: HBASE-27770
> URL: https://issues.apache.org/jira/browse/HBASE-27770
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>
> # Release the artifacts on repository.apache.org
>  # Move the binaries from dist-dev to dist-release
>  # Add xml to download page
>  # Push tag 2.4.17RC0 as tag rel/2.4.17
>  # Release 2.4.17 on JIRA 
> https://issues.apache.org/jira/projects/HBASE/versions/12352760
>  # Add release data on [https://reporter.apache.org/addrelease.html?hbase]
>  # Send announcement email



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] taklwu commented on pull request #5161: HBASE-27772 Add 2.4.17 to download page

2023-04-06 Thread via GitHub


taklwu commented on PR #5161:
URL: https://github.com/apache/hbase/pull/5161#issuecomment-1499627882

   https://downloads.apache.org/hbase/2.4.17/ also published


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Work started] (HBASE-27770) Complete Release 2.4.17

2023-04-06 Thread Tak-Lon (Stephen) Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27770 started by Tak-Lon (Stephen) Wu.

> Complete Release 2.4.17
> ---
>
> Key: HBASE-27770
> URL: https://issues.apache.org/jira/browse/HBASE-27770
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>
> # Release the artifacts on repository.apache.org
>  # Move the binaries from dist-dev to dist-release
>  # Add xml to download page
>  # Push tag 2.4.17RC0 as tag rel/2.4.17
>  # Release 2.4.17 on JIRA 
> https://issues.apache.org/jira/projects/HBASE/versions/12352760
>  # Add release data on [https://reporter.apache.org/addrelease.html?hbase]
>  # Send announcement email



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] taklwu commented on pull request #5161: HBASE-27772 Add 2.4.17 to download page

2023-04-06 Thread via GitHub


taklwu commented on PR #5161:
URL: https://github.com/apache/hbase/pull/5161#issuecomment-1499585273

   committed to release page 
https://dist.apache.org/repos/dist/release/hbase/2.4.17/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709494#comment-17709494
 ] 

Bryan Beaudreault edited comment on HBASE-27782 at 4/6/23 7:47 PM:
---

I see the only place we provide handling of "exceptionCaught" in our netty 
setup is in NettyRpcDuplexHandler. Looks like we add that handler after the 
connection has been established. Do you think we need to add a handler that 
exists prior to connection establishment? This error here is being thrown early 
in the handshake.

(I am not very well versed in netty, so let me know if this doesn't make sense)


was (Author: bbeaudreault):
I see the only place we call exceptionCaught in our netty setup is in 
NettyRpcDuplexHandler. Looks like we add that handler after the connection has 
been established. Do you think we need to add a handler that exists prior to 
connection establishment? This error here is being thrown early in the 
handshake.

(I am not very well versed in netty, so let me know if this doesn't make sense)
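
For reference, a minimal sketch of what such an early handler might look like (the class name and placement are hypothetical, not a proposed patch): netty delivers pipeline exceptions to any handler that overrides exceptionCaught, so one installed before the connection/handshake completes would catch SSL failures like the one above instead of letting them reach the pipeline tail.
{code:java}
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical terminal handler sitting near the tail of the client pipeline
// before NettyRpcDuplexHandler is added, so handshake-time failures are
// handled instead of falling through to DefaultChannelPipeline's tail.
class EarlyExceptionHandler extends ChannelInboundHandlerAdapter {
  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    // surface/record the failure for pending callers, then close the channel
    ctx.close();
  }
}
{code}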

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did not handle the exception.
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> 

[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709494#comment-17709494
 ] 

Bryan Beaudreault commented on HBASE-27782:
---

I see the only place we call exceptionCaught in our netty setup is in 
NettyRpcDuplexHandler. Looks like we add that handler after the connection has 
been established. Do you think we need to add a handler that exists prior to 
connection establishment? This error here is being thrown early in the 
handshake.

(I am not very well versed in netty, so let me know if this doesn't make sense)

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did not handle the exception.
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> 

[jira] [Updated] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27782:
--
Description: 
I was chaos testing the new native TLS, forcing a certificate to expire and 
fail handshake. The handshake failure properly causes submitted requests to 
fail, but I see the following "unhandled exception" like message:
{code:java}
WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
fired, and it reached at the tail of the pipeline. It usually means the last 
handler in the pipeline did not handle the exception.
org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
        at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
        at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
        at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
certificate_expired
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
        at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
        at 
java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
        at 
java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at 
java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
        at 
java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
        at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
        at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
        at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
        ... 17 common frames omitted{code}

  was:
I was chaos testing the new native TLS, forcing a certificate to expire and 
fail handshake. The handshake failure properly causes submitted requests 

[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709491#comment-17709491
 ] 

Bryan Beaudreault commented on HBASE-27782:
---

I updated to 4.1.4. The issue still persists, but some line numbers have 
changed. I updated the stacktrace in the main description to match.

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did> 
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>         at 
> 

[jira] [Updated] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27782:
--
Description: 
I was chaos testing the new native TLS, forcing a certificate to expire and 
fail handshake. The handshake failure properly causes submitted requests to 
fail, but I see the following "unhandled exception" like message:
{code:java}
WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
fired, and it reached at the tail of the pipeline. It usually means the last 
handler in the pipeline did> 
org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
        at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
        at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
        at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
certificate_expired
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
        at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
        at 
java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
        at 
java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at 
java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
        at 
java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
        at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
        at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
        at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
        ... 17 common frames omitted{code}

  was:
I was chaos testing the new native TLS, forcing a certificate to expire and 
fail handshake. The handshake failure properly causes submitted requests to 
fail, but I see the 

[jira] [Commented] (HBASE-27765) Add biggest cell related info into web ui

2023-04-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709486#comment-17709486
 ] 

Hudson commented on HBASE-27765:


Results for branch master
[build #812 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add biggest cell related info into web ui
> -
>
> Key: HBASE-27765
> URL: https://issues.apache.org/jira/browse/HBASE-27765
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile, UI
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> Large cells have some disadvantages, such as not being cacheable or causing 
> memory fragmentation, but currently users cannot easily find them.
> My proposal is to save the length and key of the biggest cell into the 
> fileinfo of each hfile and show them on the web UI in two places:
> 1: Add "Len Of Biggest Cell" to the main page of the regionServer, where we 
> can find out which regions have large cells by sorting.
> 2: Add "Len Of Biggest Cell" and "Key Of Biggest Cell" to the region page, 
> where we can find the exact key and the hfile.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27662) Correct the line logged with flag hbase.procedure.upgrade-to-2-2 in docs

2023-04-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709487#comment-17709487
 ] 

Hudson commented on HBASE-27662:


Results for branch master
[build #812 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/812/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Correct the line logged with flag hbase.procedure.upgrade-to-2-2 in docs
> 
>
> Key: HBASE-27662
> URL: https://issues.apache.org/jira/browse/HBASE-27662
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yash Dodeja
>Assignee: Yash Dodeja
>Priority: Minor
> Fix For: 3.0.0-alpha-4
>
>
> The https://hbase.apache.org/book.html#upgrade2.2 doc says to search for a 
> "READY TO ROLLING UPGRADE" log line in the master after setting the flag, 
> whereas no such log exists. The actual log line indicating that the procedure 
> store is empty is "UPGRADE OK: All existed procedures have been finished, quit..."



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5155: HBASE-27536: add Scan to slow log payload

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5155:
URL: https://github.com/apache/hbase/pull/5155#issuecomment-1499484155

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 54s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 37s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   4m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 48s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m 36s |  the patch passed  |
   | +1 :green_heart: |  spotless  |   0m 38s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   5m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  51m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5155 |
   | JIRA Issue | HBASE-27536 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux fb36736d5ee2 5.4.0-1093-aws #102~18.04.2-Ubuntu SMP Wed Dec 
7 00:31:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a370099aaa |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-common hbase-client hbase-server 
U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5155/6/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5161: HBASE-27772 Add 2.4.17 to download page

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5161:
URL: https://github.com/apache/hbase/pull/5161#issuecomment-1499460376

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 24s |  master passed  |
   | +1 :green_heart: |  mvnsite  |   6m 43s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 52s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   7m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  spotless  |   0m 57s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 20s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  27m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5161/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5161 |
   | Optional Tests | dupname asflicense mvnsite spotless xml |
   | uname | Linux 37e9015b8b66 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 
10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a370099aaa |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5161/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5161: HBASE-27772 Add 2.4.17 to download page

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5161:
URL: https://github.com/apache/hbase/pull/5161#issuecomment-1499433918

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 59s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5161/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5161 |
   | Optional Tests |  |
   | uname | Linux 598f7659e239 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a370099aaa |
   | Max. process+thread count | 42 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5161/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5161: HBASE-27772 Add 2.4.17 to download page

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5161:
URL: https://github.com/apache/hbase/pull/5161#issuecomment-1499433215

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 53s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5161/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5161 |
   | Optional Tests |  |
   | uname | Linux d6ec4479a12a 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a370099aaa |
   | Max. process+thread count | 33 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5161/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Work started] (HBASE-27772) Add 2.4.17 to download page

2023-04-06 Thread Tak-Lon (Stephen) Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27772 started by Tak-Lon (Stephen) Wu.

> Add 2.4.17 to download page
> ---
>
> Key: HBASE-27772
> URL: https://issues.apache.org/jira/browse/HBASE-27772
> Project: HBase
>  Issue Type: Sub-task
>  Components: website
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>
> need a PR to add rel/2.4.17 to download page.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709435#comment-17709435
 ] 

Bryan Beaudreault commented on HBASE-27782:
---

The line numbers may not be perfect because this is backported into our fork, 
which is based on 2.5.2. Looks like we are using hbase-thirdparty 4.1.3.  Let 
me update that to 4.1.4 and see if it helps the line numbers add up.

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> 

[GitHub] [hbase] Apache-HBase commented on pull request #5157: HBASE-27775 Use a separate WAL provider for hbase:replication table

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5157:
URL: https://github.com/apache/hbase/pull/5157#issuecomment-1499259745

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27109/table_based_rqs Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 34s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  shadedjars  |   4m 46s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  HBASE-27109/table_based_rqs 
passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 28s |  hbase-replication in the patch 
passed.  |
   | -1 :x: |  unit  | 235m 26s |  hbase-server in the patch failed.  |
   |  |   | 261m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5157/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5157 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ac3673b3d7e1 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27109/table_based_rqs / f78fe5994a |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5157/4/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5157/4/testReport/
 |
   | Max. process+thread count | 2859 (vs. ulimit of 3) |
   | modules | C: hbase-replication hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5157/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709434#comment-17709434
 ] 

Duo Zhang commented on HBASE-27782:
---

And the line numbers seem a bit strange: on branch-2 we depend on 
hbase-thirdparty-4.1.4, which bundles netty 4.1.86.Final, and line 280 of 
ByteToMessageDecoder is a blank line there...

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>         at 
> 

[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709432#comment-17709432
 ] 

Duo Zhang commented on HBASE-27782:
---

A bit strange. This is a handshake error; in NettyRpcConnection we register a 
listener to handle the handshake future. And in netty's code

{code}
try {
    final int bytesConsumed = unwrap(ctx, in, packetLength);
    assert bytesConsumed == packetLength || engine.isInboundDone() :
        "we feed the SSLEngine a packets worth of data: " + packetLength +
        " but it only consumed: " + bytesConsumed;
} catch (Throwable cause) {
    handleUnwrapThrowable(ctx, cause);
}
{code}

We catch Throwable there and call handleUnwrapThrowable, so it is unlikely 
that the exception would be thrown out to the pipeline layer...
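
As a rough illustration of where such a failure surfaces on the client side, here is a minimal netty sketch. It uses the plain {{io.netty}} packages rather than the shaded {{org.apache.hbase.thirdparty}} ones, and the initializer and handler names are hypothetical rather than the actual NettyRpcConnection wiring: a listener on the {{SslHandler}} handshake future sees the {{SSLHandshakeException}} and can fail pending calls, while a terminal handler keeps the subsequent {{DecoderException}} from falling off the tail of the pipeline and producing the WARN quoted below.

{code:java}
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslHandler;
import io.netty.util.concurrent.FutureListener;

// Illustrative sketch only; not the actual HBase client pipeline.
public class TlsClientInitializerSketch extends ChannelInitializer<SocketChannel> {
  private final SslContext sslContext;

  public TlsClientInitializerSketch(SslContext sslContext) {
    this.sslContext = sslContext;
  }

  @Override
  protected void initChannel(SocketChannel ch) {
    SslHandler ssl = sslContext.newHandler(ch.alloc());
    ch.pipeline().addLast("ssl", ssl);

    // 1) The handshake future reports the SSLHandshakeException; this is the
    //    natural place to fail pending RPCs and tear the connection down.
    ssl.handshakeFuture().addListener((FutureListener<Channel>) f -> {
      if (!f.isSuccess()) {
        ch.close();
      }
    });

    // 2) A terminal handler so the DecoderException fired afterwards does not
    //    reach the tail of the pipeline and trigger the WARN quoted below.
    ch.pipeline().addLast("tailExceptionHandler", new ChannelInboundHandlerAdapter() {
      @Override
      public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Already surfaced via the handshake future; close quietly.
        ctx.close();
      }
    });
  }
}
{code}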

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> 

[jira] [Resolved] (HBASE-27771) Put up 2.4.17RC0

2023-04-06 Thread Tak-Lon (Stephen) Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak-Lon (Stephen) Wu resolved HBASE-27771.
--
Resolution: Fixed

completed the RC vote

> Put up 2.4.17RC0 
> -
>
> Key: HBASE-27771
> URL: https://issues.apache.org/jira/browse/HBASE-27771
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>
> send out vote for 2.4.17RC0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27782:
--
Labels: TLS  (was: )

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>  Labels: TLS
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236)
>         

[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709399#comment-17709399
 ] 

Bryan Beaudreault commented on HBASE-27782:
---

To clarify, I’m just testing client side right now. It may exist on server side 
too, but I don’t know. Currently we have haproxy terminate ssl on the server 
side so hbase TLS is not enabled there yet. We’ll eventually test server side 
too. 

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)

[jira] [Updated] (HBASE-27778) Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up

2023-04-06 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-27778:
-
Affects Version/s: 2.5.4
   2.4.16

> Incorrect  ReplicationSourceWALReader. totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3, 2.4.16, 2.5.4
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL Entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we add 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of new entry in   
> {{ReplicationSourceWALReader.addEntryToBatch}}, but the whole 
> {{WALEntryBatch}} may not be put to the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of exception(eg. 
> exception thrown by {{WALEntryFilter.filter}} for following WAL Entry), and 
> the {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this 
> case. Because the  {{ReplicationSourceWALReader.totalBufferUsed}}  is 
> actually scoped to {{ReplicationSourceManager}}, after a long run, 
> replication to all peers may hang up.
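
For readers skimming the archive, a minimal, self-contained sketch of the accounting pattern described above may help. The names here ({{BufferQuotaSketch}}, {{acquire}}, {{release}}, {{readBatch}}) are illustrative only and are not the actual HBase implementation; the point is that the quota counter is shared across all sources, so a batch that fails part-way through reading must give back whatever it already acquired.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch, not the real ReplicationSourceWALReader code.
public class BufferQuotaSketch {
  // Shared across all replication sources, like the manager-scoped totalBufferUsed.
  private static final AtomicLong totalBufferUsed = new AtomicLong();

  static void acquire(long size) {
    // Mirrors "add totalBufferUsed by the size of the new entry".
    totalBufferUsed.addAndGet(size);
  }

  static void release(long size) {
    totalBufferUsed.addAndGet(-size);
  }

  // Reads entries into a batch; a filter may throw part-way through.
  static List<String> readBatch(List<String> walEntries) {
    List<String> batch = new ArrayList<>();
    long acquired = 0;
    try {
      for (String entry : walEntries) {
        if (entry.startsWith("bad")) {
          throw new IllegalStateException("filter failed for " + entry);
        }
        acquire(entry.length());
        acquired += entry.length();
        batch.add(entry);
      }
      return batch; // handed to the shipping queue, which releases the quota later
    } catch (RuntimeException e) {
      // Without this release, the quota acquired for the partial batch leaks;
      // once the shared counter sticks above the limit, every source stalls.
      release(acquired);
      throw e;
    }
  }

  public static void main(String[] args) {
    try {
      readBatch(List.of("entry-1", "entry-2", "bad-entry"));
    } catch (RuntimeException expected) {
      // expected for the demo
    }
    System.out.println("totalBufferUsed after failed batch: " + totalBufferUsed.get());
  }
}
{code}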



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709395#comment-17709395
 ] 

Bryan Beaudreault commented on HBASE-27782:
---

Thanks for looking, I was probably going to eventually ask your advice:)

 

client side

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343)
>         at 
> 

[jira] [Commented] (HBASE-27778) Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up

2023-04-06 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709394#comment-17709394
 ] 

chenglei commented on HBASE-27778:
--

The problem also exists in 2.5 and 2.4; I will open new PRs for 2.5 and 2.4.

> Incorrect  ReplicationSourceWALReader. totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL Entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we add 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of new entry in   
> {{ReplicationSourceWALReader.addEntryToBatch}}, but the whole 
> {{WALEntryBatch}} may not be put to the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of exception(eg. 
> exception thrown by {{WALEntryFilter.filter}} for following WAL Entry), and 
> the {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this 
> case. Because the  {{ReplicationSourceWALReader.totalBufferUsed}}  is 
> actually scoped to {{ReplicationSourceManager}}, after a long run, 
> replication to all peers may hang up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709391#comment-17709391
 ] 

Duo Zhang commented on HBASE-27782:
---

This is on server side or client side?

> During SSL handshake error, netty complains that exceptionCaught() was not 
> handled
> --
>
> Key: HBASE-27782
> URL: https://issues.apache.org/jira/browse/HBASE-27782
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Priority: Major
>
> I was chaos testing the new native TLS, forcing a certificate to expire and 
> fail handshake. The handshake failure properly causes submitted requests to 
> fail, but I see the following "unhandled exception" like message:
> {code:java}
> WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
> fired, and it reached at the tail of the pipeline. It usually means the last 
> handler in the pipeline did>
> org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
> certificate_expired
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>         at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>         at 
> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
>         at 
> java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
>         at 
> java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
>         at 
> java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
>         at 
> java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
>         at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>         at 
> org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343)
>         at 
> 

[jira] [Comment Edited] (HBASE-27778) Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up

2023-04-06 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709387#comment-17709387
 ] 

chenglei edited comment on HBASE-27778 at 4/6/23 1:53 PM:
--

Pushed to 2.6+, thanks [~zhangduo] and [~Xiaolin Ha] for reviewing.


was (Author: comnetwork):
Pushed to 2.6+, thanks [~zhangduo] and [~Xiaolin Ha] for review.

> Incorrect  ReplicationSourceWALReader. totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL Entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we add 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of new entry in   
> {{ReplicationSourceWALReader.addEntryToBatch}}, but the whole 
> {{WALEntryBatch}} may not be put to the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of exception(eg. 
> exception thrown by {{WALEntryFilter.filter}} for following WAL Entry), and 
> the {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this 
> case. Because the  {{ReplicationSourceWALReader.totalBufferUsed}}  is 
> actually scoped to {{ReplicationSourceManager}}, after a long run, 
> replication to all peers may hang up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HBASE-27778) Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up

2023-04-06 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709387#comment-17709387
 ] 

chenglei edited comment on HBASE-27778 at 4/6/23 1:52 PM:
--

Pushed to 2.6+, thanks [~zhangduo] and [~Xiaolin Ha] for review.


was (Author: comnetwork):
Pushed to 2.6+, thanks [~zhangduo] and [~Xiaolin Ha]

> Incorrect  ReplicationSourceWALReader. totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL Entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we add 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of new entry in   
> {{ReplicationSourceWALReader.addEntryToBatch}}, but the whole 
> {{WALEntryBatch}} may not be put to the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of exception(eg. 
> exception thrown by {{WALEntryFilter.filter}} for following WAL Entry), and 
> the {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this 
> case. Because the  {{ReplicationSourceWALReader.totalBufferUsed}}  is 
> actually scoped to {{ReplicationSourceManager}}, after a long run, 
> replication to all peers may hang up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27778) Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up

2023-04-06 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709387#comment-17709387
 ] 

chenglei commented on HBASE-27778:
--

Pushed to 2.6+, thanks [~zhangduo] and [~Xiaolin Ha]

> Incorrect  ReplicationSourceWALReader. totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL Entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we add 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of new entry in   
> {{ReplicationSourceWALReader.addEntryToBatch}}, but the whole 
> {{WALEntryBatch}} may not be put to the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of exception(eg. 
> exception thrown by {{WALEntryFilter.filter}} for following WAL Entry), and 
> the {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this 
> case. Because the  {{ReplicationSourceWALReader.totalBufferUsed}}  is 
> actually scoped to {{ReplicationSourceManager}}, after a long run, 
> replication to all peers may hang up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5158:
URL: https://github.com/apache/hbase/pull/5158#issuecomment-1499090445

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 23s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 14s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 213m 13s |  hbase-server in the patch passed.  
|
   |  |   | 236m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5158 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 38fe32e33585 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f27823e62d |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/testReport/
 |
   | Max. process+thread count | 2423 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5158:
URL: https://github.com/apache/hbase/pull/5158#issuecomment-1499091003

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 54s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 55s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 212m 49s |  hbase-server in the patch passed.  
|
   |  |   | 236m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5158 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 85f7862fbc60 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 
13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f27823e62d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/testReport/
 |
   | Max. process+thread count | 2536 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27778) Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up

2023-04-06 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-27778:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Incorrect  ReplicationSourceWALReader. totalBufferUsed may cause replication 
> hang up
> 
>
> Key: HBASE-27778
> URL: https://issues.apache.org/jira/browse/HBASE-27778
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.6.0, 3.0.0-alpha-3
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> When we read a new WAL Entry in 
> {{ReplicationSourceWALReader.readWALEntries}}, we increase 
> {{ReplicationSourceWALReader.totalBufferUsed}} by the size of the new entry 
> in {{ReplicationSourceWALReader.addEntryToBatch}}, but the whole 
> {{WALEntryBatch}} may never be put onto the 
> {{ReplicationSourceWALReader.entryBatchQueue}} because of an exception (e.g. 
> an exception thrown by {{WALEntryFilter.filter}} for a following WAL Entry), 
> and {{ReplicationSourceWALReader.totalBufferUsed}} is not decreased in this 
> case. Because {{ReplicationSourceWALReader.totalBufferUsed}} is actually 
> scoped to {{ReplicationSourceManager}}, after a long run, replication to all 
> peers may hang up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] comnetwork merged pull request #5160: HBASE-27778 Incorrect ReplicationSourceWALReader. totalBufferUsed may…

2023-04-06 Thread via GitHub


comnetwork merged PR #5160:
URL: https://github.com/apache/hbase/pull/5160


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] comnetwork opened a new pull request, #5160: HBASE-27778 Incorrect ReplicationSourceWALReader. totalBufferUsed may…

2023-04-06 Thread via GitHub


comnetwork opened a new pull request, #5160:
URL: https://github.com/apache/hbase/pull/5160

   … cause replication hang up


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] comnetwork merged pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


comnetwork merged PR #5158:
URL: https://github.com/apache/hbase/pull/5158


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] comnetwork commented on a diff in pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


comnetwork commented on code in PR #5158:
URL: https://github.com/apache/hbase/pull/5158#discussion_r1159777996


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryBatch.java:
##
@@ -52,6 +52,9 @@ class WALEntryBatch {
   private Map<String, Long> lastSeqIds = new HashMap<>();
   // indicate that this is the end of the current file
   private boolean endOfFile;
+  // indicate the buffer size used, which is added to
+  // ReplicationSourceWALReader.totalBufferUsed
+  private long usedBufferSize;

Review Comment:
   @Apache9, yes, I plan to open a new PR to centralize the totalBufferUsed-related 
code and eliminate the duplicated code.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-27782) During SSL handshake error, netty complains that exceptionCaught() was not handled

2023-04-06 Thread Bryan Beaudreault (Jira)
Bryan Beaudreault created HBASE-27782:
-

 Summary: During SSL handshake error, netty complains that 
exceptionCaught() was not handled
 Key: HBASE-27782
 URL: https://issues.apache.org/jira/browse/HBASE-27782
 Project: HBase
  Issue Type: Bug
Reporter: Bryan Beaudreault


I was chaos testing the new native TLS, forcing a certificate to expire and the 
handshake to fail. The handshake failure properly causes submitted requests to 
fail, but I also see the following "unhandled exception"-style message:
{code:java}
WARN  o.a.h.t.i.n.c.DefaultChannelPipeline - An exceptionCaught() event was 
fired, and it reached at the tail of the pipeline. It usually means the last 
handler in the pipeline did not handle the exception.
org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: 
javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_expired
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
        at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
        at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
        at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
        at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
        at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
        at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: 
certificate_expired
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
        at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:358)
        at 
java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
        at 
java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at 
java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)
        at 
java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)
        at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)
        at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)
        at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236)
        at 
org.apache.hbase.thirdparty.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:519)
        at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:458)
        ... 17 common frames omitted {code}
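
One way to stop netty raising this warning would be a terminal handler that owns 
exceptionCaught for the connection; a minimal sketch against plain netty is below. The 
handler and logger names are illustrative, and whether simply closing the channel is the 
right reaction for HBase is exactly what this issue needs to decide.

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Installed as the last handler in the pipeline so SSL handshake failures are
// logged once and the channel is closed, instead of falling through to the
// pipeline tail and triggering the "exceptionCaught() was not handled" warning.
public class SslErrorTailHandler extends ChannelInboundHandlerAdapter {
  private static final Logger LOG = LoggerFactory.getLogger(SslErrorTailHandler.class);

  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    LOG.warn("Closing {} due to unrecoverable error", ctx.channel(), cause);
    ctx.close();
  }
}
{code}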



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

[GitHub] [hbase] rmdmattingly commented on a diff in pull request #5155: HBASE-27536: add Scan to slow log payload

2023-04-06 Thread via GitHub


rmdmattingly commented on code in PR #5155:
URL: https://github.com/apache/hbase/pull/5155#discussion_r1159755698


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/OnlineLogRecord.java:
##
@@ -52,6 +59,15 @@ final public class OnlineLogRecord extends LogEntry {
 if (slowLogPayload.getMultiServiceCalls() == 0) {
   jsonObj.remove("multiServiceCalls");
 }
+if (slowLogPayload.getScan().isPresent()) {
+  try {
+jsonObj.add("scan", 
JsonParser.parseString(slowLogPayload.getScan().get().toJSON()));
+  } catch (IOException e) {
+LOG.warn("Failed to serialize scan {}", 
slowLogPayload.getScan().get(), e);

Review Comment:
   I believe it would manifest as the toString representation. For example, 
this:
   ```java
   import org.apache.hadoop.hbase.client.Scan;
   import org.apache.hadoop.hbase.util.Bytes;

   // LOG is an org.slf4j.Logger field on the enclosing class
   public static void main(String[] args) {
     Scan scan = new Scan();
     scan.withStartRow(Bytes.toBytes("1234"));
     LOG.info("Failed to serialize scan {}", scan);
   }
   ```
   produces this:
   ```
   Failed to serialize scan 
{"startRow":"1234","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":0,"maxResultSize":"-1","families":{},"caching":-1,"maxVersions":1,"timeRange":["0","9223372036854775807"]}
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5157: HBASE-27775 Use a separate WAL provider for hbase:replication table

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5157:
URL: https://github.com/apache/hbase/pull/5157#issuecomment-1498968862

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-27109/table_based_rqs Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 46s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  compile  |   3m 21s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  spotless  |   0m 58s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  HBASE-27109/table_based_rqs 
passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 15s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  22m  6s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   1m  6s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 30s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5157/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5157 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 034e447686f5 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 
13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27109/table_based_rqs / f78fe5994a |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-replication hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5157/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on pull request #4979: HBASE-27574 Implement ClusterManager interface for Kubernetes

2023-04-06 Thread via GitHub


bbeaudreault commented on PR #4979:
URL: https://github.com/apache/hbase/pull/4979#issuecomment-1498948882

   > And I'm a bit interested in how you guys manage datanodes or namenodes on 
K8s? They have local storage, so if you delete the pod and launch a new one 
somewhere else, the data will be lost
   
   We currently don't run DataNodes in k8s. For NameNodes we use a StatefulSet 
backed by EBS. 
   
   For DataNodes we don't want to use EBS, too expensive. When we eventually 
get there, we plan to use FlexVolumes to basically provision space on 
particular SSD-backed kube nodes. So if a pod restarts, it would go back to the 
same node if it's available. If not, it would go elsewhere and lose its data, 
but this is how things work outside k8s and is handled by HDFS replication. 
Sadly I can't give more details than this right now because it's been on hold 
for a while so we can work on other things. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on pull request #4979: HBASE-27574 Implement ClusterManager interface for Kubernetes

2023-04-06 Thread via GitHub


bbeaudreault commented on PR #4979:
URL: https://github.com/apache/hbase/pull/4979#issuecomment-1498942952

   Personally I prefer to use Exec API for this. It seems somewhat artificial 
to try reducing the pod count just for the sake of it. 
   
   IMO chaos monkey is for testing both hbase handling and deployment 
automation. Outside k8s, if you stop a regionserver process you better have 
monit or sysctl to start it back up. In kubernetes, this is handled for you. 
   
   So if chaos sends a kill -9, it's doing a good job of testing how both 
systems handle a regionserver dying. Maybe in kubernetes you have an init 
container which gets in the way of the pod gracefully handling a regionserver 
container dying. Chaos would expose that. 
   
   Otherwise I think kill -stop is an important feature and I wouldn't want to 
bury it in an option. So that's another reason just replacing ssh with Exec api 
would be nice. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4979: HBASE-27574 Implement ClusterManager interface for Kubernetes

2023-04-06 Thread via GitHub


Apache9 commented on PR #4979:
URL: https://github.com/apache/hbase/pull/4979#issuecomment-1498855379

   And I'm a bit interested in how you guys manage datanodes or namenodes on 
K8s? They have local storage, so if you delete the pod and launch a new one 
somewhere else, the data will be lost...
   
   Use a StatefulSet?
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4979: HBASE-27574 Implement ClusterManager interface for Kubernetes

2023-04-06 Thread via GitHub


Apache9 commented on PR #4979:
URL: https://github.com/apache/hbase/pull/4979#issuecomment-1498851807

   > 
   
   Checked the API description
   
   
https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/CoreV1Api.md#deleteNamespacedPod
   
   The deleteNamespacedPod method has a gracePeriodSeconds parameter; 0 means 
delete immediately, so I think it could achieve what we want.
   
   But what concerns me more is how to correctly support stop and kill: in K8s, 
if you do not change the replica count, the framework will launch a new pod 
right after you delete a pod...
   
   I think this is exactly what we want, but it seems it is still not fully 
implemented yet...
   https://github.com/kubernetes/kubernetes/issues/45509
   
   And we also need to change some semantics for the cluster manager. For 
example, on K8s it is useless to specify a hostname when starting a new region 
server, so maybe we could change the API to "startNewRegionServer"; even for a 
non-k8s environment, I do not think we must start a region server on a given 
host, we just need to start a new one, right?
   
   And for stop, kill and restart, maybe we could also change the semantics so 
they would fit both k8s and non-k8s environments. For example, we remove stop 
and kill, only leave restart there, but provide a flag to indicate how to stop 
the region server, i.e. a graceful shutdown or a force kill. And we provide 
another API called reduceRegionServerNumber, sketched below. For a K8s 
environment it is just an API call, and for a non-k8s environment we can 
randomly select a region server to stop. This is not perfect but I think it 
could fit most of our test scenarios.
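   
   For illustration, a rough sketch of what that reshaped interface could look like; the 
names (StopMode, startNewRegionServer, reduceRegionServerNumber) are just the ones floated 
above, not an existing HBase API:
   
   ```java
   public interface ClusterManagerSketch {
   
     enum StopMode { GRACEFUL_SHUTDOWN, FORCE_KILL }
   
     // On K8s the hostname is irrelevant, so the API only asks for "one more" region server.
     void startNewRegionServer() throws Exception;
   
     // Single restart entry point; the flag says how the old process should go away.
     void restartRegionServer(String hostname, StopMode mode) throws Exception;
   
     // K8s: scale the replica count down; non-k8s: pick a random region server and stop it.
     void reduceRegionServerNumber(int delta, StopMode mode) throws Exception;
   }
   ```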
   
   What do you guys think?
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


Apache-HBase commented on PR #5158:
URL: https://github.com/apache/hbase/pull/5158#issuecomment-1498845562

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 14s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 40s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 12s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 40s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  37m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5158 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 4c45becb68de 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f27823e62d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 81 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5158/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #5121: HBASE-27733

2023-04-06 Thread via GitHub


Apache9 commented on PR #5121:
URL: https://github.com/apache/hbase/pull/5121#issuecomment-1498806605

   @wchevreuil Any other concerns that would block us from merging this PR?
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on a diff in pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


Apache9 commented on code in PR #5158:
URL: https://github.com/apache/hbase/pull/5158#discussion_r1159573079


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryBatch.java:
##
@@ -52,6 +52,9 @@ class WALEntryBatch {
   private Map<String, Long> lastSeqIds = new HashMap<>();
   // indicate that this is the end of the current file
   private boolean endOfFile;
+  // indicate the buffer size used, which is added to
+  // ReplicationSourceWALReader.totalBufferUsed
+  private long usedBufferSize;

Review Comment:
   Since we have recorded the size here, I think we could use it directly in 
many places (especially in ReplicationSourceShipper), so we do not need to 
calculate the size of the WALEntryBatch again?
   
   Anyway, this can be a follow-on issue.
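   
   A compact sketch of that idea, with placeholder types standing in for WALEntryBatch and 
the manager-scoped counter (the getter name here is hypothetical wiring, not the final API):
   
   ```java
   import java.util.concurrent.atomic.AtomicLong;
   
   // Placeholder for WALEntryBatch: the reader records once what it charged.
   class EntryBatchSketch {
     private final long usedBufferSize;
     EntryBatchSketch(long usedBufferSize) { this.usedBufferSize = usedBufferSize; }
     long getUsedBufferSize() { return usedBufferSize; }
   }
   
   // Placeholder for the shipper side: release exactly what the reader charged,
   // without walking the batch entries to recompute their sizes.
   class ShipperSketch {
     private final AtomicLong totalBufferUsed = new AtomicLong();
   
     void onBatchShippedOrDropped(EntryBatchSketch batch) {
       totalBufferUsed.addAndGet(-batch.getUsedBufferSize());
     }
   }
   ```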



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] comnetwork commented on a diff in pull request #5158: HBASE-27778 Incorrect ReplicationSourceWALReader.totalBufferUsed may …

2023-04-06 Thread via GitHub


comnetwork commented on code in PR #5158:
URL: https://github.com/apache/hbase/pull/5158#discussion_r1159563458


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java:
##
@@ -145,43 +145,54 @@ public void run() {
 source.getWALFileLengthProvider(), source.getSourceMetrics(), 
walGroupId)) {
 while (isReaderRunning()) { // loop here to keep reusing stream while 
we can
   batch = null;
-  if (!source.isPeerEnabled()) {
-Threads.sleep(sleepForRetries);
-continue;
+  boolean successAddToQueue = false;
+  try {
+if (!source.isPeerEnabled()) {
+  Threads.sleep(sleepForRetries);
+  continue;
+}
+if (!checkQuota()) {
+  continue;
+}
+Path currentPath = entryStream.getCurrentPath();
+WALEntryStream.HasNext hasNext = entryStream.hasNext();
+if (hasNext == WALEntryStream.HasNext.NO) {
+  replicationDone();
+  return;
+}
+// first, check if we have switched a file, if so, we need to 
manually add an EOF entry
+// batch to the queue
+if (currentPath != null && switched(entryStream, currentPath)) {
+  entryBatchQueue.put(WALEntryBatch.endOfFile(currentPath));
+  continue;
+}
+if (hasNext == WALEntryStream.HasNext.RETRY) {
+  // sleep and retry
+  sleepMultiplier = sleep(sleepMultiplier);
+  continue;
+}
+if (hasNext == WALEntryStream.HasNext.RETRY_IMMEDIATELY) {
+  // retry immediately, this usually means we have switched a file
+  continue;
+}
+// below are all for hasNext == YES
+batch = createBatch(entryStream);
+readWALEntries(entryStream, batch);

Review Comment:
   @Apache9, thank you very much for pointing it out. I have narrowed the scope 
and also found that the declaration of the 'batch' variable could be moved into 
the body of the loop.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27109) Move replication queue storage from zookeeper to a separated HBase table

2023-04-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709245#comment-17709245
 ] 

Hudson commented on HBASE-27109:


Results for branch HBASE-27109/table_based_rqs
[build #57 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/HBASE-27109%252Ftable_based_rqs/57/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/HBASE-27109%252Ftable_based_rqs/57/General_20Nightly_20Build_20Report/]




(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/HBASE-27109%252Ftable_based_rqs/57/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/HBASE-27109%252Ftable_based_rqs/57/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Move replication queue storage from zookeeper to a separated HBase table
> 
>
> Key: HBASE-27109
> URL: https://issues.apache.org/jira/browse/HBASE-27109
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> This is a more specific issue based on the works which are already done in 
> HBASE-15867.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)