[jira] [Commented] (HBASE-27251) Rolling back from 2.5.0-SNAPSHOT to 2.4.13 fails due to `File does not exist: /hbase/MasterData/data/master/store/.initialized/.regioninfo`
[ https://issues.apache.org/jira/browse/HBASE-27251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17574490#comment-17574490 ]

Hudson commented on HBASE-27251:
--------------------------------

Results for branch branch-2.4 [build #401 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/401/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/401/General_20Nightly_20Build_20Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/401/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/401/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/401/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Rolling back from 2.5.0-SNAPSHOT to 2.4.13 fails due to `File does not exist:
> /hbase/MasterData/data/master/store/.initialized/.regioninfo`
> ----------------------------------------------------------------------------
>
> Key: HBASE-27251
> URL: https://issues.apache.org/jira/browse/HBASE-27251
> Project: HBase
> Issue Type: Bug
> Components: master
> Affects Versions: 2.5.0
> Reporter: Nick Dimiduk
> Assignee: Huaxiang Sun
> Priority: Critical
> Fix For: 2.4.14
>
>
> I was doing some perf testing with builds of 2.5.0. I rolled back to 2.4.13
> and the master won't start.
> Stack trace ends in,
> {noformat}
> java.io.FileNotFoundException: File does not exist: /hbase/MasterData/data/master/store/.initialized/.regioninfo
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:86)
> 	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:156)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2089)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:762)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:458)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
> 	at java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId/startTime and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Description: 
*Question:*
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs.

The client can even wrap the startTime in the request, so that the server can directly collect time at the network level (netty framework processing time, network transmission time, and so on), which makes it easy to monitor some network-level metrics about client requests.

was:
Question: In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed. A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs. The client can even wrap the startTime in the request, so that the server can directly collect time at the network level (netty framework processing time, network transmission time, and so on) for some network-level trace monitoring purposes.

> In trace log mode, the client does not print callId/startTime and the server
> does not print receiveTime
> ----------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> *Question:*
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.
>
> A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs.
>
> The client can even wrap the startTime in the request, so that the server can directly collect time at the network level (netty framework processing time, network transmission time, and so on), which makes it easy to monitor some network-level metrics about client requests.

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
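The attribution the proposal above is after can be sketched as a toy model (the field names callId/startTime/receiveTime are taken from the proposal; this is illustrative code, not HBase's actual RPC implementation): the client stamps a call with callId and startTime, the server stamps receiveTime on arrival, and the differences split total latency into a client-to-server leg versus server-side processing.

```python
import time

def client_send(call_id, handler):
    """Toy RPC round trip illustrating the proposed trace fields.

    Caveat mirrored from the proposal: in a real distributed setup the
    client and server clocks differ, so receiveTime - startTime is only
    meaningful when clocks are synchronized (or when a monotonic source
    is shared, as in this single-process sketch).
    """
    start_time = time.monotonic()  # client would log callId + startTime
    receive_time, finish_time = handler(call_id, start_time)
    end_time = time.monotonic()
    return {
        "callId": call_id,
        # client -> server leg: network transmission + server-side queueing
        "sendLatency": receive_time - start_time,
        # time spent inside the server handler itself
        "serverProcessing": finish_time - receive_time,
        # total latency as observed by the client
        "totalLatency": end_time - start_time,
    }

def server_handler(call_id, start_time):
    receive_time = time.monotonic()  # server would log receiveTime
    time.sleep(0.01)                 # stand-in for real request handling
    return receive_time, time.monotonic()

metrics = client_send(42, server_handler)
print(metrics["callId"], round(metrics["serverProcessing"], 3))
```

With both timestamps logged, a high `totalLatency` paired with a low `serverProcessing` points at the network or queueing leg rather than the server, which is exactly the distinction the issue wants to make observable.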
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId/startTime and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Description: 
Question:
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs.

The client can even wrap the startTime in the request, so that the server can directly collect time at the network level (netty framework processing time, network transmission time, and so on) for some network-level trace monitoring purposes.

was:
Question: In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed. A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs

> In trace log mode, the client does not print callId/startTime and the server
> does not print receiveTime
> ----------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> Question:
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.
>
> A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs.
>
> The client can even wrap the startTime in the request, so that the server can directly collect time at the network level (netty framework processing time, network transmission time, and so on) for some network-level trace monitoring purposes.

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId/startTime and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Description: 
Question:
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs

was:
Question: In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed. A possible solution is that the client prints

> In trace log mode, the client does not print callId/startTime and the server
> does not print receiveTime
> ----------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> Question:
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.
>
> A possible solution is for the client to print the callId (so we can associate the corresponding request on the server) and the startTime of the request, and for the server to print the receiveTime of the request. In this way, we can find out at what stage the problem occurs

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673:
URL: https://github.com/apache/hbase/pull/4673#issuecomment-1203432506

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 37s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 9s | master passed |
| +1 :green_heart: | compile | 1m 24s | master passed |
| +1 :green_heart: | shadedjars | 3m 55s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 39s | master passed |
| -0 :warning: | patch | 5m 45s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 6s | the patch passed |
| +1 :green_heart: | compile | 1m 26s | the patch passed |
| +1 :green_heart: | javac | 1m 26s | the patch passed |
| +1 :green_heart: | shadedjars | 3m 54s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 40s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 387m 13s | root in the patch passed. |
| | | 408m 25s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4673 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 21e27f2a781a 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d734acc00e |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/testReport/ |
| Max. process+thread count | 4805 (vs. ulimit of 3) |
| modules | C: hbase-server . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId/startTime and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Description: 
Question:
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

A possible solution is that the client prints

was:
Question: In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed. A possible solution is that the

> In trace log mode, the client does not print callId/startTime and the server
> does not print receiveTime
> ----------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> Question:
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.
>
> A possible solution is that the client prints

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId/startTime and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Summary: In trace log mode, the client does not print callId/startTime and the server does not print receiveTime  (was: In trace log mode, the client does not print callId and the server does not print receiveTime)

> In trace log mode, the client does not print callId/startTime and the server
> does not print receiveTime
> ----------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> Question:
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.
>
> A possible solution is that the

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Description: 
Question:
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

A possible solution is that the

was:
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

> In trace log mode, the client does not print callId and the server does not
> print receiveTime
> ---------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> Question:
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.
>
> A possible solution is that the

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Description: 
In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

> In trace log mode, the client does not print callId and the server does not
> print receiveTime
> ---------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor
>
> In some scenarios in the production environment, users find that the request latency is high on the client side, but it is not high on the server side (both measured through the internal monitoring system). In this inconsistent scenario, we want to know whether the problem lies with the client or with the current network. Therefore, we want to know the time when the server receives the request. However, in trace mode, no relevant information is printed.

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache-HBase commented on pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
Apache-HBase commented on PR #4666:
URL: https://github.com/apache/hbase/pull/4666#issuecomment-1203422064

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 38s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 14s | master passed |
| +1 :green_heart: | compile | 1m 31s | master passed |
| +1 :green_heart: | shadedjars | 4m 1s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 5s | master passed |
| -0 :warning: | patch | 6m 27s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 6s | the patch passed |
| +1 :green_heart: | compile | 1m 27s | the patch passed |
| +1 :green_heart: | javac | 1m 27s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 3s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 4s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 326m 38s | root in the patch failed. |
| | | 348m 51s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4666 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 1fb16e4cbedb 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d734acc00e |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/testReport/ |
| Max. process+thread count | 2192 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-client hbase-server . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-27268) In Trace log mode, the client does not print callId and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Summary: In Trace log mode, the client does not print callId and the server does not print receiveTime  (was: In Trace log mode, the client does not print CallId and the server does not print receiveTime)

> In Trace log mode, the client does not print callId and the server does not
> print receiveTime
> ---------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27268) In trace log mode, the client does not print callId and the server does not print receiveTime
[ https://issues.apache.org/jira/browse/HBASE-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuyaogai updated HBASE-27268:
------------------------------
Summary: In trace log mode, the client does not print callId and the server does not print receiveTime  (was: In Trace log mode, the client does not print callId and the server does not print receiveTime)

> In trace log mode, the client does not print callId and the server does not
> print receiveTime
> ---------------------------------------------------------------------------
>
> Key: HBASE-27268
> URL: https://issues.apache.org/jira/browse/HBASE-27268
> Project: HBase
> Issue Type: Improvement
> Components: tracing
> Affects Versions: 2.5.0, 3.0.0-alpha-3
> Reporter: zhuyaogai
> Assignee: zhuyaogai
> Priority: Minor

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27268) In Trace log mode, the client does not print CallId and the server does not print receiveTime
zhuyaogai created HBASE-27268:
------------------------------

Summary: In Trace log mode, the client does not print CallId and the server does not print receiveTime
Key: HBASE-27268
URL: https://issues.apache.org/jira/browse/HBASE-27268
Project: HBase
Issue Type: Improvement
Components: tracing
Affects Versions: 3.0.0-alpha-3, 2.5.0
Reporter: zhuyaogai
Assignee: zhuyaogai

-- 
This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache-HBase commented on pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
Apache-HBase commented on PR #4666:
URL: https://github.com/apache/hbase/pull/4666#issuecomment-1203392181

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 21s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 16s | master passed |
| +1 :green_heart: | compile | 2m 28s | master passed |
| +1 :green_heart: | shadedjars | 4m 25s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 3m 32s | master passed |
| -0 :warning: | patch | 8m 23s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 11s | the patch passed |
| +1 :green_heart: | compile | 2m 22s | the patch passed |
| +1 :green_heart: | javac | 2m 22s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 5s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 3m 57s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 259m 5s | root in the patch failed. |
| | | 291m 10s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4666 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 760541af161e 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d734acc00e |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/testReport/ |
| Max. process+thread count | 2481 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-client hbase-server . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1203365580 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 47s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 9s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 33s | master passed | | +1 :green_heart: | compile | 1m 45s | master passed | | +1 :green_heart: | shadedjars | 3m 52s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 6s | master passed | | -0 :warning: | patch | 6m 10s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 9s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 24s | the patch passed | | +1 :green_heart: | compile | 1m 43s | the patch passed | | +1 :green_heart: | javac | 1m 43s | the patch passed | | +1 :green_heart: | shadedjars | 3m 49s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 6s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 255m 10s | root in the patch passed. 
| | | | 279m 47s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4673 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a5a32e1558c3 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/testReport/ | | Max. process+thread count | 4748 (vs. ulimit of 3) | | modules | C: hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Assigned] (HBASE-24768) Clear cached service kerberos ticket in case of SASL failures thrown from server side
[ https://issues.apache.org/jira/browse/HBASE-24768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Geoffrey Jacoby reassigned HBASE-24768: --- Assignee: Sandeep Guggilam > Clear cached service kerberos ticket in case of SASL failures thrown from > server side > - > > Key: HBASE-24768 > URL: https://issues.apache.org/jira/browse/HBASE-24768 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Sandeep Guggilam >Priority: Major > Fix For: 1.7.0 > > > We setup a SASL connection using different mechanisms like Digest, Kerberos > from master to RS for various activities like region assignment etc. In case > of SASL connect failures, we try to dispose of the SaslRpcClient and try to > relogin from the keytab on the client side. However the relogin from keytab > method doesn't clear off the service ticket cached in memory unless TGT is > about to expire within a timeframe. > This actually causes an issue where there is a keytab refresh that happens > because of expiry on the RS server and throws a SASL connect error when > Master reaches out to the RS server with the cached service ticket that no > longer works with the new refreshed keytab. We might need to clear off the > service ticket cached as there could be a credential refresh on the RS server > side when handling connect failures -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-24768) Clear cached service kerberos ticket in case of SASL failures thrown from server side
[ https://issues.apache.org/jira/browse/HBASE-24768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Geoffrey Jacoby resolved HBASE-24768. - Fix Version/s: 1.7.0 Resolution: Fixed This JIRA was merged back in October 2020 and seems to have been included in 1.7.0, but wasn't resolved and didn't have a Fix Version. > Clear cached service kerberos ticket in case of SASL failures thrown from > server side > - > > Key: HBASE-24768 > URL: https://issues.apache.org/jira/browse/HBASE-24768 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Priority: Major > Fix For: 1.7.0 > > > We setup a SASL connection using different mechanisms like Digest, Kerberos > from master to RS for various activities like region assignment etc. In case > of SASL connect failures, we try to dispose of the SaslRpcClient and try to > relogin from the keytab on the client side. However the relogin from keytab > method doesn't clear off the service ticket cached in memory unless TGT is > about to expire within a timeframe. > This actually causes an issue where there is a keytab refresh that happens > because of expiry on the RS server and throws a SASL connect error when > Master reaches out to the RS server with the cached service ticket that no > longer works with the new refreshed keytab. We might need to clear off the > service ticket cached as there could be a credential refresh on the RS server > side when handling connect failures
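The behavior described in HBASE-24768 — a SASL connect failure triggers relogin from the keytab, but relogin leaves stale service tickets cached in the login Subject unless the TGT itself is near expiry — can be illustrated with plain JAAS types. This is a hedged sketch, not the actual HBASE-24768 patch: the class and method names (`SaslTicketCache`, `evictServiceTickets`) and the "keep only `krbtgt/` entries" eviction policy are illustrative assumptions.

```java
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosTicket;

public class SaslTicketCache {
  /**
   * Remove cached Kerberos service tickets from the subject, keeping the TGT
   * (whose server principal name starts with "krbtgt/"), so the next
   * connection attempt must fetch fresh service tickets from the KDC.
   * Returns the number of tickets evicted.
   */
  public static int evictServiceTickets(Subject subject) {
    int evicted = 0;
    synchronized (subject) {
      // getPrivateCredentials(Class) returns a new Set, so removing from the
      // live untyped set while iterating the typed copy is safe.
      for (KerberosTicket ticket : subject.getPrivateCredentials(KerberosTicket.class)) {
        if (!ticket.getServer().getName().startsWith("krbtgt/")) {
          subject.getPrivateCredentials().remove(ticket);
          evicted++;
        }
      }
    }
    return evicted;
  }

  public static void main(String[] args) {
    // With no Kerberos login performed, there is nothing to evict.
    Subject empty = new Subject();
    System.out.println(evictServiceTickets(empty)); // prints 0
  }
}
```

Calling something like this from the connect-failure handler, before retrying, would avoid presenting a service ticket that the server's refreshed keytab can no longer validate.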
[GitHub] [hbase] Apache-HBase commented on pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
Apache-HBase commented on PR #4666: URL: https://github.com/apache/hbase/pull/4666#issuecomment-1203253832 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 5s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 17s | master passed | | +1 :green_heart: | compile | 6m 9s | master passed | | +1 :green_heart: | checkstyle | 1m 0s | master passed | | +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 9m 42s | master passed | | -0 :warning: | patch | 7m 44s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 10s | the patch passed | | +1 :green_heart: | compile | 6m 6s | the patch passed | | +1 :green_heart: | javac | 6m 6s | the patch passed | | -0 :warning: | checkstyle | 0m 58s | root: The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 11m 30s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. | | -1 :x: | spotless | 0m 14s | patch has 21 errors when running spotless:check, run spotless:apply to fix. 
| | +1 :green_heart: | spotbugs | 10m 19s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 39s | The patch does not generate ASF License warnings. | | | | 58m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4666 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile xml | | uname | Linux 62e96ec59925 5.4.0-122-generic #138-Ubuntu SMP Wed Jun 22 15:00:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-general-check/output/diff-checkstyle-root.txt | | spotless | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/artifact/yetus-general-check/output/patch-spotless.txt | | Max. process+thread count | 139 (vs. ulimit of 3) | | modules | C: hbase-common hbase-client hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/7/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1203236529 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 25s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 36s | master passed | | +1 :green_heart: | compile | 8m 41s | master passed | | +1 :green_heart: | checkstyle | 1m 6s | master passed | | +1 :green_heart: | spotless | 0m 49s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 12m 12s | master passed | | -0 :warning: | patch | 1m 49s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 57s | the patch passed | | +1 :green_heart: | compile | 8m 33s | the patch passed | | -0 :warning: | javac | 8m 33s | root generated 1 new + 1066 unchanged - 1 fixed = 1067 total (was 1067) | | +1 :green_heart: | checkstyle | 1m 11s | the patch passed | | +1 :green_heart: | shellcheck | 0m 1s | There were no new shellcheck issues. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 14m 27s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. | | +1 :green_heart: | spotless | 0m 53s | patch has no errors when running spotless:check. 
| | +1 :green_heart: | spotbugs | 10m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 21s | The patch does not generate ASF License warnings. | | | | 73m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4673 | | Optional Tests | dupname asflicense spotless shellcheck shelldocs javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4bb3575c1193 5.4.0-122-generic #138-Ubuntu SMP Wed Jun 22 15:00:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/artifact/yetus-general-check/output/diff-compile-javac-root.txt | | Max. process+thread count | 138 (vs. ulimit of 3) | | modules | C: hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/3/console | | versions | git=2.17.1 maven=3.6.3 shellcheck=0.4.6 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
Apache-HBase commented on PR #4666: URL: https://github.com/apache/hbase/pull/4666#issuecomment-1203206389 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 38s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 39s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 7s | master passed | | +1 :green_heart: | compile | 1m 28s | master passed | | +1 :green_heart: | shadedjars | 4m 0s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 15s | master passed | | -0 :warning: | patch | 6m 41s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 8s | the patch passed | | +1 :green_heart: | compile | 1m 29s | the patch passed | | +1 :green_heart: | javac | 1m 29s | the patch passed | | +1 :green_heart: | shadedjars | 4m 0s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 1m 20s | root generated 3 new + 81 unchanged - 3 fixed = 84 total (was 84) | ||| _ Other Tests _ | | -1 :x: | unit | 328m 13s | root in the patch failed. 
| | | | 351m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4666 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a6559522ad3f 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javadoc | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-root.txt | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/testReport/ | | Max. process+thread count | 2461 (vs. ulimit of 3) | | modules | C: hbase-common hbase-client hbase-server hbase-it . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
Apache-HBase commented on PR #4675: URL: https://github.com/apache/hbase/pull/4675#issuecomment-1203185347 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 3s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 29s | master passed | | +1 :green_heart: | compile | 0m 49s | master passed | | +1 :green_heart: | shadedjars | 3m 44s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 38s | the patch passed | | +1 :green_heart: | compile | 0m 50s | the patch passed | | +1 :green_heart: | javac | 0m 50s | the patch passed | | +1 :green_heart: | shadedjars | 3m 44s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 204m 28s | hbase-server in the patch passed. | | | | 221m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4675 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 7a7fd5073ed1 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/testReport/ | | Max. 
process+thread count | 2672 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
Apache-HBase commented on PR #4675: URL: https://github.com/apache/hbase/pull/4675#issuecomment-1203183715 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 46s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 20s | master passed | | +1 :green_heart: | compile | 0m 34s | master passed | | +1 :green_heart: | shadedjars | 3m 59s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 11s | the patch passed | | +1 :green_heart: | compile | 0m 35s | the patch passed | | +1 :green_heart: | javac | 0m 35s | the patch passed | | +1 :green_heart: | shadedjars | 4m 1s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 21s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 202m 37s | hbase-server in the patch passed. | | | | 219m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4675 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f0e617a00992 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/testReport/ | | Max. 
process+thread count | 2644 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1203173992 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 13s | master passed | | +1 :green_heart: | compile | 1m 26s | master passed | | +1 :green_heart: | shadedjars | 3m 53s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 39s | master passed | | -0 :warning: | patch | 5m 44s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 13s | the patch passed | | +1 :green_heart: | compile | 1m 27s | the patch passed | | +1 :green_heart: | javac | 1m 27s | the patch passed | | +1 :green_heart: | shadedjars | 3m 56s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 38s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 386m 5s | root in the patch passed. 
| | | | 408m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4673 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a35f3bf8873c 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/testReport/ | | Max. process+thread count | 4853 (vs. ulimit of 3) | | modules | C: hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
Apache-HBase commented on PR #4666: URL: https://github.com/apache/hbase/pull/4666#issuecomment-1203117440 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 3s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 35s | master passed | | +1 :green_heart: | compile | 1m 52s | master passed | | +1 :green_heart: | shadedjars | 3m 41s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 3m 4s | master passed | | -0 :warning: | patch | 7m 22s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 35s | the patch passed | | +1 :green_heart: | compile | 1m 54s | the patch passed | | +1 :green_heart: | javac | 1m 54s | the patch passed | | +1 :green_heart: | shadedjars | 3m 41s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 3m 6s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 226m 36s | root in the patch failed. 
| | | | 253m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4666 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 36bba4dd5bb3 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/testReport/ | | Max. process+thread count | 2817 (vs. ulimit of 3) | | modules | C: hbase-common hbase-client hbase-server hbase-it . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4666/6/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935923001 ## hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/tls/X509Util.java: ## @@ -0,0 +1,394 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935921511 ## hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/tls/X509Util.java: ## @@ -0,0 +1,394 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.io.crypto.tls; + +import java.io.BufferedInputStream; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.nio.file.Files; +import java.security.GeneralSecurityException; +import java.security.KeyStore; +import java.security.Security; +import java.security.cert.PKIXBuilderParameters; +import java.security.cert.X509CertSelector; +import java.util.Arrays; +import java.util.Objects; +import java.util.concurrent.atomic.AtomicReference; +import javax.net.ssl.CertPathTrustManagerParameters; +import javax.net.ssl.KeyManager; +import javax.net.ssl.KeyManagerFactory; +import javax.net.ssl.SSLException; +import javax.net.ssl.TrustManager; +import javax.net.ssl.TrustManagerFactory; +import javax.net.ssl.X509ExtendedTrustManager; +import javax.net.ssl.X509KeyManager; +import javax.net.ssl.X509TrustManager; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.exceptions.X509Exception; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hbase.thirdparty.io.netty.handler.ssl.SslContext; +import org.apache.hbase.thirdparty.io.netty.handler.ssl.SslContextBuilder; + +/** + * Utility code for X509 handling Default cipher suites: Performance testing done by Facebook + * engineers shows that on Intel x86_64 machines, Java9 performs better with GCM and Java8 performs + * better with CBC, so these seem like reasonable defaults. + * + * This file has been copied from the Apache ZooKeeper project. 
+ * @see https://github.com/apache/zookeeper/blob/c74658d398cdc1d207aa296cb6e20de00faec03e/zookeeper-server/src/main/java/org/apache/zookeeper/common/X509Util.java;>Base + * revision + */ +@InterfaceAudience.Private +public class X509Util { + + private static final Logger LOG = LoggerFactory.getLogger(X509Util.class); + + // Config + static final String CONFIG_PREFIX = "hbase.rpc.tls."; + public static final String TLS_CONFIG_PROTOCOL = CONFIG_PREFIX + "protocol"; + static final String TLS_CONFIG_KEYSTORE_LOCATION = CONFIG_PREFIX + "keystore.location"; + static final String TLS_CONFIG_KEYSTORE_TYPE = CONFIG_PREFIX + "keystore.type"; + static final String TLS_CONFIG_KEYSTORE_PASSWORD = CONFIG_PREFIX + "keystore.password"; + static final String TLS_CONFIG_TRUSTSTORE_LOCATION = CONFIG_PREFIX + "truststore.location"; + static final String TLS_CONFIG_TRUSTSTORE_TYPE = CONFIG_PREFIX + "truststore.type"; + static final String TLS_CONFIG_TRUSTSTORE_PASSWORD = CONFIG_PREFIX + "truststore.password"; + public static final String TLS_CONFIG_CLR = CONFIG_PREFIX + "clr"; + public static final String TLS_CONFIG_OCSP = CONFIG_PREFIX + "ocsp"; + private static final String TLS_ENABLED_PROTOCOLS = CONFIG_PREFIX + "enabledProtocols"; + private static final String TLS_CIPHER_SUITES = CONFIG_PREFIX + "ciphersuites"; + + public static final String HBASE_CLIENT_NETTY_TLS_ENABLED = "hbase.client.netty.tls.enabled"; + public static final String HBASE_SERVER_NETTY_TLS_ENABLED = "hbase.server.netty.tls.enabled"; + + public static final String HBASE_SERVER_NETTY_TLS_SUPPORTPLAINTEXT = +"hbase.server.netty.tls.supportplaintext"; + + public static final String HBASE_CLIENT_NETTY_TLS_HANDSHAKETIMEOUT = +"hbase.client.netty.tls.handshaketimeout"; + public static final int DEFAULT_HANDSHAKE_DETECTION_TIMEOUT_MILLIS = 5000; + + public static final String DEFAULT_PROTOCOL = "TLSv1.2"; + + private static String[] getGCMCiphers() { +return new String[] { "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", + 
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", + "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" }; + } + + private static String[] getCBCCiphers() { +return new String[] { "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", + "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935918622 ## hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/tls/X509Util.java: ## @@ -0,0 +1,346 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.io.crypto.tls; + +import java.io.BufferedInputStream; +import java.io.File; +import java.io.IOException; +import java.io.InputStream; +import java.nio.file.Files; +import java.security.GeneralSecurityException; +import java.security.KeyStore; +import java.security.Security; +import java.security.cert.PKIXBuilderParameters; +import java.security.cert.X509CertSelector; +import java.util.Arrays; +import java.util.Objects; +import javax.net.ssl.CertPathTrustManagerParameters; +import javax.net.ssl.KeyManager; +import javax.net.ssl.KeyManagerFactory; +import javax.net.ssl.SSLException; +import javax.net.ssl.TrustManager; +import javax.net.ssl.TrustManagerFactory; +import javax.net.ssl.X509ExtendedTrustManager; +import javax.net.ssl.X509KeyManager; +import javax.net.ssl.X509TrustManager; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.exceptions.KeyManagerException; +import org.apache.hadoop.hbase.exceptions.SSLContextException; +import org.apache.hadoop.hbase.exceptions.TrustManagerException; +import org.apache.hadoop.hbase.exceptions.X509Exception; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hbase.thirdparty.io.netty.handler.ssl.SslContext; +import org.apache.hbase.thirdparty.io.netty.handler.ssl.SslContextBuilder; + +/** + * Utility code for X509 handling Default cipher suites: Performance testing done by Facebook + * engineers shows that on Intel x86_64 machines, Java9 performs better with GCM and Java8 performs + * better with CBC, so these seem like reasonable defaults. + * + * This file has been copied from the Apache ZooKeeper project. 
+ * @see https://github.com/apache/zookeeper/blob/c74658d398cdc1d207aa296cb6e20de00faec03e/zookeeper-server/src/main/java/org/apache/zookeeper/common/X509Util.java;>Base + * revision + */ +@InterfaceAudience.Private +public final class X509Util { + + private static final Logger LOG = LoggerFactory.getLogger(X509Util.class); + + // Config + static final String CONFIG_PREFIX = "hbase.rpc.tls."; + public static final String TLS_CONFIG_PROTOCOL = CONFIG_PREFIX + "protocol"; + static final String TLS_CONFIG_KEYSTORE_LOCATION = CONFIG_PREFIX + "keystore.location"; + static final String TLS_CONFIG_KEYSTORE_TYPE = CONFIG_PREFIX + "keystore.type"; + static final String TLS_CONFIG_KEYSTORE_PASSWORD = CONFIG_PREFIX + "keystore.password"; + static final String TLS_CONFIG_TRUSTSTORE_LOCATION = CONFIG_PREFIX + "truststore.location"; + static final String TLS_CONFIG_TRUSTSTORE_TYPE = CONFIG_PREFIX + "truststore.type"; + static final String TLS_CONFIG_TRUSTSTORE_PASSWORD = CONFIG_PREFIX + "truststore.password"; + public static final String TLS_CONFIG_CLR = CONFIG_PREFIX + "clr"; + public static final String TLS_CONFIG_OCSP = CONFIG_PREFIX + "ocsp"; + private static final String TLS_ENABLED_PROTOCOLS = CONFIG_PREFIX + "enabledProtocols"; + private static final String TLS_CIPHER_SUITES = CONFIG_PREFIX + "ciphersuites"; + + public static final String HBASE_CLIENT_NETTY_TLS_ENABLED = "hbase.client.netty.tls.enabled"; + public static final String HBASE_SERVER_NETTY_TLS_ENABLED = "hbase.server.netty.tls.enabled"; + + public static final String HBASE_SERVER_NETTY_TLS_SUPPORTPLAINTEXT = +"hbase.server.netty.tls.supportplaintext"; + + public static final String HBASE_CLIENT_NETTY_TLS_HANDSHAKETIMEOUT = +"hbase.client.netty.tls.handshaketimeout"; + public static final int DEFAULT_HANDSHAKE_DETECTION_TIMEOUT_MILLIS = 5000; + + public static final String DEFAULT_PROTOCOL = "TLSv1.2"; + + private static String[] getGCMCiphers() { +return new String[] { 
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", + "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", + "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" }; + } + + private static
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935910287 ## hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java: ## @@ -214,4 +227,29 @@ public int getNumOpenConnections() { // allChannels also contains the server channel, so exclude that from the count. return channelsCount > 0 ? channelsCount - 1 : channelsCount; } + + private void initSSL(ChannelPipeline p, boolean supportPlaintext) +throws X509Exception, SSLException { +SslContext nettySslContext = getSslContext(); + +if (supportPlaintext) { + p.addLast("ssl", new OptionalSslHandler(nettySslContext)); + LOG.debug("Dual mode SSL handler added for channel: {}", p.channel()); +} else { + p.addLast("ssl", nettySslContext.newHandler(p.channel().alloc())); + LOG.debug("SSL handler added for channel: {}", p.channel()); +} + } + + private SslContext getSslContext() throws X509Exception, SSLException { +SslContext result = sslContextForServer.get(); +if (result == null) { + result = X509Util.createSslContextForServer(conf); + if (!sslContextForServer.compareAndSet(null, result)) { +// lost the race, another thread already set the value +result = sslContextForServer.get(); + } +} +return result; Review Comment: This is done. I removed the entire lazy-init logic from the server. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
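The lazy-init snippet under review is the standard lock-free initialize-once pattern over `AtomicReference`: every thread may build a candidate, but only the `compareAndSet` winner's instance is published, and losers discard theirs and adopt the winner's. A generic sketch with a plain string standing in for the `SslContext` (`LazyRef` is a hypothetical helper written for this illustration, not an HBase class):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class LazyRef<T> {
    private final AtomicReference<T> ref = new AtomicReference<>();

    /** Returns the cached value; at most one candidate is ever published,
     *  and CAS losers throw their instance away and return the winner's. */
    T get(Supplier<T> factory) {
        T result = ref.get();
        if (result == null) {
            result = factory.get();
            if (!ref.compareAndSet(null, result)) {
                // lost the race: another thread already set the value
                result = ref.get();
            }
        }
        return result;
    }

    public static void main(String[] args) {
        LazyRef<String> ctx = new LazyRef<>();
        System.out.println(ctx.get(() -> "sslContext#1")); // first call publishes
        System.out.println(ctx.get(() -> "sslContext#2")); // still the first instance
    }
}
```

The trade-off the reviewer raises is that the factory may run more than once under contention; that is harmless here only because a redundant `SslContext` is cheap to discard, which is also why dropping the lazy init entirely on the server (eager creation at startup) is the simpler fix.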
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935909549 ## hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java: ## @@ -106,6 +116,9 @@ protected void initChannel(Channel ch) throws Exception { ChannelPipeline pipeline = ch.pipeline(); FixedLengthFrameDecoder preambleDecoder = new FixedLengthFrameDecoder(6); preambleDecoder.setSingleDecode(true); + if (conf.getBoolean(HBASE_SERVER_NETTY_TLS_ENABLED, false)) { +initSSL(pipeline, conf.getBoolean(HBASE_SERVER_NETTY_TLS_SUPPORTPLAINTEXT, true)); Review Comment: I was trying to do my best and create a new test case for it. Please check if you have a better approach in your mind.
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935908317 ## hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestNettyRpcConnection.java: ## @@ -57,7 +58,7 @@ public class TestNettyRpcConnection { private static NettyRpcConnection CONN; @BeforeClass - public static void setUp() throws IOException { + public static void setUp() throws IOException, SSLContextException { Review Comment: I reverted the entire file to master.
[GitHub] [hbase] anmolnar commented on a diff in pull request #4666: HBASE-26666 Add native TLS encryption support to RPC server/client
anmolnar commented on code in PR #4666: URL: https://github.com/apache/hbase/pull/4666#discussion_r935908527 ## hbase-it/src/test/java/org/apache/hadoop/hbase/ipc/IntegrationTestRpcClient.java: ## @@ -66,7 +67,8 @@ public IntegrationTestRpcClient() { conf = HBaseConfiguration.create(); } - protected AbstractRpcClient createRpcClient(Configuration conf, boolean isSyncClient) { + protected AbstractRpcClient createRpcClient(Configuration conf, boolean isSyncClient) Review Comment: This one too. ## hbase-it/src/test/java/org/apache/hadoop/hbase/ipc/IntegrationTestRpcClient.java: ## @@ -290,7 +292,8 @@ void rethrowException() throws Throwable { * is closing. */ @Test - public void testRpcWithWriteThread() throws IOException, InterruptedException { + public void testRpcWithWriteThread() +throws IOException, InterruptedException, SSLContextException { Review Comment: Same here.
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1203056798

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 36s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 41s | master passed |
| +1 :green_heart: | compile | 1m 43s | master passed |
| +1 :green_heart: | shadedjars | 3m 51s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 8s | master passed |
| -0 :warning: | patch | 6m 10s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 26s | the patch passed |
| +1 :green_heart: | compile | 1m 44s | the patch passed |
| +1 :green_heart: | javac | 1m 44s | the patch passed |
| +1 :green_heart: | shadedjars | 3m 50s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 9s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 254m 16s | root in the patch passed. |
| | | | 278m 12s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4673 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 5a7a49b739be 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d734acc00e |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/testReport/ |
| Max. process+thread count | 4876 (vs. ulimit of 3) |
| modules | C: hbase-server . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
Apache-HBase commented on PR #4675: URL: https://github.com/apache/hbase/pull/4675#issuecomment-1203011721

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 14s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 22s | master passed |
| +1 :green_heart: | compile | 2m 13s | master passed |
| +1 :green_heart: | checkstyle | 0m 31s | master passed |
| +1 :green_heart: | spotless | 0m 44s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 22s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 20s | the patch passed |
| +1 :green_heart: | compile | 2m 12s | the patch passed |
| +1 :green_heart: | javac | 2m 12s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 31s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 31s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. |
| +1 :green_heart: | spotless | 0m 44s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 22s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. |
| | | | 32m 9s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4675 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 44bb01a13bd5 5.4.0-122-generic #138-Ubuntu SMP Wed Jun 22 15:00:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d734acc00e |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 64 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/2/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
Apache-HBase commented on PR #4675: URL: https://github.com/apache/hbase/pull/4675#issuecomment-1202876021

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 6m 1s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 49s | master passed |
| +1 :green_heart: | compile | 0m 47s | master passed |
| +1 :green_heart: | shadedjars | 3m 48s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 28s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 38s | the patch passed |
| +1 :green_heart: | compile | 0m 46s | the patch passed |
| +1 :green_heart: | javac | 0m 46s | the patch passed |
| +1 :green_heart: | shadedjars | 3m 45s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 26s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 206m 18s | hbase-server in the patch failed. |
| | | | 229m 28s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4675 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 0e15a50216e9 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / e8c14ee308 |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/testReport/ |
| Max. process+thread count | 2688 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
Apache-HBase commented on PR #4675: URL: https://github.com/apache/hbase/pull/4675#issuecomment-1202866509

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 3m 36s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 30s | master passed |
| +1 :green_heart: | compile | 0m 35s | master passed |
| +1 :green_heart: | shadedjars | 4m 0s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 14s | the patch passed |
| +1 :green_heart: | compile | 0m 35s | the patch passed |
| +1 :green_heart: | javac | 0m 35s | the patch passed |
| +1 :green_heart: | shadedjars | 3m 59s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 22s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 205m 7s | hbase-server in the patch failed. |
| | | | 225m 35s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/4675 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux beda996b50e9 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / e8c14ee308 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/testReport/ |
| Max. process+thread count | 2680 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative
[ https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhengsicheng updated HBASE-27267: - Description: RegionServer log message: {code:java} 2022-07-19 12:13:29,324 WARN [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40 2022-07-19 12:13:29,324 WARN [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good position in file, from 1099261 to 1078224 java.io.EOFException: EOF while reading 660 WAL KVs; started reading at 1078317 and read up to 1099261 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, 
offset=0, length=40 at org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612) at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346) at org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717) at org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81) at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68) at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387) ... 7 more {code} Debugging the WAL file, we found the delete operation that caused it: Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at write timestamp=Sat Jul 16 00:50:01 CST 2022 2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, type=Delete {code} was: RegionServer log message: {code:java} 2022-07-19 12:13:29,324 WARN [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40 2022-07-19 12:13:29,324 WARN [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good position in file, from 1099261 to 1078224 java.io.EOFException: EOF while reading 660 WAL KVs; started reading at 1078317 and read up to 1099261 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97) at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
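The root cause in the trace above is KeyValueUtil.checkKeyValueBytes rejecting a cell whose decoded timestamp field is a negative long. A minimal, hypothetical Java sketch of that style of guard (the method name and error message here mimic the log output; this is not the actual HBase implementation):

```java
public class TimestampCheck {
    // Mirrors the sanity check seen in the log: a cell timestamp decoded
    // from WAL bytes must be a non-negative long, otherwise the edit is
    // treated as malformed.
    static long checkTimestamp(long ts) {
        if (ts < 0) {
            throw new IllegalArgumentException("Timestamp cannot be negative, ts=" + ts);
        }
        return ts;
    }

    public static void main(String[] args) {
        // A normal epoch-millis timestamp passes through unchanged.
        System.out.println(checkTimestamp(1658206201000L));
        // The corrupt value from the log is rejected.
        try {
            checkTimestamp(-4323977095312258207L);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

When the real reader hits this exception mid-stream, it logs the "malformed edit" warning and seeks back to the last good position, which is why the EOFException and the seek from 1099261 to 1078224 appear together above.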
[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth
[ https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17574265#comment-17574265 ] Peter Somogyi commented on HBASE-23330: --- This change adds getClusterId to the Connection interface, which is annotated with IA.Public. This breaks downstream implementations, including hbase-connectors builds with 2.4.14-SNAPSHOT. Would it be possible to add a default implementation? > Expose cluster ID for clients using it for delegation token based auth > > > Key: HBASE-23330 > URL: https://issues.apache.org/jira/browse/HBASE-23330 > Project: HBase > Issue Type: Sub-task > Components: Client, master >Affects Versions: 3.0.0-alpha-1 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.14 > > > As Gary Helming noted in HBASE-18095, some clients use Cluster ID for > delegation based auth. > {quote} > There is an additional complication here for token-based authentication. When > a delegation token is used for SASL authentication, the client uses the > cluster ID obtained from Zookeeper to select the token identifier to use. So > there would also need to be some Zookeeper-less, unauthenticated way to > obtain the cluster ID as well. > {quote} > Once we move ZK out of the picture, cluster ID sits behind an end point that > needs to be authenticated. Figure out a way to expose this to clients. > One suggestion in the comments (from Andrew) > {quote} > Cluster ID lookup is most easily accomplished with a new servlet on the > HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It > can't share the RPC server endpoint when SASL is enabled because any > interaction with that endpoint must be authenticated. This is ugly but > alternatives seem worse. One alternative would be a second RPC port for APIs > that do not / cannot require prior authentication. 
> {quote} > There could be implications if SPNEGO is enabled on these http(s) end points. > We need to make sure that it is handled. -- This message was sent by Atlassian Jira (v8.20.10#820010)
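The compatibility fix Peter suggests — giving the new method a default body so existing implementors of the IA.Public interface keep compiling — can be sketched as follows. This is a hypothetical, heavily simplified interface for illustration, not the real org.apache.hadoop.hbase.client.Connection:

```java
// Adding an abstract method to a public interface breaks every existing
// implementor; a default method keeps them source- and binary-compatible.
interface Connection {
    void close(); // stand-in for the interface's pre-existing methods

    // Default body so downstream implementations written against the old
    // interface still build; null here signals "cluster ID not available".
    default String getClusterId() {
        return null;
    }
}

public class ClusterIdDefault {
    // An "old" implementor that predates getClusterId and does not override it.
    static class LegacyConnection implements Connection {
        @Override
        public void close() {}
    }

    public static void main(String[] args) {
        Connection conn = new LegacyConnection();
        // Compiles and runs unchanged; the default implementation answers.
        System.out.println(conn.getClusterId() == null);
    }
}
```

The trade-off is that callers must tolerate the fallback value (here, null) from implementations that never override the method.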
[jira] [Updated] (HBASE-27267) Delete causes timestamp to be negative
[ https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhengsicheng updated HBASE-27267: - Description: RegionServer log message: 2022-07-19 12:13:29,324 WARN [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40 2022-07-19 12:13:29,324 WARN [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last good position in file, from 1099261 to 1078224 java.io.EOFException: EOF while reading 660 WAL KVs; started reading at 1078317 and read up to 1099261 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40 
at org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612) at org.apache.hadoop.hbase.KeyValue.(KeyValue.java:346) at org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717) at org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81) at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68) at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387) ... 7 more > Delete causes timestamp to be negative > -- > > Key: HBASE-27267 > URL: https://issues.apache.org/jira/browse/HBASE-27267 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.4 >Reporter: zhengsicheng >Priority: Major > > RegionServer log message: > 2022-07-19 12:13:29,324 WARN > [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] > hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, > KeyValueBytesHex=\x00\x00\x00, offset=0, length=40 > 2022-07-19 12:13:29,324 WARN > [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB] > wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last > good position in file, from 1099261 to 1078224 > java.io.EOFException: EOF while reading 660 WAL KVs; started reading at > 1078317 and read up to 1099261 > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85) > at > 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) > Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, > ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00,
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1202693543 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 14s | master passed | | +1 :green_heart: | compile | 6m 17s | master passed | | +1 :green_heart: | checkstyle | 1m 2s | master passed | | +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 8m 43s | master passed | | -0 :warning: | patch | 7m 40s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 13s | the patch passed | | +1 :green_heart: | compile | 6m 19s | the patch passed | | -0 :warning: | javac | 6m 19s | root generated 1 new + 1066 unchanged - 1 fixed = 1067 total (was 1067) | | -0 :warning: | checkstyle | 1m 0s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shellcheck | 0m 2s | There were no new shellcheck issues. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 29s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. | | +1 :green_heart: | spotless | 0m 42s | patch has no errors when running spotless:check. 
| | +1 :green_heart: | spotbugs | 9m 8s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 20s | The patch does not generate ASF License warnings. | | | | 57m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4673 | | Optional Tests | dupname asflicense spotless shellcheck shelldocs javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux fc5dca459956 5.4.0-122-generic #138-Ubuntu SMP Wed Jun 22 15:00:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d734acc00e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/artifact/yetus-general-check/output/diff-compile-javac-root.txt | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/artifact/yetus-general-check/output/diff-checkstyle-root.txt | | Max. process+thread count | 139 (vs. ulimit of 3) | | modules | C: hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/2/console | | versions | git=2.17.1 maven=3.6.3 shellcheck=0.4.6 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-27072) TestCoprocessorEndpointTracing.traceAsyncTableEndpoint is flaky
[ https://issues.apache.org/jira/browse/HBASE-27072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17574245#comment-17574245 ] Nick Dimiduk commented on HBASE-27072: -- So there have been a couple of tracing-related flaky tests raised for me since the main patches landed. They're baffling because, where I have test logs, I can see that the missing span is present in the collection. It's as if the collection is too large for hamcrest to consider all of... > TestCoprocessorEndpointTracing.traceAsyncTableEndpoint is flaky > --- > > Key: HBASE-27072 > URL: https://issues.apache.org/jira/browse/HBASE-27072 > Project: HBase > Issue Type: Bug >Affects Versions: 2.5.0 > Environment: Java version: 1.8.0_322 > OS name: "linux", version: "5.10.0-13-arm64", arch: "aarch64", family: "unix" >Reporter: Andrew Kyle Purtell >Priority: Minor > Fix For: 2.5.1, 3.0.0-alpha-4 > > > org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpointTracing.traceAsyncTableEndpoint > > Run 1: TestCoprocessorEndpointTracing.traceAsyncTableEndpoint:210 > Expected: a collection containing (SpanKind with a name that a string > containing "COPROC_EXEC" and SpanKind with a parentSpanId that > "d2a7bb4f52ee70a2" and SpanData with StatusCode that is ) > but: SpanKind with a name that a string containing "COPROC_EXEC" name > was "ZKConnectionRegistry.getMetaRegionLocations", SpanKind with a name that > a string containing "COPROC_EXEC" name was > "AsyncRegionLocator.getRegionLocation", SpanKind with a name that a string > containing "COPROC_EXEC" name was > "hbase.pb.RegionServerStatusService/RegionServerReport", SpanKind with a name > that a string containing "COPROC_EXEC" name was "RpcServer.process", SpanKind > with a name that a string containing "COPROC_EXEC" name was > "hbase.pb.RegionServerStatusService/RegionServerReport", SpanKind with a name > that a string containing "COPROC_EXEC" name was "Region.getScanner", SpanKind > with a name that a string containing "COPROC_EXEC" name
was > "hbase.pb.ClientService/Scan", SpanKind with a name that a string containing > "COPROC_EXEC" name was "RegionScanner.close", SpanKind with a name that a > string containing "COPROC_EXEC" name was "RpcServer.process", SpanKind with a > name that a string containing "COPROC_EXEC" name was > "AsyncRegionLocator.getRegionLocation", SpanKind with a name that a string > containing "COPROC_EXEC" name was "hbase.pb.ClientService/ExecService", > SpanKind with a name that a string containing "COPROC_EXEC" name was > "RpcServer.process", SpanKind with a name that a string containing > "COPROC_EXEC" name was "AsyncRegionLocator.getRegionLocation", SpanKind with > a name that a string containing "COPROC_EXEC" name was "SCAN hbase:meta", > SpanKind with a name that a string containing "COPROC_EXEC" name was > "hbase.pb.ClientService/Scan", SpanKind with a name that a string containing > "COPROC_EXEC" name was "traceAsyncTableEndpoint" > Run 2: PASS -- This message was sent by Atlassian Jira (v8.20.10#820010)
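The failure mode above boils down to a hamcrest `hasItem(allOf(...))` check over a span collection: the assertion should pass iff some element satisfies every sub-matcher. A plain-Java stand-in for that semantics (the `SpanStub` shape and the example conditions are illustrative, not HBase or hamcrest code) shows that presence in the collection is enough, regardless of how many other spans surround it:

```java
import java.util.List;
import java.util.function.Predicate;

// Plain-Java stand-in for the hamcrest hasItem(allOf(...)) pattern the test
// uses: the check succeeds iff SOME span satisfies EVERY sub-condition.
public class SpanMatchSketch {

  public static final class SpanStub {
    private final String name;
    private final String parentSpanId;

    public SpanStub(String name, String parentSpanId) {
      this.name = name;
      this.parentSpanId = parentSpanId;
    }

    public String name() { return name; }
    public String parentSpanId() { return parentSpanId; }
  }

  // hasItem(allOf(...)) semantics: at least one element matches all conditions.
  public static boolean hasItemMatchingAll(List<SpanStub> spans,
                                           List<Predicate<SpanStub>> conditions) {
    return spans.stream().anyMatch(s -> conditions.stream().allMatch(c -> c.test(s)));
  }

  public static void main(String[] args) {
    List<SpanStub> spans = List.of(
        new SpanStub("RpcServer.process", "aaaa"),
        new SpanStub("COPROC_EXEC hbase.pb.ClientService/ExecService", "d2a7bb4f52ee70a2"));
    List<Predicate<SpanStub>> conditions = List.of(
        s -> s.name().contains("COPROC_EXEC"),
        s -> s.parentSpanId().equals("d2a7bb4f52ee70a2"));
    // The target span is present, so the search succeeds however large the
    // collection is.
    System.out.println(hasItemMatchingAll(spans, conditions)); // prints true
  }
}
```

If the equivalent hamcrest assertion flakes while the span is demonstrably in the collection, the suspicion in the comment above points at the mismatch reporting, not at this containment logic itself.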
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1202563499 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 21s | master passed | | +1 :green_heart: | compile | 1m 26s | master passed | | +1 :green_heart: | shadedjars | 3m 56s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 40s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 9s | the patch passed | | +1 :green_heart: | compile | 1m 27s | the patch passed | | +1 :green_heart: | javac | 1m 27s | the patch passed | | -1 :x: | shadedjars | 3m 1s | patch has 10 errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 1m 18s | root generated 4 new + 80 unchanged - 4 fixed = 84 total (was 84) | ||| _ Other Tests _ | | -1 :x: | unit | 322m 14s | root in the patch failed. 
| | | | 343m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4673 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux bc15bfe80fb7 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e8c14ee308 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | shadedjars | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt | | javadoc | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-root.txt | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/testReport/ | | Max. process+thread count | 2380 (vs. ulimit of 3) | | modules | C: hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
Apache-HBase commented on PR #4675: URL: https://github.com/apache/hbase/pull/4675#issuecomment-1202428600 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 5s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 28s | master passed | | +1 :green_heart: | compile | 2m 15s | master passed | | +1 :green_heart: | checkstyle | 0m 33s | master passed | | +1 :green_heart: | spotless | 0m 44s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 19s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 9s | the patch passed | | +1 :green_heart: | compile | 2m 11s | the patch passed | | +1 :green_heart: | javac | 2m 11s | the patch passed | | -0 :warning: | checkstyle | 0m 31s | hbase-server: The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 32s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. | | -1 :x: | spotless | 0m 36s | patch has 65 errors when running spotless:check, run spotless:apply to fix. | | +1 :green_heart: | spotbugs | 1m 24s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 8s | The patch does not generate ASF License warnings. 
| | | | 32m 7s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4675 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux 08dc61e24ed0 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e8c14ee308 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | spotless | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/artifact/yetus-general-check/output/patch-spotless.txt | | Max. process+thread count | 60 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4675/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache-HBase commented on PR #4673: URL: https://github.com/apache/hbase/pull/4673#issuecomment-1202418017 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 42s | master passed | | +1 :green_heart: | compile | 1m 44s | master passed | | +1 :green_heart: | shadedjars | 3m 53s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 14s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 9s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 27s | the patch passed | | +1 :green_heart: | compile | 1m 42s | the patch passed | | +1 :green_heart: | javac | 1m 42s | the patch passed | | -1 :x: | shadedjars | 3m 7s | patch has 10 errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 251m 40s | root in the patch passed. 
| | | | 278m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4673 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 10b656d0f39f 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e8c14ee308 | | Default Java | AdoptOpenJDK-11.0.10+9 | | shadedjars | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/testReport/ | | Max. process+thread count | 4845 (vs. ulimit of 3) | | modules | C: hbase-server . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4673/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-27153) Improvements to read-path tracing
[ https://issues.apache.org/jira/browse/HBASE-27153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-27153: - Resolution: Fixed Status: Resolved (was: Patch Available) > Improvements to read-path tracing > - > > Key: HBASE-27153 > URL: https://issues.apache.org/jira/browse/HBASE-27153 > Project: HBase > Issue Type: Improvement > Components: Operability, regionserver >Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.6.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-4 > > > Take another pass through tracing of the read path, make adjustments > accordingly. One of the major concerns raised previously is that we create a > span for every block access. Start by simplifying this to trace events and > see what else comes up. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] ndimiduk merged pull request #4574: Backport "HBASE-27153 Improvements to read-path tracing" to branch-2.5
ndimiduk merged PR #4574: URL: https://github.com/apache/hbase/pull/4574 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk merged pull request #4573: Backport "HBASE-27153 Improvements to read-path tracing" to branch-2
ndimiduk merged PR #4573: URL: https://github.com/apache/hbase/pull/4573 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk merged pull request #4572: HBASE-27153 Improvements to read-path tracing
ndimiduk merged PR #4572: URL: https://github.com/apache/hbase/pull/4572 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-27264) Add options to consider compressed size when delimiting blocks during hfile writes
[ https://issues.apache.org/jira/browse/HBASE-27264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27264: - Status: Patch Available (was: In Progress) > Add options to consider compressed size when delimiting blocks during hfile > writes > -- > > Key: HBASE-27264 > URL: https://issues.apache.org/jira/browse/HBASE-27264 > Project: HBase > Issue Type: New Feature >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > > In HBASE-27232 we modified the "hbase.writer.unified.encoded.blocksize.ratio" > property so that the encoded size can be considered when > delimiting hfile blocks during writes. > Here we propose two additional properties, "hbase.block.size.limit.compressed" > and "hbase.block.size.max.compressed", that would allow the compressed > size (if compression is in use) to be considered when delimiting blocks during hfile > writing. When compression is enabled, certain datasets can have very high > compression efficiency, so the default 64KB block size and 10GB max file > size can lead to hfiles with a very large number of blocks. > In this proposal, "hbase.block.size.limit.compressed" is a boolean flag that > switches to the compressed size for delimiting blocks, and > "hbase.block.size.max.compressed" is an int limit, in bytes, on the > compressed block size, in order to avoid very large uncompressed blocks > (defaulting to 320KB). > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27264) Add options to consider compressed size when delimiting blocks during hfile writes
[ https://issues.apache.org/jira/browse/HBASE-27264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17574194#comment-17574194 ] Wellington Chevreuil commented on HBASE-27264: -- [~andrea-rockt] FYI > Add options to consider compressed size when delimiting blocks during hfile > writes > -- > > Key: HBASE-27264 > URL: https://issues.apache.org/jira/browse/HBASE-27264 > Project: HBase > Issue Type: New Feature >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > > In HBASE-27232 we modified the "hbase.writer.unified.encoded.blocksize.ratio" > property so that the encoded size can be considered when > delimiting hfile blocks during writes. > Here we propose two additional properties, "hbase.block.size.limit.compressed" > and "hbase.block.size.max.compressed", that would allow the compressed > size (if compression is in use) to be considered when delimiting blocks during hfile > writing. When compression is enabled, certain datasets can have very high > compression efficiency, so the default 64KB block size and 10GB max file > size can lead to hfiles with a very large number of blocks. > In this proposal, "hbase.block.size.limit.compressed" is a boolean flag that > switches to the compressed size for delimiting blocks, and > "hbase.block.size.max.compressed" is an int limit, in bytes, on the > compressed block size, in order to avoid very large uncompressed blocks > (defaulting to 320KB). > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] wchevreuil opened a new pull request, #4675: HBASE-27264 Add options to consider compressed size when delimiting blocks during hfile writes
wchevreuil opened a new pull request, #4675: URL: https://github.com/apache/hbase/pull/4675 Here we propose two additional properties, "hbase.block.size.limit.compressed" and "hbase.block.size.max.compressed", that would allow the compressed size (if compression is in use) to be considered when delimiting blocks during hfile writing. When compression is enabled, certain datasets can have very high compression efficiency, so the default 64KB block size and 10GB max file size can lead to hfiles with a very large number of blocks. In this proposal, "hbase.block.size.limit.compressed" is a boolean flag that switches to the compressed size for delimiting blocks, and "hbase.block.size.max.compressed" is an int limit, in bytes, on the compressed block size, in order to avoid very large uncompressed blocks (defaulting to 320KB). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
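A minimal sketch of the decision the two proposed properties would drive, under one plausible reading of the proposal: the property names come from the text above, but the method shape and the exact way the 320KB cap is applied are assumptions for illustration, not the actual HFile writer logic.

```java
// Sketch of the block-delimiting decision described in HBASE-27264.
public final class BlockBoundarySketch {

  /**
   * @param limitCompressed   "hbase.block.size.limit.compressed" (boolean flag)
   * @param uncompressedBytes bytes written to the current block, uncompressed
   * @param compressedBytes   the same payload after compression
   * @param blockSize         configured block size (64KB by default)
   * @param maxCompressed     "hbase.block.size.max.compressed" cap (320KB default)
   */
  public static boolean shouldCloseBlock(boolean limitCompressed,
                                         long uncompressedBytes,
                                         long compressedBytes,
                                         long blockSize,
                                         long maxCompressed) {
    if (!limitCompressed) {
      // Default behaviour: delimit on the uncompressed size only.
      return uncompressedBytes >= blockSize;
    }
    // Compressed mode: let highly compressible blocks grow past blockSize,
    // but cap the uncompressed side so blocks cannot grow without bound.
    return compressedBytes >= blockSize || uncompressedBytes >= maxCompressed;
  }

  public static void main(String[] args) {
    long kb = 1024;
    // 100KB of raw data compressing to 10KB: keep the block open.
    System.out.println(shouldCloseBlock(true, 100 * kb, 10 * kb, 64 * kb, 320 * kb)); // prints false
    // Once the uncompressed side reaches the 320KB cap: close the block.
    System.out.println(shouldCloseBlock(true, 320 * kb, 32 * kb, 64 * kb, 320 * kb)); // prints true
  }
}
```

The point of the cap is visible in the sketch: with a 10:1 compression ratio, delimiting purely on compressed size would otherwise produce 640KB uncompressed blocks.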
[jira] [Updated] (HBASE-27147) [HBCK2] extraRegionsInMeta does not work If RegionInfo is null
[ https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27147: - Affects Version/s: hbase-operator-tools-1.2.0 > [HBCK2] extraRegionsInMeta does not work If RegionInfo is null > -- > > Key: HBASE-27147 > URL: https://issues.apache.org/jira/browse/HBASE-27147 > Project: HBase > Issue Type: Bug > Components: hbck2 >Affects Versions: hbase-operator-tools-1.2.0 >Reporter: Karthik Palanisamy >Assignee: Wellington Chevreuil >Priority: Major > > extraRegionsInMeta will not clean/fix meta if the info:regioninfo column is > missing. > > Somehow, the customer has the following empty row in meta as a stale entry. > 'I1xx,16332508x.f53609cc1ae366b43205dxxx', 'info:state', > 16223 > > And no corresponding table "I1xx" exists. > > We used extraRegionsInMeta, but it didn't clean the row. Also, we created the same table > again and used extraRegionsInMeta after removing the HDFS data, but the stale row > was never cleaned. It looks like extraRegionsInMeta works only when "info:regioninfo" > is present. > > We need to handle the scenario for other columns, i.e. info:state, info:server, > etc. -- This message was sent by Atlassian Jira (v8.20.10#820010)
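The core of the reported gap can be sketched independently of HBCK2's internals: whether a hbase:meta row is "extra" should be decidable from the row key (which encodes the table name), so rows missing their info:regioninfo cell still get flagged. Everything below — the names, the simplified row-key parsing, the `Map` stand-in for cells — is a hypothetical illustration, not HBCK2 code.

```java
import java.util.Map;
import java.util.Set;

// Illustrates deciding "extra region in meta" from the row key alone, so a
// stale row carrying only info:state is still detected.
public final class ExtraMetaRowSketch {

  // hbase:meta row keys look like "<table>,<start key>,<region id>...".
  public static String tableFromMetaRowKey(String rowKey) {
    int comma = rowKey.indexOf(',');
    return comma < 0 ? rowKey : rowKey.substring(0, comma);
  }

  // Note: 'cells' is deliberately not consulted. The reported bug is exactly
  // that the check used to skip rows whose info:regioninfo cell was absent.
  public static boolean isExtraRow(String rowKey, Map<String, String> cells,
                                   Set<String> existingTables) {
    return !existingTables.contains(tableFromMetaRowKey(rowKey));
  }

  public static void main(String[] args) {
    // A stale row with only info:state, for a table that no longer exists.
    Map<String, String> staleCells = Map.of("info:state", "OPEN");
    System.out.println(isExtraRow("I1xx,startkey,123.f53609cc.", staleCells,
        Set.of("some_other_table"))); // prints true
  }
}
```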
[jira] [Resolved] (HBASE-27147) [HBCK2] extraRegionsInMeta does not work If RegionInfo is null
[ https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil resolved HBASE-27147. -- Resolution: Fixed > [HBCK2] extraRegionsInMeta does not work If RegionInfo is null > -- > > Key: HBASE-27147 > URL: https://issues.apache.org/jira/browse/HBASE-27147 > Project: HBase > Issue Type: Bug > Components: hbck2 >Reporter: Karthik Palanisamy >Assignee: Wellington Chevreuil >Priority: Major > > extraRegionsInMeta will not clean/fix meta if the info:regioninfo column is > missing. > > Somehow, the customer has the following empty row in meta as a stale entry. > 'I1xx,16332508x.f53609cc1ae366b43205dxxx', 'info:state', > 16223 > > And no corresponding table "I1xx" exists. > > We used extraRegionsInMeta, but it didn't clean the row. Also, we created the same table > again and used extraRegionsInMeta after removing the HDFS data, but the stale row > was never cleaned. It looks like extraRegionsInMeta works only when "info:regioninfo" > is present. > > We need to handle the scenario for other columns, i.e. info:state, info:server, > etc. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27147) [HBCK2] extraRegionsInMeta does not work If RegionInfo is null
[ https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27147: - Fix Version/s: hbase-operator-tools-1.3.0 > [HBCK2] extraRegionsInMeta does not work If RegionInfo is null > -- > > Key: HBASE-27147 > URL: https://issues.apache.org/jira/browse/HBASE-27147 > Project: HBase > Issue Type: Bug > Components: hbck2 >Affects Versions: hbase-operator-tools-1.2.0 >Reporter: Karthik Palanisamy >Assignee: Wellington Chevreuil >Priority: Major > Fix For: hbase-operator-tools-1.3.0 > > > extraRegionsInMeta will not clean/fix meta if the info:regioninfo column is > missing. > > Somehow, the customer has the following empty row in meta as a stale entry. > 'I1xx,16332508x.f53609cc1ae366b43205dxxx', 'info:state', > 16223 > > And no corresponding table "I1xx" exists. > > We used extraRegionsInMeta, but it didn't clean the row. Also, we created the same table > again and used extraRegionsInMeta after removing the HDFS data, but the stale row > was never cleaned. It looks like extraRegionsInMeta works only when "info:regioninfo" > is present. > > We need to handle the scenario for other columns, i.e. info:state, info:server, > etc. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase-operator-tools] wchevreuil merged pull request #110: HBASE-27147 [HBCK2] extraRegionsInMeta does not work If RegionInfo is…
wchevreuil merged PR #110: URL: https://github.com/apache/hbase-operator-tools/pull/110 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-27267) Delete causes timestamp to be negative
zhengsicheng created HBASE-27267: Summary: Delete causes timestamp to be negative Key: HBASE-27267 URL: https://issues.apache.org/jira/browse/HBASE-27267 Project: HBase Issue Type: Bug Affects Versions: 2.3.4 Reporter: zhengsicheng -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache-HBase commented on pull request #4670: HBASE-27237 Address is shoule be case insensitive
Apache-HBase commented on PR #4670: URL: https://github.com/apache/hbase/pull/4670#issuecomment-1202289274 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 9s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 48s | master passed | | +1 :green_heart: | compile | 1m 26s | master passed | | +1 :green_heart: | shadedjars | 3m 43s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 0s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 46s | the patch passed | | +1 :green_heart: | compile | 1m 25s | the patch passed | | +1 :green_heart: | javac | 1m 25s | the patch passed | | +1 :green_heart: | shadedjars | 3m 44s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 46s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 1m 16s | hbase-client in the patch passed. | | -1 :x: | unit | 207m 30s | hbase-server in the patch failed. 
| | | | 236m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4670 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 42e266757b60 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e8c14ee308 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/testReport/ | | Max. process+thread count | 2417 (vs. ulimit of 3) | | modules | C: hbase-common hbase-client hbase-server U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #4674: HBASE-27261 Generate CHANGES.txt for 1.7.2
Apache-HBase commented on PR #4674: URL: https://github.com/apache/hbase/pull/4674#issuecomment-1202287057 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 4m 25s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-1 Compile Tests _ | ||| _ Patch Compile Tests _ | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 1m 52s | The patch does not generate ASF License warnings. | | | | 9m 17s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4674/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4674 | | Optional Tests | dupname asflicense | | uname | Linux d4fdf79dc29c 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 11 12:03:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-4674/out/precommit/personality/provided.sh | | git revision | branch-1 / 41e9e52e74 | | Max. process+thread count | 31 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4674/1/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #4670: HBASE-27237 Address is shoule be case insensitive
Apache-HBase commented on PR #4670: URL: https://github.com/apache/hbase/pull/4670#issuecomment-1202279012 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 22s | master passed | | +1 :green_heart: | compile | 1m 3s | master passed | | +1 :green_heart: | shadedjars | 4m 2s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 48s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 10s | the patch passed | | +1 :green_heart: | compile | 1m 3s | the patch passed | | +1 :green_heart: | javac | 1m 3s | the patch passed | | +1 :green_heart: | shadedjars | 4m 2s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 47s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 1m 7s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 202m 23s | hbase-server in the patch passed. 
| | | | 226m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4670 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b5db7329e217 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e8c14ee308 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/testReport/ | | Max. process+thread count | 2631 (vs. ulimit of 3) | | modules | C: hbase-common hbase-client hbase-server U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a diff in pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
Apache9 commented on code in PR #4673: URL: https://github.com/apache/hbase/pull/4673#discussion_r935355005 ## bin/hbase: ## @@ -83,6 +83,7 @@ show_usage() { if [ "${in_omnibus_tarball}" = "true" ]; then echo " wal Write-ahead-log analyzer" echo " hfileStore file analyzer" +echo " sft Store file tracker viewer" Review Comment: What about the hbase.cmd for windows? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-27266) Consider bloom filter alternates
Duo Zhang created HBASE-27266: - Summary: Consider bloom filter alternates Key: HBASE-27266 URL: https://issues.apache.org/jira/browse/HBASE-27266 Project: HBase Issue Type: New Feature Components: Performance Reporter: Duo Zhang The XOR filter https://arxiv.org/pdf/1912.08258.pdf The Ribbon filter https://arxiv.org/pdf/2103.02515.pdf We could see if we can integrate these new data structures in HBase to replace the usage of Bloom filters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
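For context on what the proposed alternates would replace, a classic Bloom filter answers approximate membership queries with k hash probes into a bit array: false positives are possible, false negatives are not. XOR and Ribbon filters answer the same queries in less space. A minimal illustrative sketch (not HBase's actual implementation; the class name and hash mixing below are made up for the example):

```java
import java.util.BitSet;

// Minimal Bloom filter sketch, for illustration only.
public class TinyBloom {
  private final BitSet bits;
  private final int m; // number of bits
  private final int k; // number of hash functions

  public TinyBloom(int m, int k) {
    this.bits = new BitSet(m);
    this.m = m;
    this.k = k;
  }

  // Simple seeded FNV-style mix; real implementations use stronger hashes.
  private int hash(byte[] key, int seed) {
    int h = seed * 0x9E3779B9;
    for (byte b : key) {
      h = (h ^ b) * 0x01000193;
    }
    return Math.floorMod(h, m);
  }

  public void add(byte[] key) {
    for (int i = 0; i < k; i++) {
      bits.set(hash(key, i + 1));
    }
  }

  // May return a false positive, never a false negative.
  public boolean mightContain(byte[] key) {
    for (int i = 0; i < k; i++) {
      if (!bits.get(hash(key, i + 1))) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    TinyBloom f = new TinyBloom(1 << 16, 3);
    f.add("row-1".getBytes());
    assert f.mightContain("row-1".getBytes());
  }
}
```

XOR and Ribbon filters trade this incremental insert model for a one-shot build over the full key set, which fits HFile-style immutable store files well.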
[GitHub] [hbase] wchevreuil commented on a diff in pull request #4673: HBASE-27265 : Tool to read StoreFileTrackerFile
wchevreuil commented on code in PR #4673: URL: https://github.com/apache/hbase/pull/4673#discussion_r935341877 ## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/StoreFileListFilePrettyPrinter.java: ## @@ -0,0 +1,192 @@ +package org.apache.hadoop.hbase.regionserver.storefiletracker; + +import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; +import org.apache.hadoop.fs.*; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.NamespaceDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; +import org.apache.hadoop.hbase.regionserver.StoreContext; +import org.apache.hadoop.hbase.shaded.protobuf.generated.StoreFileTrackerProtos.StoreFileList; +import org.apache.hadoop.hbase.util.CommonFSUtils; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; +import org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException; +import org.apache.hbase.thirdparty.org.apache.commons.cli.Options; +import org.apache.hbase.thirdparty.org.apache.commons.cli.*; +import org.apache.yetus.audience.InterfaceAudience; +import org.apache.yetus.audience.InterfaceStability; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import java.io.IOException; +import java.io.PrintStream; +import java.util.zip.CRC32; + +@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS) +@InterfaceStability.Evolving +public class StoreFileListFilePrettyPrinter extends Configured implements Tool { + private static final Logger LOG = LoggerFactory.getLogger(StoreFileListFilePrettyPrinter.class); + + private Options options = new Options(); + + private final String fileOption = "f"; + private final String 
columnFamilyOption = "cf"; + private final String regionOption = "r"; + private final String tableNameOption = "t"; + + private String namespace; + private String regionName; + private String columnFamily; + private String tableName; + private Path path; + private PrintStream err = System.err; + private PrintStream out = System.out; + + public StoreFileListFilePrettyPrinter() { +super(); +init(); + } + + public StoreFileListFilePrettyPrinter(Configuration conf) { +super(conf); +init(); + } + + private void init() { +OptionGroup files = new OptionGroup(); +options.addOption(new Option(tableNameOption, "table", true, + "Table to scan. Pass table name; e.g. test_table")); +options.addOption(new Option(columnFamilyOption, "columnfamily", true, + "column family to scan. Pass column family name; e.g. f")); +files.addOption(new Option(regionOption, "region", true, "Region to scan. Pass region name; e.g. '3d58e9067bf23e378e68c071f3dd39eb'")); +files.addOption(new Option(fileOption, "file", true, "File to scan. Pass full-path; e.g. 
hdfs://a:9000/hbase/hbase:meta/12/34")); +options.addOptionGroup(files); + } + + public boolean parseOptions(String args[]) throws ParseException, IOException { +HelpFormatter formatter = new HelpFormatter(); +if (args.length == 0) { + formatter.printHelp("sft [--file= | --table= --region= [--columnFamily=] ]", options, true); + return false; +} + +CommandLineParser parser = new PosixParser(); +CommandLine cmd = parser.parse(options, args); + +if (cmd.hasOption(fileOption)) { + path = new Path(cmd.getOptionValue(fileOption)); +} else { + regionName = cmd.getOptionValue(regionOption); + if(StringUtils.isEmpty(regionName)) { +err.println("Region name is not specified."); +formatter.printHelp("sft [--file= | --table= --region= [--columnFamily=] ]", options, true); +System.exit(-1); + } + columnFamily = cmd.getOptionValue(columnFamilyOption); + if(StringUtils.isEmpty(columnFamily)) { +err.println("Column family is not specified."); +formatter.printHelp("sft [--file= | --table= --region= [--columnFamily=] ]", options, true); +System.exit(-1); + } + String tableNameWithNS = cmd.getOptionValue(tableNameOption); + if(StringUtils.isEmpty(tableNameWithNS)) { +err.println("Table name is not specified."); +formatter.printHelp("sft [--file= | --table= --region= [--columnFamily=] ]", options, true); +System.exit(-1); + } + TableName tn = TableName.valueOf(tableNameWithNS); + namespace = tn.getNamespaceAsString(); + tableName = tn.getNameAsString(); +} +return true; + } + + public int run(String[] args) { +if(getConf() == null) { + throw new RuntimeException("A Configuration instance must be provided."); +} +try {
[jira] [Updated] (HBASE-27261) Generate CHANGES.md and RELEASENOTES.md for 1.7.2
[ https://issues.apache.org/jira/browse/HBASE-27261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-27261: -- Fix Version/s: 1.7.2 > Generate CHANGES.md and RELEASENOTES.md for 1.7.2 > - > > Key: HBASE-27261 > URL: https://issues.apache.org/jira/browse/HBASE-27261 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 1.7.2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27261) Generate CHANGES.txt for 1.7.2
[ https://issues.apache.org/jira/browse/HBASE-27261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-27261: -- Summary: Generate CHANGES.txt for 1.7.2 (was: Generate CHANGES.md and RELEASENOTES.md for 1.7.2) > Generate CHANGES.txt for 1.7.2 > -- > > Key: HBASE-27261 > URL: https://issues.apache.org/jira/browse/HBASE-27261 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 1.7.2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-25217) [Metrics] Add metrics for Call in IPC response queue
[ https://issues.apache.org/jira/browse/HBASE-25217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25217: -- Fix Version/s: (was: 1.8.0) > [Metrics] Add metrics for Call in IPC response queue > > > Key: HBASE-25217 > URL: https://issues.apache.org/jira/browse/HBASE-25217 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Minor > Fix For: 3.0.0-alpha-4 > > > Add metrics for response queue. > E.g., number of Call/RpcResponse in queue, size of Call/RpcResponse in queue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-26518) Concurrent assign and SCP may cause regionserver abort
[ https://issues.apache.org/jira/browse/HBASE-26518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26518: -- Resolution: Won't Fix Status: Resolved (was: Patch Available) > Concurrent assign and SCP may cause regionserver abort > -- > > Key: HBASE-26518 > URL: https://issues.apache.org/jira/browse/HBASE-26518 > Project: HBase > Issue Type: Bug > Components: master, Region Assignment >Affects Versions: 1.7.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Attachments: master-error-logs.png > > > When concurrently assigning a region, the RS may abort when reporting the region state > to the master hits a region state transition error: > > {code:java} > 2021-11-26 04:40:54,689 INFO > [PostOpenDeployTasks:ee1df69da9c7fb7698d2972f81d40b65] > regionserver.HRegionServer: Post open deploy tasks for > growthosudx:growthos-udx_dmx_supply_waimaisku_application_basic,cc,1631020952330.ee1df69da9c7fb7698d2972f81d40b65. > 2021-11-26 04:40:54,689 INFO > [PostOpenDeployTasks:ee1df69da9c7fb7698d2972f81d40b65] > regionserver.HRegionServer: Failed to transition {ENCODED => > ee1df69da9c7fb7698d2972f81d40b65, NAME => > 'growthosudx:growthos-udx_dmx_supply_waimaisku_application_basic,cc,1631020952330.ee1df69da9c7fb7698d2972f81d40b65.', > STARTKEY => 'cc', ENDKEY => 'cd'} to OPENED: > ee1df69da9c7fb7698d2972f81d40b65 is not pending open on > zf-data-hbase250.mt,16020,1637872783437 > 2021-11-26 04:40:54,689 FATAL > [PostOpenDeployTasks:ee1df69da9c7fb7698d2972f81d40b65] > regionserver.HRegionServer: ABORTING region server > zf-data-hbase250.mt,16020,1637872783437: Exception running > postOpenDeployTasks; region=ee1df69da9c7fb7698d2972f81d40b65 > java.io.IOException: Failed to report opened region to master: > growthosudx:growthos-udx_dmx_supply_waimaisku_application_basic,cc,1631020952330.ee1df69da9c7fb7698d2972f81d40b65.
> at > org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:2387) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:327) > {code} > In the master log, we can see that the region is being assigned by both the > bulk assigner and the SCP. > !master-error-logs.png|width=600,height=140! > These steps reproduce the problem: > # thread 1 calls Assignment#assign to assign region A, and calls > RegionStates#updateRegionsState, stopping before entering the synchronized block; > # thread 2 calls RegionStates#serverOffline and gets the lock; it sees that > region A is on the dead server being processed and adds region A to the > regionsToOffline list (region A is not in the RIT cache yet); > # thread 2 prepares to execute RegionStates#regionOffline, and releases the > lock; > # thread 1 gets the lock, adds region A to the RIT cache, and also updates > the region state via RegionStates#putRegionState, setting it from > OFFLINE to PENDING_OPEN, then releases the lock; > # thread 2 executes RegionStates#regionOffline, acquires the lock and sets > the region state from PENDING_OPEN to OFFLINE; > # the RS that thread 1 assigned region A to reports that region A should be > transitioned to OPENED, but the master finds that region A's last state is OFFLINE; an error > occurs and the RS aborts itself. > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
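The six steps above boil down to a check-then-act race: thread 2 snapshots its victim list under one lock acquisition but acts on the snapshot under a later one, so thread 1's RIT registration lands in between. A deterministic single-threaded replay of the interleaving (simplified, made-up names; not the real RegionStates API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sequential replay of the reported interleaving. Each lock acquisition in
// the real code is a separate critical section, so the two logical threads
// can interleave exactly in this order.
public class RegionStateRace {
  enum State { OFFLINE, PENDING_OPEN }

  static State replay() {
    Map<String, State> regionStates = new HashMap<>();
    Map<String, State> ritCache = new HashMap<>(); // regions in transition
    regionStates.put("A", State.OFFLINE);

    // Step 2: thread 2 (serverOffline) snapshots regions to offline while
    // region A is not yet in the RIT cache.
    List<String> regionsToOffline = new ArrayList<>();
    if (!ritCache.containsKey("A")) {
      regionsToOffline.add("A");
    }
    // Step 3: thread 2 releases the lock before acting on the snapshot.

    // Step 4: thread 1 (assign) now takes the lock, adds A to the RIT
    // cache and moves it to PENDING_OPEN.
    ritCache.put("A", State.PENDING_OPEN);
    regionStates.put("A", State.PENDING_OPEN);

    // Step 5: thread 2 acts on its stale snapshot and clobbers the state.
    for (String region : regionsToOffline) {
      regionStates.put(region, State.OFFLINE);
    }

    // Step 6: the RS reports OPENED, but the master sees OFFLINE -> abort.
    return regionStates.get("A");
  }

  public static void main(String[] args) {
    assert replay() == State.OFFLINE;
  }
}
```

Holding one lock across both the snapshot and the offlining (or re-checking the RIT cache before offlining each region) would close the window.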
[jira] [Resolved] (HBASE-25024) [Flake Test][branch-1] TestClientOperationInterrupt#testInterrupt50Percent
[ https://issues.apache.org/jira/browse/HBASE-25024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-25024. --- Resolution: Won't Fix > [Flake Test][branch-1] TestClientOperationInterrupt#testInterrupt50Percent > -- > > Key: HBASE-25024 > URL: https://issues.apache.org/jira/browse/HBASE-25024 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Priority: Minor > > Sometimes a thread can finish before interrupt() gets called. > {code} > // ... > t.start(); > } > int expectedNoExNum = nbThread / 2; > for (int i = 0; i < nbThread / 2; i++) { > if (threads.get(i).getState().equals(Thread.State.TERMINATED)) { > expectedNoExNum--; > } > threads.get(i).interrupt(); > } > {code} > So this test can fail intermittently. -- This message was sent by Atlassian Jira (v8.20.10#820010)
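The underlying Thread semantics behind the flake: interrupting a thread that has already terminated is a no-op, so a worker that finishes before the loop reaches it never throws InterruptedException, and the test's expected count drifts. A small deterministic demonstration:

```java
// Interrupting a TERMINATED thread has no effect (per the Thread javadoc,
// an interrupt of a non-alive thread is ignored and isInterrupted()
// subsequently returns false).
public class InterruptAfterExit {
  public static void main(String[] args) throws InterruptedException {
    Thread t = new Thread(() -> { /* body returns immediately */ });
    t.start();
    t.join(); // make the race deterministic: the thread is dead here

    t.interrupt(); // no-op: no flag set, no InterruptedException anywhere

    assert t.getState() == Thread.State.TERMINATED;
    assert !t.isInterrupted(); // the interrupt was ignored
  }
}
```

This is why the test tries to compensate by decrementing expectedNoExNum for already-TERMINATED threads, but a thread can still terminate between the getState() check and the interrupt() call.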
[jira] [Resolved] (HBASE-14964) Backport HBASE-14901 (duplicated code to create/manage encryption keys) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-14964. --- Resolution: Won't Fix > Backport HBASE-14901 (duplicated code to create/manage encryption keys) to > branch-1 > --- > > Key: HBASE-14964 > URL: https://issues.apache.org/jira/browse/HBASE-14964 > Project: HBase > Issue Type: Improvement > Components: encryption >Reporter: Nate Edel >Priority: Minor > Attachments: HBASE-14964-branch-1.1.patch, > HBASE-14964.1.branch-1.patch, HBASE-14964.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > There is duplicated code from MobUtils.createEncryptionContext in HStore, and > there is a subset of that code in HFileReaderImpl. > Refactored key selection > Moved both to EncryptionUtil.java > Can't figure out how to write a unit test for this, but there's no new code > just refactoring. > A lot of the Mob stuff hasn't been backported, so this is a very small patch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-25024) [Flake Test][branch-1] TestClientOperationInterrupt#testInterrupt50Percent
[ https://issues.apache.org/jira/browse/HBASE-25024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25024: -- Fix Version/s: (was: 1.8.0) > [Flake Test][branch-1] TestClientOperationInterrupt#testInterrupt50Percent > -- > > Key: HBASE-25024 > URL: https://issues.apache.org/jira/browse/HBASE-25024 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Priority: Minor > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-14964) Backport HBASE-14901 (duplicated code to create/manage encryption keys) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-14964: -- Fix Version/s: (was: 1.8.0) > Backport HBASE-14901 (duplicated code to create/manage encryption keys) to > branch-1 > --- > > Key: HBASE-14964 > URL: https://issues.apache.org/jira/browse/HBASE-14964 > Project: HBase > Issue Type: Improvement > Components: encryption >Reporter: Nate Edel >Priority: Minor > Attachments: HBASE-14964-branch-1.1.patch, > HBASE-14964.1.branch-1.patch, HBASE-14964.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache9 closed pull request #3900: HBASE-26518 Concurrent assign and SCP may cause regionserver …
Apache9 closed pull request #3900: HBASE-26518 Concurrent assign and SCP may cause regionserver … URL: https://github.com/apache/hbase/pull/3900 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #3900: HBASE-26518 Concurrent assign and SCP may cause regionserver …
Apache9 commented on PR #3900: URL: https://github.com/apache/hbase/pull/3900#issuecomment-1202224173 The same with HBASE-25783, we plan to EOL branch-1 so I tend to not include the changes which touch the core part. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-26518) Concurrent assign and SCP may cause regionserver abort
[ https://issues.apache.org/jira/browse/HBASE-26518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26518: -- Fix Version/s: (was: 1.8.0) > Concurrent assign and SCP may cause regionserver abort > -- > > Key: HBASE-26518 > URL: https://issues.apache.org/jira/browse/HBASE-26518 > Project: HBase > Issue Type: Bug > Components: master, Region Assignment >Affects Versions: 1.7.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Attachments: master-error-logs.png > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-25783) Close PENDING_OPEN regions make regions FAILED_CLOSE
[ https://issues.apache.org/jira/browse/HBASE-25783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25783: -- Fix Version/s: (was: 1.8.0) Resolution: Won't Fix Status: Resolved (was: Patch Available) > Close PENDING_OPEN regions make regions FAILED_CLOSE > > > Key: HBASE-25783 > URL: https://issues.apache.org/jira/browse/HBASE-25783 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.7.1 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > > There is a case on our production cluster, > # RS1 crash; > # bulk assign some RS1 regions to RS2, including region A; > # RS2 crash; > # bulk assign RS2 regions to RS3, including region A; > # the first assign of A to RS2 failed and reassign A; > # close A on RS3, but A is PENDING_OPEN on RS3; > # A FAILED_CLOSE; > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache9 closed pull request #4175: HBASE-25783 Close PENDING_OPEN regions make regions FAILED_CLOSE
Apache9 closed pull request #4175: HBASE-25783 Close PENDING_OPEN regions make regions FAILED_CLOSE URL: https://github.com/apache/hbase/pull/4175 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #4175: HBASE-25783 Close PENDING_OPEN regions make regions FAILED_CLOSE
Apache9 commented on PR #4175: URL: https://github.com/apache/hbase/pull/4175#issuecomment-120190 Branch-1 will be EOL soon so I tend to not include these sensitive changes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-25664) Adding replication peer should handle the undeleted queue exception
[ https://issues.apache.org/jira/browse/HBASE-25664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-25664. --- Assignee: (was: Sandeep Pal) Resolution: Won't Fix > Adding replication peer should handle the undeleted queue exception > --- > > Key: HBASE-25664 > URL: https://issues.apache.org/jira/browse/HBASE-25664 > Project: HBase > Issue Type: Improvement >Reporter: Sandeep Pal >Priority: Major > > Currently, if we try to add a peer and a replication queue already exists > for that peer, the replication peer cannot be created. > Instead, we should delete the queue and proceed with creating the > replication peer. Any queue without a corresponding replication peer is > useless anyway, so we shouldn't wait for the cleaner to come and clean it before > creating the peer. > > {code:java} > org.apache.hadoop.hbase.replication.ReplicationException: undeleted queue for > peerId: xyz_peerid, replicator: hostname.fakeaddress.com,60020,1607576586258, > queueId: xyz_peerid > java.lang.RuntimeException: > org.apache.hadoop.hbase.replication.ReplicationException: undeleted queue for > peerId: xyz_peerid, replicator: hostname.fakeaddress.com,60020,1607576586258, > queueId: xyz_peerid > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-23948) Backport HBASE-23146 (Support CheckAndMutate with multiple conditions) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-23948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23948: -- Fix Version/s: (was: 1.8.0) > Backport HBASE-23146 (Support CheckAndMutate with multiple conditions) to > branch-1 > -- > > Key: HBASE-23948 > URL: https://issues.apache.org/jira/browse/HBASE-23948 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Priority: Major > > Backport HBASE-23146 (Support CheckAndMutate with multiple conditions) to > branch-1, including updates to REST (HBASE-23924) and Thrift (HBASE-23925). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-23768) Backport HBASE-17115 (HMaster/HRegion Info Server does not honour admin.acl) to 1.x
[ https://issues.apache.org/jira/browse/HBASE-23768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-23768. --- Resolution: Won't Fix > Backport HBASE-17115 (HMaster/HRegion Info Server does not honour admin.acl) > to 1.x > --- > > Key: HBASE-23768 > URL: https://issues.apache.org/jira/browse/HBASE-23768 > Project: HBase > Issue Type: Sub-task >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-25664) Adding replication peer should handle the undeleted queue exception
[ https://issues.apache.org/jira/browse/HBASE-25664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25664: -- Fix Version/s: (was: 1.8.0) > Adding replication peer should handle the undeleted queue exception > --- > > Key: HBASE-25664 > URL: https://issues.apache.org/jira/browse/HBASE-25664 > Project: HBase > Issue Type: Improvement >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Major > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-23948) Backport HBASE-23146 (Support CheckAndMutate with multiple conditions) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-23948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-23948. --- Resolution: Won't Fix > Backport HBASE-23146 (Support CheckAndMutate with multiple conditions) to > branch-1 > -- > > Key: HBASE-23948 > URL: https://issues.apache.org/jira/browse/HBASE-23948 > Project: HBase > Issue Type: Improvement >Reporter: Andrew Kyle Purtell >Priority: Major > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-25510) Optimize TableName.valueOf from O(n) to O(1). We can get benefits when the number of tables in the cluster is greater than dozens
[ https://issues.apache.org/jira/browse/HBASE-25510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25510: -- Fix Version/s: (was: 1.8.0) > Optimize TableName.valueOf from O(n) to O(1). We can get benefits when the > number of tables in the cluster is greater than dozens > -- > > Key: HBASE-25510 > URL: https://issues.apache.org/jira/browse/HBASE-25510 > Project: HBase > Issue Type: Improvement > Components: master, Replication >Affects Versions: 1.2.12, 1.4.13, 2.4.1 >Reporter: zhuobin zheng >Assignee: zhuobin zheng >Priority: Major > Fix For: 2.6.0, 3.0.0-alpha-4 > > Attachments: TestTableNameJMH.java, optimiz_benchmark, > optimiz_benchmark_fix, origin_benchmark, stucks-profile-info > > > Now, TableName.valueOf tries to find the TableName object in the cache > linearly (code shown below), so it is too slow when we have thousands of > tables on the cluster. > {code:java} > // code placeholder > for (TableName tn : tableCache) { > if (Bytes.equals(tn.getQualifier(), qns) && Bytes.equals(tn.getNamespace(), > bns)) { > return tn; > } > }{code} > I try to store the object in a hash table instead, so it can be looked up > more quickly, with code like this: > {code:java} > // code placeholder > TableName oldTable = tableCache.get(nameAsStr);{code} > > In our cluster, which has tens of thousands of tables (most of them KYLIN > tables), we found that in the following two cases the TableName.valueOf > method severely restricts performance. > > Common premise: tens of thousands of tables in the cluster, > so TableName.valueOf is slow (because we need to traverse > the whole cache linearly). > > Case 1. Replication > premise 1: one of the tables is written with high QPS, small values, and non-batch > requests, causing too many WAL entries > premise 2: deserializing a WAL entry includes calling the TableName.valueOf method > Result: replication gets stuck and a lot of WAL files pile up. > > Case 2. Active master startup > NamespaceStateManager init has to initialize all RegionInfo objects, and RegionInfo init > calls TableName.valueOf, which costs noticeable time if TableName.valueOf is > slow. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
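The O(n) vs O(1) contrast described in the issue can be sketched as follows (illustrative only; the real cache stores interned TableName objects and has to preserve the existing identity-caching semantics, and the names below are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Contrast of the two lookup strategies described in HBASE-25510,
// using Strings in place of TableName for brevity.
public class TableNameCache {
  private final List<String> linearCache = new ArrayList<>();
  private final ConcurrentHashMap<String, String> hashCache = new ConcurrentHashMap<>();

  // O(n): scan every cached entry, as in the original valueOf.
  String linearLookup(String name) {
    for (String cached : linearCache) {
      if (cached.equals(name)) {
        return cached;
      }
    }
    linearCache.add(name);
    return name;
  }

  // O(1) expected: a single hash probe, as in the proposed fix.
  // computeIfAbsent returns the canonical cached instance.
  String hashLookup(String name) {
    return hashCache.computeIfAbsent(name, n -> n);
  }

  public static void main(String[] args) {
    TableNameCache c = new TableNameCache();
    String a = c.hashLookup(new String("ns:t1"));
    String b = c.hashLookup(new String("ns:t1"));
    assert a == b; // same canonical instance, like the TableName cache
    assert c.linearLookup("ns:t2").equals("ns:t2");
  }
}
```

With tens of thousands of tables, every WAL-entry deserialization during replication pays the linear scan, which is why the hash lookup matters there in particular.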
[jira] [Updated] (HBASE-23768) Backport HBASE-17115 (HMaster/HRegion Info Server does not honour admin.acl) to 1.x
[ https://issues.apache.org/jira/browse/HBASE-23768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23768: -- Fix Version/s: (was: 1.8.0) > Backport HBASE-17115 (HMaster/HRegion Info Server does not honour admin.acl) > to 1.x > --- > > Key: HBASE-23768 > URL: https://issues.apache.org/jira/browse/HBASE-23768 > Project: HBase > Issue Type: Sub-task >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-23240) branch-1 master and regionservers do not start when compiled against Hadoop 3.2.1
[ https://issues.apache.org/jira/browse/HBASE-23240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-23240. --- Resolution: Won't Fix > branch-1 master and regionservers do not start when compiled against Hadoop > 3.2.1 > - > > Key: HBASE-23240 > URL: https://issues.apache.org/jira/browse/HBASE-23240 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Priority: Major > > Exception in thread "main" java.lang.NoSuchMethodError: > com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V > at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357) > at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338) > at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679) > at > org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:339) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:572) > at > org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-23147) Branches-1 shaded artifact for mapreduce integration misses MainClass
[ https://issues.apache.org/jira/browse/HBASE-23147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-23147: - Assignee: Zhe Huang > Branches-1 shaded artifact for mapreduce integration misses MainClass > - > > Key: HBASE-23147 > URL: https://issues.apache.org/jira/browse/HBASE-23147 > Project: HBase > Issue Type: Bug > Components: Client, mapreduce >Affects Versions: 1.5.0, 1.3.6, 1.4.11 >Reporter: Sean Busbey >Assignee: Zhe Huang >Priority: Major > Labels: beginner > > the shaded artifact we intend for folks to use when doing mapreduce stuff in > branches-1 is {{hbase-shaded-server}}, but it fails to define the same > {{MainClass}} for the jar as the {{hbase-server}} artifact. This prevents > commands like this from working: > {code} > $ hadoop jar some/path/to/hbase-shaded-server-1.4.11-SNAPSHOT.jar importtsv > -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 > test:example example/ -libjars $(hbase mapredcp) > Exception in thread "main" java.lang.ClassNotFoundException: importtsv > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at org.apache.hadoop.util.RunJar.run(RunJar.java:232) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} > The {{hbase-shaded-server}} module pom should have the same maven-jar-plugin > config for setting the main class as the {{hbase-server}} module. > This is not an issue in branches-2+ because as a part of moving this stuff > into a {{hbase-mapreduce}} and {{hbase-shaded-mapreduce}} (HBASE-18697) we > corrected this gap on the shaded artifact. 
> Work around by specifying the class manually > {code} > hadoop jar some/path/to/hbase-shaded-server-1.4.11-SNAPSHOT.jar > org.apache.hadoop.hbase.mapreduce.Driver importtsv > -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 > test:example example/ -libjars $(hbase mapredcp) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
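The fix described in HBASE-23147 above is for the {{hbase-shaded-server}} pom to carry the same maven-jar-plugin main-class configuration as the {{hbase-server}} module. A hedged sketch of what that plugin block could look like (the exact plugin coordinates and surrounding pom structure in branch-1 may differ; the main class follows from the workaround command above):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <!-- Same entry point the hbase-server jar declares, so that
             `hadoop jar hbase-shaded-server-*.jar importtsv ...`
             resolves importtsv through the Driver program registry -->
        <mainClass>org.apache.hadoop.hbase.mapreduce.Driver</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}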
[jira] [Updated] (HBASE-23240) branch-1 master and regionservers do not start when compiled against Hadoop 3.2.1
[ https://issues.apache.org/jira/browse/HBASE-23240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23240: -- Fix Version/s: (was: 1.8.0) > branch-1 master and regionservers do not start when compiled against Hadoop > 3.2.1 > - > > Key: HBASE-23240 > URL: https://issues.apache.org/jira/browse/HBASE-23240 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.5.0 >Reporter: Lars Hofhansl >Priority: Major > > Exception in thread "main" java.lang.NoSuchMethodError: > com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V > at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357) > at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338) > at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679) > at > org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:339) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:572) > at > org.apache.hadoop.util.GenericOptionsParser.&lt;init&gt;(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.&lt;init&gt;(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-23147) Branches-1 shaded artifact for mapreduce integration misses MainClass
[ https://issues.apache.org/jira/browse/HBASE-23147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-23147. --- Fix Version/s: 1.7.2 Hadoop Flags: Reviewed Resolution: Fixed Merged to branch-1. Thanks [~Zhe Huang] for contributing! > Branches-1 shaded artifact for mapreduce integration misses MainClass > - > > Key: HBASE-23147 > URL: https://issues.apache.org/jira/browse/HBASE-23147 > Project: HBase > Issue Type: Bug > Components: Client, mapreduce >Affects Versions: 1.5.0, 1.3.6, 1.4.11 >Reporter: Sean Busbey >Assignee: Zhe Huang >Priority: Major > Labels: beginner > Fix For: 1.7.2 > > > the shaded artifact we intend for folks to use when doing mapreduce stuff in > branches-1 is {{hbase-shaded-server}}, but it fails to define the same > {{MainClass}} for the jar as the {{hbase-server}} artifact. This prevents > commands like this from working: > {code} > $ hadoop jar some/path/to/hbase-shaded-server-1.4.11-SNAPSHOT.jar importtsv > -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 > test:example example/ -libjars $(hbase mapredcp) > Exception in thread "main" java.lang.ClassNotFoundException: importtsv > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at org.apache.hadoop.util.RunJar.run(RunJar.java:232) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} > The {{hbase-shaded-server}} module pom should have the same maven-jar-plugin > config for setting the main class as the {{hbase-server}} module. > This is not an issue in branches-2+ because as a part of moving this stuff > into a {{hbase-mapreduce}} and {{hbase-shaded-mapreduce}} (HBASE-18697) we > corrected this gap on the shaded artifact. 
> Work around by specifying the class manually > {code} > hadoop jar some/path/to/hbase-shaded-server-1.4.11-SNAPSHOT.jar > org.apache.hadoop.hbase.mapreduce.Driver importtsv > -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 > test:example example/ -libjars $(hbase mapredcp) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache9 merged pull request #3214: HBASE-23147 Branches-1 shaded artifact for mapreduce integration miss…
Apache9 merged PR #3214: URL: https://github.com/apache/hbase/pull/3214 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-27263) Make RowTooBigException configurable for snapshot reads
[ https://issues.apache.org/jira/browse/HBASE-27263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17574133#comment-17574133 ] Ujjawal Kumar commented on HBASE-27263: --- I now see that the change proposed above is effectively already available. Normally, HRegion controls the RowTooBigException via the config {{hbase.table.max.rowsize}} (default 1 GB) during scans. For live tables we can't change this config, since it would require an RS restart to take effect (and doing so would also defeat the purpose of this config as OOME protection on the RS side). But since a snapshot read creates a new HRegion instance within the MR task, any conf passed to the MR job is also seen by that HRegion instance. So we can simply set this config to a high value for snapshot reads and we won't see this kind of issue anymore. Hence marking this change as not required for now. > Make RowTooBigException configurable for snapshot reads > > > Key: HBASE-27263 > URL: https://issues.apache.org/jira/browse/HBASE-27263 > Project: HBase > Issue Type: Improvement > Components: regionserver, snapshots >Reporter: Ujjawal Kumar >Assignee: Ujjawal Kumar >Priority: Minor > Attachments: Screenshot 2022-08-01 at 3.42.57 PM.png > > > While trying to read snapshot data using TableSnapshotInputFormat in an MR > job, we can skip OOME check (introduced by HBASE-10925 for region server side > OOME protection ) since the allocated memory resources would be from MR tasks > themselves (as compared to RS during live scans) > !Screenshot 2022-08-01 at 3.42.57 PM.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
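The workaround in the comment above — overriding {{hbase.table.max.rowsize}} in the job configuration so the HRegion instance created inside the MR task picks it up — could be applied on the command line roughly as follows (the jar and driver class names here are hypothetical placeholders, not from this issue; the value shown is Long.MAX_VALUE, effectively disabling the check for the job):

{code}
hadoop jar my-snapshot-scan-job.jar com.example.SnapshotScanDriver \
  -Dhbase.table.max.rowsize=9223372036854775807 \
  mySnapshot /tmp/restore-dir
{code}

The same property can equally be set programmatically on the job's Configuration before initializing the TableSnapshotInputFormat-based mapper; either way it only affects the per-task HRegion, not the live region servers.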
[jira] [Resolved] (HBASE-27263) Make RowTooBigException configurable for snapshot reads
[ https://issues.apache.org/jira/browse/HBASE-27263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ujjawal Kumar resolved HBASE-27263. --- Resolution: Workaround > Make RowTooBigException configurable for snapshot reads > > > Key: HBASE-27263 > URL: https://issues.apache.org/jira/browse/HBASE-27263 > Project: HBase > Issue Type: Improvement > Components: regionserver, snapshots >Reporter: Ujjawal Kumar >Assignee: Ujjawal Kumar >Priority: Minor > Attachments: Screenshot 2022-08-01 at 3.42.57 PM.png > > > While trying to read snapshot data using TableSnapshotInputFormat in an MR > job, we can skip OOME check (introduced by HBASE-10925 for region server side > OOME protection ) since the allocated memory resources would be from MR tasks > themselves (as compared to RS during live scans) > !Screenshot 2022-08-01 at 3.42.57 PM.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-22126) TestBlocksRead is flaky
[ https://issues.apache.org/jira/browse/HBASE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22126: -- Fix Version/s: (was: 1.8.0) > TestBlocksRead is flaky > --- > > Key: HBASE-22126 > URL: https://issues.apache.org/jira/browse/HBASE-22126 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 1.5.0 >Reporter: Andrew Kyle Purtell >Assignee: Sandeep Pal >Priority: Major > Labels: branch-1 > > TestBlocksRead does not fail when invoked by itself but is flaky when run as > part of the suite. > Some kind of race during setup. > [ERROR] > testBlocksStoredWhenCachingDisabled(org.apache.hadoop.hbase.regionserver.TestBlocksRead) > Time elapsed: 0.19 s <<< ERROR! > java.net.ConnectException: Call From $HOST/$IP to localhost:59658 failed on > connection exception: java.net.ConnectException: Connection refused; For more > details see: http://wiki.apache.org/hadoop/ConnectionRefused > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.initHRegion(TestBlocksRead.java:112) > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.testBlocksStoredWhenCachingDisabled(TestBlocksRead.java:389) > Caused by: java.net.ConnectException: Connection refused > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.initHRegion(TestBlocksRead.java:112) > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.testBlocksStoredWhenCachingDisabled(TestBlocksRead.java:389) > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-22126) TestBlocksRead is flaky
[ https://issues.apache.org/jira/browse/HBASE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-22126. --- Assignee: (was: Sandeep Pal) Resolution: Won't Fix > TestBlocksRead is flaky > --- > > Key: HBASE-22126 > URL: https://issues.apache.org/jira/browse/HBASE-22126 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 1.5.0 >Reporter: Andrew Kyle Purtell >Priority: Major > Labels: branch-1 > > TestBlocksRead does not fail when invoked by itself but is flaky when run as > part of the suite. > Some kind of race during setup. > [ERROR] > testBlocksStoredWhenCachingDisabled(org.apache.hadoop.hbase.regionserver.TestBlocksRead) > Time elapsed: 0.19 s <<< ERROR! > java.net.ConnectException: Call From $HOST/$IP to localhost:59658 failed on > connection exception: java.net.ConnectException: Connection refused; For more > details see: http://wiki.apache.org/hadoop/ConnectionRefused > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.initHRegion(TestBlocksRead.java:112) > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.testBlocksStoredWhenCachingDisabled(TestBlocksRead.java:389) > Caused by: java.net.ConnectException: Connection refused > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.initHRegion(TestBlocksRead.java:112) > at > org.apache.hadoop.hbase.regionserver.TestBlocksRead.testBlocksStoredWhenCachingDisabled(TestBlocksRead.java:389) > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-23147) Branches-1 shaded artifact for mapreduce integration misses MainClass
[ https://issues.apache.org/jira/browse/HBASE-23147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23147: -- Fix Version/s: (was: 1.8.0) > Branches-1 shaded artifact for mapreduce integration misses MainClass > - > > Key: HBASE-23147 > URL: https://issues.apache.org/jira/browse/HBASE-23147 > Project: HBase > Issue Type: Bug > Components: Client, mapreduce >Affects Versions: 1.5.0, 1.3.6, 1.4.11 >Reporter: Sean Busbey >Priority: Major > Labels: beginner > > the shaded artifact we intend for folks to use when doing mapreduce stuff in > branches-1 is {{hbase-shaded-server}}, but it fails to define the same > {{MainClass}} for the jar as the {{hbase-server}} artifact. This prevents > commands like this from working: > {code} > $ hadoop jar some/path/to/hbase-shaded-server-1.4.11-SNAPSHOT.jar importtsv > -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 > test:example example/ -libjars $(hbase mapredcp) > Exception in thread "main" java.lang.ClassNotFoundException: importtsv > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at org.apache.hadoop.util.RunJar.run(RunJar.java:232) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} > The {{hbase-shaded-server}} module pom should have the same maven-jar-plugin > config for setting the main class as the {{hbase-server}} module. > This is not an issue in branches-2+ because as a part of moving this stuff > into a {{hbase-mapreduce}} and {{hbase-shaded-mapreduce}} (HBASE-18697) we > corrected this gap on the shaded artifact. 
> Work around by specifying the class manually > {code} > hadoop jar some/path/to/hbase-shaded-server-1.4.11-SNAPSHOT.jar > org.apache.hadoop.hbase.mapreduce.Driver importtsv > -Dimporttsv.columns=HBASE_ROW_KEY,family1:column1,family1:column4,family1:column3 > test:example example/ -libjars $(hbase mapredcp) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (HBASE-21140) Backport 'HBASE-21136 NPE in MetricsTableSourceImpl.updateFlushTime' to branch-1 . (and backport HBASE-15728 for branch-1)
[ https://issues.apache.org/jira/browse/HBASE-21140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reopened HBASE-21140: --- > Backport 'HBASE-21136 NPE in MetricsTableSourceImpl.updateFlushTime' to > branch-1 . (and backport HBASE-15728 for branch-1) > --- > > Key: HBASE-21140 > URL: https://issues.apache.org/jira/browse/HBASE-21140 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Duo Zhang >Priority: Major > Attachments: > HBASE-21140.diff_against_cf198a65e8d704d28538c4c165a941b9e5bac678.branch-1.001.patch > > > There is no computeIfAbsent method on branch-1 as we still need to support > JDK7, so the fix will be different with branch-2+. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-21140) Backport 'HBASE-21136 NPE in MetricsTableSourceImpl.updateFlushTime' to branch-1 . (and backport HBASE-15728 for branch-1)
[ https://issues.apache.org/jira/browse/HBASE-21140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-21140. --- Resolution: Won't Fix > Backport 'HBASE-21136 NPE in MetricsTableSourceImpl.updateFlushTime' to > branch-1 . (and backport HBASE-15728 for branch-1) > --- > > Key: HBASE-21140 > URL: https://issues.apache.org/jira/browse/HBASE-21140 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Duo Zhang >Priority: Major > Attachments: > HBASE-21140.diff_against_cf198a65e8d704d28538c4c165a941b9e5bac678.branch-1.001.patch > > > There is no computeIfAbsent method on branch-1 as we still need to support > JDK7, so the fix will be different with branch-2+. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-21904) TestSimpleRpcScheduler is still flaky
[ https://issues.apache.org/jira/browse/HBASE-21904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-21904. --- Resolution: Won't Fix Resolved as won't fix. > TestSimpleRpcScheduler is still flaky > - > > Key: HBASE-21904 > URL: https://issues.apache.org/jira/browse/HBASE-21904 > Project: HBase > Issue Type: Test > Components: test >Affects Versions: 1.5.0 >Reporter: Andrew Kyle Purtell >Priority: Major > Labels: branch-1 > Attachments: > org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler-output.txt > > > Flaky wait condition, unclear if it's the wait condition or the underlying > functionality that is the problem. > [ERROR] > testSoftAndHardQueueLimits(org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler) > Time elapsed: 0.228 s <<< FAILURE! > java.lang.AssertionError > at > org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler.testSoftAndHardQueueLimits(TestSimpleRpcScheduler.java:380) -- This message was sent by Atlassian Jira (v8.20.10#820010)