Re: [PR] HBASE-28427 FNFE related to 'master:store' when moving archived hfiles to global archived dir [hbase]
Apache-HBase commented on PR #5756: URL: https://github.com/apache/hbase/pull/5756#issuecomment-1990900802

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 2m 27s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 13s | master passed |
| +1 :green_heart: | compile | 0m 52s | master passed |
| +1 :green_heart: | shadedjars | 5m 35s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 55s | the patch passed |
| +1 :green_heart: | compile | 0m 53s | the patch passed |
| +1 :green_heart: | javac | 0m 53s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 33s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 202m 56s | hbase-server in the patch passed. |
| | | 229m 6s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5756/3/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5756 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux d20d73f4faca 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-17.0.10+7 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5756/3/testReport/ |
| Max. process+thread count | 5444 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5756/3/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28401) Introduce a close method for memstore for release active segment
[ https://issues.apache.org/jira/browse/HBASE-28401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825534#comment-17825534 ]

Hudson commented on HBASE-28401:
--------------------------------

Results for branch branch-2 [build #1010 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1010/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1010/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1010/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1010/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1010/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Introduce a close method for memstore for release active segment
> -----------------------------------------------------------------
>
>                 Key: HBASE-28401
>                 URL: https://issues.apache.org/jira/browse/HBASE-28401
>             Project: HBase
>          Issue Type: Sub-task
>          Components: in-memory-compaction, regionserver
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
> Per the analysis in parent issue, we will always have an active segment in memstore even if it is empty, so if we do not call close on it, it will lead to a netty leak warning message.
> Although there is no real memory leak for this case, we'd better still fix it as it may hide other memory leak problem.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
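The segment-release pattern the issue describes can be sketched as a close hook that always releases the active segment, even when it is empty. All class names below are hypothetical stand-ins, not the actual HBase memstore API; the refcounted buffer mimics the netty-style accounting whose leak detector the issue mentions.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for a netty-style refcounted backing buffer.
class RefCountedBuffer {
  private final AtomicInteger refCnt = new AtomicInteger(1);
  boolean release() { return refCnt.decrementAndGet() == 0; }
  int refCnt() { return refCnt.get(); }
}

// Stand-in for a memstore segment holding one refcounted buffer.
class Segment {
  final RefCountedBuffer buffer = new RefCountedBuffer();
  void close() { buffer.release(); }
}

// Sketch of the fix: the memstore always has an active segment, so its own
// close() must release that segment or the buffer's refcount never reaches 0.
class MemStoreSketch implements AutoCloseable {
  private Segment active = new Segment(); // always present, even if empty
  Segment active() { return active; }
  @Override
  public void close() {
    if (active != null) {
      active.close();
      active = null;
    }
  }
}
```

Without the close hook, an empty-but-allocated active segment keeps its buffer at refcount 1 forever, which is exactly the "leak warning with no real leak" the issue describes.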
[jira] [Created] (HBASE-28438) Add support splitting region into multiple regions
Rajeshbabu Chintaguntla created HBASE-28438:
-----------------------------------------------

             Summary: Add support splitting region into multiple regions
                 Key: HBASE-28438
                 URL: https://issues.apache.org/jira/browse/HBASE-28438
             Project: HBase
          Issue Type: Improvement
            Reporter: Rajeshbabu Chintaguntla
            Assignee: Rajeshbabu Chintaguntla

We have a requirement to split one region into many hundreds of regions at a time, to distribute the load of hot data. Today we must split a region, wait for the split to complete, then split the two daughter regions again, and so on, which is a time-consuming activity. It would be better to support splitting a region into more than two regions, so that a single operation can produce all of them. To do that we need to take care of:
1) Supporting admin APIs that take multiple split keys
2) Implementing a new procedure to create the new regions, create their meta entries and update them in meta
3) Closing the parent region and opening the split regions
4) Updating post-split compactions and readers to use a portion-of-store-file reader based on the range to scan, rather than the half store file reader
5) Making sure the catalog janitor also cleans up the parent region once all the daughter regions have split properly
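Item 1) implies computing the N-1 split points up front. The even-spacing idea (the same one behind HBase's `Bytes.split` utility) can be sketched without any HBase dependency by treating padded row keys as unsigned big integers; the class and method names here are hypothetical.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: interpolate (n - 1) evenly spaced split keys between
// start and end (start < end lexicographically), yielding n regions.
class MultiSplit {
  static List<byte[]> splitKeys(byte[] start, byte[] end, int n) {
    int width = Math.max(start.length, end.length) + 1; // pad to avoid ties
    BigInteger lo = new BigInteger(1, pad(start, width));
    BigInteger hi = new BigInteger(1, pad(end, width));
    BigInteger step = hi.subtract(lo).divide(BigInteger.valueOf(n));
    List<byte[]> keys = new ArrayList<>();
    for (int i = 1; i < n; i++) {
      keys.add(toKey(lo.add(step.multiply(BigInteger.valueOf(i))), width));
    }
    return keys;
  }

  // Right-pad a key with 0x00 bytes up to the given width.
  private static byte[] pad(byte[] key, int width) {
    byte[] out = new byte[width];
    System.arraycopy(key, 0, out, 0, key.length);
    return out;
  }

  // Convert the interpolated value back to a fixed-width key.
  private static byte[] toKey(BigInteger v, int width) {
    byte[] raw = v.toByteArray();
    byte[] out = new byte[width];
    int copy = Math.min(raw.length, width);
    System.arraycopy(raw, raw.length - copy, out, width - copy, copy);
    return out;
  }
}
```

An admin API taking multiple split keys would then hand this list to the new split procedure, rather than issuing n-1 sequential two-way splits.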
[jira] [Commented] (HBASE-28401) Introduce a close method for memstore for release active segment
[ https://issues.apache.org/jira/browse/HBASE-28401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825512#comment-17825512 ]

Hudson commented on HBASE-28401:
--------------------------------

Results for branch branch-2.6 [build #74 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/74/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/74/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/74/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/74/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/74/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Introduce a close method for memstore for release active segment
> -----------------------------------------------------------------
>
>                 Key: HBASE-28401
>                 URL: https://issues.apache.org/jira/browse/HBASE-28401
>             Project: HBase
>          Issue Type: Sub-task
>          Components: in-memory-compaction, regionserver
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
> Per the analysis in parent issue, we will always have an active segment in memstore even if it is empty, so if we do not call close on it, it will lead to a netty leak warning message.
> Although there is no real memory leak for this case, we'd better still fix it as it may hide other memory leak problem.
Re: [PR] HBASE-28427 FNFE related to 'master:store' when moving archived hfiles to global archived dir [hbase]
Apache-HBase commented on PR #5756: URL: https://github.com/apache/hbase/pull/5756#issuecomment-1990037117

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 12s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 45s | master passed |
| +1 :green_heart: | compile | 2m 26s | master passed |
| +1 :green_heart: | checkstyle | 0m 35s | master passed |
| +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 30s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 44s | the patch passed |
| +1 :green_heart: | compile | 2m 26s | the patch passed |
| +1 :green_heart: | javac | 2m 26s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 36s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 4m 51s | Patch does not cause any errors with Hadoop 3.3.6. |
| +1 :green_heart: | spotless | 0m 42s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 37s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. |
| | | 27m 20s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5756/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5756 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux bee0e47cb3a7 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 81 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5756/3/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-28428) ConnectionRegistry APIs should have timeout
[ https://issues.apache.org/jira/browse/HBASE-28428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825489#comment-17825489 ]

Viraj Jasani commented on HBASE-28428:
--------------------------------------

Yes, we have plans to migrate to MasterRegistry (RpcConnectionRegistry). However, as of today, we do have zookeeper timeouts, though perhaps not aggressive enough. A timeout for the CompletableFuture based connection registry APIs would be very useful in case the client thread somehow gets stuck due to network or OS level issues. The idea here is to provide a timeout to Future#get for all connection registry APIs.

> ConnectionRegistry APIs should have timeout
> -------------------------------------------
>
>                 Key: HBASE-28428
>                 URL: https://issues.apache.org/jira/browse/HBASE-28428
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.8
>            Reporter: Viraj Jasani
>            Assignee: Lokesh Khurana
>            Priority: Major
>
> Came across a couple of instances where active master failover happens around the same time as Zookeeper leader failover, leading to a stuck HBase client if one of the threads is blocked on one of the ConnectionRegistry rpc calls. ConnectionRegistry APIs are wrapped with CompletableFuture. However, their usages do not have any timeouts, which can potentially leave the entire client stuck indefinitely as we take some global locks. For instance, _getKeepAliveMasterService()_ takes {_}masterLock{_}, hence if getting the active master from _masterAddressZNode_ gets stuck, we can block any admin operation that needs {_}getKeepAliveMasterService(){_}.
>
> Sample stacktrace that blocked all client operations that required table descriptor from Admin:
> {code:java}
> jdk.internal.misc.Unsafe.park
> java.util.concurrent.locks.LockSupport.park
> java.util.concurrent.CompletableFuture$Signaller.block
> java.util.concurrent.ForkJoinPool.managedBlock
> java.util.concurrent.CompletableFuture.waitingGet
> java.util.concurrent.CompletableFuture.get
> org.apache.hadoop.hbase.client.ConnectionImplementation.get
> org.apache.hadoop.hbase.client.ConnectionImplementation.access$?
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub
> org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService
> org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster
> org.apache.hadoop.hbase.client.MasterCallable.prepare
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor
> org.apache.hadoop.hbase.client.HTable.getDescriptor
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor
> org.apache.phoenix.query.DelegateConnectionQueryServices.getTableDescriptor
> org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled
> org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations
> org.apache.phoenix.execute.MutationState.sendBatch
> org.apache.phoenix.execute.MutationState.send
> org.apache.phoenix.execute.MutationState.send
> org.apache.phoenix.execute.MutationState.commit
> org.apache.phoenix.jdbc.PhoenixConnection$?.call
> org.apache.phoenix.jdbc.PhoenixConnection$?.call
> org.apache.phoenix.call.CallRunner.run
> org.apache.phoenix.jdbc.PhoenixConnection.commit {code}
> Another similar incident is captured on PHOENIX-7233.
> In this case, retrieving clusterId from ZNode got stuck and that blocked the client from being able to create any more HBase Connections. Stacktrace for reference:
> {code:java}
> jdk.internal.misc.Unsafe.park
> java.util.concurrent.locks.LockSupport.park
> java.util.concurrent.CompletableFuture$Signaller.block
> java.util.concurrent.ForkJoinPool.managedBlock
> java.util.concurrent.CompletableFuture.waitingGet
> java.util.concurrent.CompletableFuture.get
> org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance?
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance
> jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance
> java.lang.reflect.Constructor.newInstance
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$?
> org.apache.hadoop.hbase.client.ConnectionFactory$$Lambda$?.run
> java.security.AccessController.doPrivileged
> javax.security.auth.Subject.doAs
> org.apache.hadoop.security.UserGroupInformation.doAs
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection
> org.apache.hadoop.hbase.client.ConnectionFactory.createC
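Both hangs above block on an unbounded `CompletableFuture.get()`. The proposed change boils down to bounding that wait with `Future#get(timeout, unit)`, so a stuck ZooKeeper read surfaces as a `TimeoutException` instead of a permanently parked thread. A minimal sketch; the wrapper name and timeout value are illustrative, not the actual HBase API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical wrapper: every connection-registry future is awaited with a
// bounded get() so the caller can fail fast while holding global locks.
class BoundedRegistryGet {
  static <T> T getWithTimeout(CompletableFuture<T> future, long millis)
      throws ExecutionException, InterruptedException, TimeoutException {
    // Unlike future.get(), this throws TimeoutException after `millis` ms.
    return future.get(millis, TimeUnit.MILLISECONDS);
  }
}
```

Callers such as the `retrieveClusterId` path would then catch `TimeoutException` and fail the connection attempt rather than parking forever under `masterLock`.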
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989710732

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 49s | master passed |
| +1 :green_heart: | compile | 0m 51s | master passed |
| +1 :green_heart: | shadedjars | 5m 15s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 58s | the patch passed |
| +1 :green_heart: | compile | 0m 54s | the patch passed |
| +1 :green_heart: | javac | 0m 54s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 18s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 26s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 232m 2s | hbase-server in the patch passed. |
| | | 256m 6s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 467273a6f0b3 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/testReport/ |
| Max. process+thread count | 5259 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989687242

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 25s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 5s | master passed |
| +1 :green_heart: | compile | 0m 52s | master passed |
| +1 :green_heart: | shadedjars | 5m 27s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 26s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 51s | the patch passed |
| +1 :green_heart: | compile | 0m 53s | the patch passed |
| +1 :green_heart: | javac | 0m 53s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 26s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 203m 26s | hbase-server in the patch passed. |
| | | 227m 40s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux cde69b120c06 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-17.0.10+7 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/testReport/ |
| Max. process+thread count | 5088 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
Apache-HBase commented on PR #5762: URL: https://github.com/apache/hbase/pull/5762#issuecomment-1989655522

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 52s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 55s | branch-2 passed |
| +1 :green_heart: | compile | 2m 9s | branch-2 passed |
| +1 :green_heart: | shadedjars | 8m 30s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 25s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 29s | the patch passed |
| +1 :green_heart: | compile | 1m 58s | the patch passed |
| +1 :green_heart: | javac | 1m 58s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 15s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 7s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 53s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 1m 17s | hbase-asyncfs in the patch passed. |
| +1 :green_heart: | unit | 212m 0s | hbase-server in the patch passed. |
| | | 255m 16s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5762 |
| JIRA Issue | HBASE-28260 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 7839206b407d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / af94d17d77 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/testReport/ |
| Max. process+thread count | 4495 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-asyncfs hbase-server U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
Apache-HBase commented on PR #5762: URL: https://github.com/apache/hbase/pull/5762#issuecomment-1989643945

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 42s | Docker mode activated. |
| -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 44s | branch-2 passed |
| +1 :green_heart: | compile | 1m 8s | branch-2 passed |
| +1 :green_heart: | shadedjars | 5m 13s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 47s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 20s | the patch passed |
| +1 :green_heart: | compile | 1m 11s | the patch passed |
| +1 :green_heart: | javac | 1m 11s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 10s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 46s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 51s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 1m 16s | hbase-asyncfs in the patch passed. |
| +1 :green_heart: | unit | 216m 0s | hbase-server in the patch passed. |
| | | 244m 12s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5762 |
| JIRA Issue | HBASE-28260 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 5d0405b99f3d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / af94d17d77 |
| Default Java | Temurin-1.8.0_352-b08 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/testReport/ |
| Max. process+thread count | 4253 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-asyncfs hbase-server U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-28437) Region Server crash in our production environment.
[ https://issues.apache.org/jira/browse/HBASE-28437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825468#comment-17825468 ]

Rushabh Shah commented on HBASE-28437:
--------------------------------------

Just before the crash, we saw the following log line from thread RS-EventLoopGroup-1-92:
{noformat}
2024-03-08 16:50:35,401 WARN [RS-EventLoopGroup-1-92] client.AsyncBatchRpcRetryingCaller - Process batch for [''] in from failed, tries=11
org.apache.hadoop.hbase.RegionTooBusyException: org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=1.0 G, regionName=16226d6d057ca2d10b0377e6596ff291, server=
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4969)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4438)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
	at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
	at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
	at sun.reflect.GeneratedConstructorAccessor162.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1606)
	at org.apache.hadoop.hbase.shaded.protobuf.ResponseConverter.getResults(ResponseConverter.java:189)
	at org.apache.hadoop.hbase.client.AsyncBatchRpcRetryingCaller.lambda$sendToServer$11(AsyncBatchRpcRetryingCaller.java:410)
	at org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:79)
	at org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:70)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:396)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:92)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:425)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:420)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:114)
	at org.apache.hadoop.hbase.ipc.Call.setResponse(Call.java:146)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.finishCall(NettyRpcDuplexHandler.java:157)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:201)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:218)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty
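The `tries=11` in the log above reflects the async batch caller's retry-with-backoff loop on `RegionTooBusyException`. A rough sketch of capped exponential backoff; the base pause and cap here are illustrative values, not HBase's actual retry-pause table:

```java
// Hypothetical sketch: the pause before retry number `tries` doubles each
// attempt, capped at maxPause, so a busy region is not hammered immediately.
class BackoffSketch {
  static long backoffMillis(long basePause, int tries, long maxPause) {
    long pause = basePause * (1L << Math.min(tries, 30)); // clamp shift to avoid overflow
    return Math.min(pause, maxPause);
  }
}
```

With base 100 ms and a 60 s cap, retry 11 would already sit at the cap, which matches a client that keeps failing against a memstore-throttled region for a while before giving up.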
[jira] [Created] (HBASE-28437) Region Server crash in our production environment.
Rushabh Shah created HBASE-28437:
Summary: Region Server crash in our production environment.
Key: HBASE-28437
URL: https://issues.apache.org/jira/browse/HBASE-28437
Project: HBase
Issue Type: Bug
Reporter: Rushabh Shah

Recently we have been seeing a lot of RS crashes in our production environment, each creating a core dump file and an hs_err_pid.log file.
HBase: hbase-2.5
Java: openjdk 1.8
Copying contents from hs_err_pid.log below:
{noformat}
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7f9fb1415ba2, pid=50172, tid=0x7f92a97ec700
#
# JRE version: OpenJDK Runtime Environment (Zulu 8.76.0.18-SA-linux64) (8.0_402-b06) (build 1.8.0_402-b06)
# Java VM: OpenJDK 64-Bit Server VM (25.402-b06 mixed mode linux-amd64 )
# Problematic frame:
# J 19801 C2 org.apache.hadoop.hbase.util.ByteBufferUtils.copyBufferToStream(Ljava/io/OutputStream;Ljava/nio/ByteBuffer;II)V (75 bytes) @ 0x7f9fb1415ba2 [0x7f9fb14159a0+0x202]
#
# Core dump written. Default location: /home/sfdc/core or core.50172
#
# If you would like to submit a bug report, please visit:
# http://www.azul.com/support/
#

--- T H R E A D ---

Current thread (0x7f9fa2d13000): JavaThread "RS-EventLoopGroup-1-92" daemon [_thread_in_Java, id=54547, stack(0x7f92a96ec000,0x7f92a97ed000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x559869daf000

Registers:
RAX=0x7f9dbd8b6460, RBX=0x0008, RCX=0x0005c86b, RDX=0x7f9dbd8b6460
RSP=0x7f92a97eaf20, RBP=0x0002, RSI=0x7f92d225e970, RDI=0x0069
R8 =0x55986975f028, R9 =0x0064ffd8, R10=0x005f, R11=0x7f94a778b290
R12=0x7f9e62855ae8, R13=0x, R14=0x7f9e5a14b1e0, R15=0x7f9fa2d13000
RIP=0x7f9fb1415ba2, EFLAGS=0x00010216, CSGSFS=0x0033, ERR=0x0004
TRAPNO=0x000e

Top of Stack: (sp=0x7f92a97eaf20)
0x7f92a97eaf20: 00690064ff79 7f9dbd8b6460
0x7f92a97eaf30: 7f9dbd8b6460 00570003
0x7f92a97eaf40: 7f94a778b290 000400010004
0x7f92a97eaf50: 0004d090c130 7f9db550
0x7f92a97eaf60: 000800040001 7f92a97eaf90
0x7f92a97eaf70: 7f92d0908648 0001
0x7f92a97eaf80: 0001 005c
0x7f92a97eaf90: 7f94ee8078d0 0206
0x7f92a97eafa0: 7f9db5545a00 7f9fafb63670
0x7f92a97eafb0: 7f9e5a13ed70 00690001
0x7f92a97eafc0: 7f93ab8965b8 7f93b9959210
0x7f92a97eafd0: 7f9db5545a00 7f9fb04b3e30
0x7f92a97eafe0: 7f9e5a13ed70 7f930001
0x7f92a97eaff0: 7f93ab8965b8 7f93a8ae3920
0x7f92a97eb000: 7f93b9959210 7f94a778b290
0x7f92a97eb010: 7f9b60707c20 7f93a8938c28
0x7f92a97eb020: 7f94ee8078d0 7f9b60708608
0x7f92a97eb030: 7f9b60707bc0 7f9b60707c20
0x7f92a97eb040: 0069 7f93ab8965b8
0x7f92a97eb050: 7f94a778b290 7f94a778b290
0x7f92a97eb060: 0005c80d0005c80c a828a590
0x7f92a97eb070: 7f9e5a13ed70 0001270e
0x7f92a97eb080: 7f9db5545790 01440022
0x7f92a97eb090: 7f95ddc800c0 7f93ab89a6c8
0x7f92a97eb0a0: 7f93ae65c270 7f9fb24af990
0x7f92a97eb0b0: 7f93ae65c290 7f93ae65c270
0x7f92a97eb0c0: 7f9e5a13ed70 7f92ca328528
0x7f92a97eb0d0: 7f9e5a13ed98 7f9e5e1e88b0
0x7f92a97eb0e0: 7f92ca32d870 7f9e5a13ed98
0x7f92a97eb0f0: 7f9e5e1e88b0 7f93b9956288
0x7f92a97eb100: 7f9e5a13ed70 7f9fb23c3aac
0x7f92a97eb110: 7f9317c9c8d0 7f9b60708608

Instructions: (pc=0x7f9fb1415ba2)
0x7f9fb1415b82: 44 3b d7 0f 8d 6d fe ff ff 4c 8b 40 10 45 8b ca
0x7f9fb1415b92: 44 03 0c 24 c4 c1 f9 7e c3 4d 8b 5b 18 4d 63 c9
0x7f9fb1415ba2: 47 0f be 04 08 4d 85 db 0f 84 49 03 00 00 4d 8b
0x7f9fb1415bb2: 4b 08 48 b9 10 1c be 10 93 7f 00 00 4c 3b c9 0f

Register to memory mapping:
RAX=0x7f9dbd8b6460 is an oop java.nio.DirectByteBuffer - klass: 'java/nio/DirectByteBuffer'
RBX=0x0008 is an unknown value
RCX=0x0005c86b is an unknown value
RDX=0x7f9dbd8b6460 is an oop java.nio.DirectByteBuffer - klass: 'java/nio/DirectByteBuffer'
RSP=0x7f92a97eaf20 is pointing into the stack for thread: 0x7f9fa2d13000
RBP=0x0002 is an unknown value
RSI=0x7f92d225e970 is pointing into metadata
RDI=0x0069 is an unknown value
R8 =0x55986975f028 is an unknown value
R9 =0x0064ffd8 is an unknown value
R10=0x005f is an unknown value
R11=0x7f94a778b290 is an oop org.apache.hbase.thirdparty.io.netty.buffer.PooledUnsafeDirectByteBuf - klass:
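For context, the crashing frame's descriptor, copyBufferToStream(Ljava/io/OutputStream;Ljava/nio/ByteBuffer;II)V, is a method that writes a byte range of a ByteBuffer to a stream. The sketch below is a simplified stand-in for that contract, not HBase's actual implementation; it illustrates why Java-level bounds checks alone cannot prevent a SIGSEGV here, since a crash in the direct-buffer path typically means the buffer's native memory was freed while still referenced.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class CopyBufferSketch {
    // Simplified stand-in for ByteBufferUtils.copyBufferToStream: copies
    // in[offset, offset+length) to 'out' using absolute gets, so the
    // buffer's position is never moved.
    public static void copyBufferToStream(OutputStream out, ByteBuffer in,
            int offset, int length) throws IOException {
        if (offset < 0 || length < 0 || offset + length > in.limit()) {
            throw new IndexOutOfBoundsException(
                "offset=" + offset + " length=" + length + " limit=" + in.limit());
        }
        if (in.hasArray()) {
            // Heap buffer: delegate to the backing array in one call.
            out.write(in.array(), in.arrayOffset() + offset, length);
        } else {
            // Direct buffer: read byte by byte from native memory. If that
            // memory has already been released, the JVM can fault here even
            // though the Java-level bounds check above passed.
            for (int i = 0; i < length; i++) {
                out.write(in.get(offset + i));
            }
        }
    }
}
```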
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989449918

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 38s | master passed |
| +1 :green_heart: | compile | 2m 53s | master passed |
| +1 :green_heart: | checkstyle | 0m 43s | master passed |
| +1 :green_heart: | spotless | 0m 50s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 45s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 44s | the patch passed |
| +1 :green_heart: | compile | 2m 59s | the patch passed |
| +1 :green_heart: | javac | 2m 59s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 42s | hbase-server: The patch generated 0 new + 9 unchanged - 1 fixed = 9 total (was 10) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 5m 46s | Patch does not cause any errors with Hadoop 3.3.6. |
| +1 :green_heart: | spotless | 0m 49s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 2m 0s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. |
| | | | 34m 8s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 5f34e8757965 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 80 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989444583

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 12s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 26s | master passed |
| +1 :green_heart: | compile | 0m 42s | master passed |
| +1 :green_heart: | shadedjars | 5m 4s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 30s | the patch passed |
| +1 :green_heart: | compile | 0m 42s | the patch passed |
| +1 :green_heart: | javac | 0m 42s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 8s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 11m 9s | hbase-server in the patch failed. |
| | | | 30m 29s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux d74af2398ebd 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Temurin-1.8.0_352-b08 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/testReport/ |
| Max. process+thread count | 1565 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/7/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989382791

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 28s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 29s | master passed |
| +1 :green_heart: | compile | 1m 6s | master passed |
| +1 :green_heart: | shadedjars | 6m 7s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 36s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 12s | the patch passed |
| +1 :green_heart: | compile | 0m 58s | the patch passed |
| +1 :green_heart: | javac | 0m 58s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 19s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 12m 4s | hbase-server in the patch failed. |
| | | | 36m 32s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 3cd7a9a3860f 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-17.0.10+7 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-jdk17-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/testReport/ |
| Max. process+thread count | 1796 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989379792

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 35s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 53s | master passed |
| +1 :green_heart: | compile | 0m 50s | master passed |
| +1 :green_heart: | shadedjars | 5m 16s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 52s | the patch passed |
| +1 :green_heart: | compile | 0m 51s | the patch passed |
| +1 :green_heart: | javac | 0m 51s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 19s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 14m 5s | hbase-server in the patch failed. |
| | | | 35m 25s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 6fa0c821ee0d 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/testReport/ |
| Max. process+thread count | 1766 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-28395) TableNotFoundException when executing 'hbase hbck'
[ https://issues.apache.org/jira/browse/HBASE-28395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825434#comment-17825434 ]

Hudson commented on HBASE-28395:

Results for branch master [build #1031 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/]: (x) *{color:red}-1 overall{color}*
details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> TableNotFoundException when executing 'hbase hbck'
> --
>
> Key: HBASE-28395
> URL: https://issues.apache.org/jira/browse/HBASE-28395
> Project: HBase
> Issue Type: Bug
> Components: hbck
> Environment: hbase master
> Reporter: guluo
> Assignee: guluo
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.0.0-beta-2
>
> *Reproduction:*
> Start the hbase cluster
> Execute the command 'hbase hbck'
> A TableNotFoundException occurs about '{color:#FF}hbase:replication{color}'
> {code:java}
> // code placeholder
> org.apache.hadoop.hbase.replication.ReplicationException: failed to listAllQueues
>         at org.apache.hadoop.hbase.replication.TableReplicationQueueStorage.listAllQueues(TableReplicationQueueStorage.java:289)
>         at org.apache.hadoop.hbase.util.hbck.ReplicationChecker.getUnDeletedQueues(ReplicationChecker.java:77)
>         at org.apache.hadoop.hbase.util.hbck.ReplicationChecker.checkUnDeletedQueues(ReplicationChecker.java:105)
>         at org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixReplication(HBaseFsck.java:2575)
>         at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:803)
>         at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3752)
>         at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3551)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97)
>         at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3536)
> Caused by: org.apache.hadoop.hbase.TableNotFoundException: hbase:replication
> {code}
> *Problem analysis:*
> HBase does not create the table 'hbase:replication' until the first add_peer operation is executed.
> However, 'hbase hbck' checks 'hbase:replication', so if this table has not been created yet, we get a TableNotFoundException.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
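The problem analysis above describes a lazily created system table tripping a checker. The sketch below models that idea only; all class and method names are invented for illustration and this is not the actual HBase fix. The point is simply that a queue listing should treat a missing hbase:replication table as "no queues" rather than propagating a TableNotFoundException to 'hbase hbck'.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical model: the replication queue table exists only after the
// first add_peer, so listing must tolerate its absence.
public class ReplicationCheckSketch {
    private final Map<String, Set<String>> tables = new HashMap<>();

    // Simulates the first add_peer creating hbase:replication lazily.
    public void addPeer() {
        tables.putIfAbsent("hbase:replication", Collections.emptySet());
    }

    // Guarded listing: an absent table means "no undeleted queues",
    // not an error surfaced to the hbck user.
    public Set<String> listAllQueues() {
        Set<String> queues = tables.get("hbase:replication");
        return queues == null ? Collections.emptySet() : queues;
    }
}
```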
[jira] [Commented] (HBASE-28401) Introduce a close method for memstore for release active segment
[ https://issues.apache.org/jira/browse/HBASE-28401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825433#comment-17825433 ]

Hudson commented on HBASE-28401:

Results for branch master [build #1031 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/]: (x) *{color:red}-1 overall{color}*
details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1031/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Introduce a close method for memstore for release active segment
>
> Key: HBASE-28401
> URL: https://issues.apache.org/jira/browse/HBASE-28401
> Project: HBase
> Issue Type: Sub-task
> Components: in-memory-compaction, regionserver
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
> Per the analysis in the parent issue, we will always have an active segment in the memstore even if it is empty, so if we do not call close on it, it will lead to a netty leak warning message.
> Although there is no real memory leak in this case, we'd better still fix it as it may hide other memory leak problems.
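As a hedged illustration of the issue quoted above (the names below are invented for the sketch; this is not the HBASE-28401 patch): a memstore that always holds an active segment, even an empty one, must release that segment on close, or the segment's ref-counted buffers are never returned and a leak warning is logged.

```java
// Illustrative sketch only: a minimal ref-counted segment and a memstore
// whose close() releases the always-present active segment.
class Segment {
    private int refCount = 1;

    boolean isReleased() {
        return refCount == 0;
    }

    void release() {
        if (refCount > 0) {
            refCount--;
        }
        // In the real system, reaching zero returns pooled (e.g. netty)
        // buffers; skipping this produces a leak warning, not a true leak.
    }
}

class MemStoreSketch implements AutoCloseable {
    // An active segment exists from construction, even before any writes.
    private final Segment active = new Segment();

    Segment getActive() {
        return active;
    }

    @Override
    public void close() {
        active.release();
    }
}
```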
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989376583

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 54s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 3s | master passed |
| +1 :green_heart: | compile | 2m 30s | master passed |
| +1 :green_heart: | checkstyle | 0m 36s | master passed |
| +1 :green_heart: | spotless | 0m 53s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 52s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 50s | the patch passed |
| +1 :green_heart: | compile | 3m 17s | the patch passed |
| +1 :green_heart: | javac | 3m 17s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 43s | hbase-server: The patch generated 0 new + 9 unchanged - 1 fixed = 9 total (was 10) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 5m 48s | Patch does not cause any errors with Hadoop 3.3.6. |
| +1 :green_heart: | spotless | 0m 45s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 50s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. |
| | | | 34m 1s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux adde281d546a 5.4.0-169-generic #187-Ubuntu SMP Thu Nov 23 14:52:28 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 81 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28385 make Scan estimates more realistic [hbase]
Apache-HBase commented on PR #5713: URL: https://github.com/apache/hbase/pull/5713#issuecomment-1989365618

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 12s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 32s | master passed |
| +1 :green_heart: | compile | 0m 43s | master passed |
| +1 :green_heart: | shadedjars | 5m 5s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 22s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 28s | the patch passed |
| +1 :green_heart: | compile | 0m 42s | the patch passed |
| +1 :green_heart: | javac | 0m 42s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 7s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 11m 6s | hbase-server in the patch failed. |
| | | | 30m 29s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5713 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 49c34d87e2fe 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 34b738d2ac |
| Default Java | Temurin-1.8.0_352-b08 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/testReport/ |
| Max. process+thread count | 1567 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5713/6/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
Apache-HBase commented on PR #5762: URL: https://github.com/apache/hbase/pull/5762#issuecomment-1989351370

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 40s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 16s | branch-2 passed |
| +1 :green_heart: | compile | 3m 14s | branch-2 passed |
| +1 :green_heart: | checkstyle | 0m 55s | branch-2 passed |
| +1 :green_heart: | spotless | 0m 44s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 2m 17s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 57s | the patch passed |
| +1 :green_heart: | compile | 4m 3s | the patch passed |
| +1 :green_heart: | javac | 4m 3s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 8s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 26s | Patch does not cause any errors with Hadoop 2.10.2 or 3.3.6. |
| +1 :green_heart: | spotless | 0m 54s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 3m 7s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | | 38m 2s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5762 |
| JIRA Issue | HBASE-28260 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 263e4bccd713 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / af94d17d77 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 80 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-asyncfs hbase-server U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5762/1/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
charlesconnell commented on PR #5762: URL: https://github.com/apache/hbase/pull/5762#issuecomment-1989281046 Sounds good. Feel free to merge this when ready.
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
bbeaudreault commented on PR #5762: URL: https://github.com/apache/hbase/pull/5762#issuecomment-1989276225 branch-3 was a clean cherry-pick and the UTs all passed locally. Happy to use your branch here if you prefer for branch-2. Either way, I will cherry-pick to branch-2.6 once merged.
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
charlesconnell commented on PR #5762: URL: https://github.com/apache/hbase/pull/5762#issuecomment-1989260309 Looks fine, but I also had these branches going: https://github.com/HubSpot/hbase/tree/HBASE-28260/branch-3/wal-avoid-local-writes, https://github.com/HubSpot/hbase/tree/HBASE-28260/branch-2/wal-avoid-local-writes
[PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
bbeaudreault opened a new pull request, #5762: URL: https://github.com/apache/hbase/pull/5762 Contributed-by: Charles Connell FYI @charlesconnell, I had to fix some merge conflicts here so doing it as a PR to let the whole test suite run
[jira] [Commented] (HBASE-28401) Introduce a close method for memstore for release active segment
[ https://issues.apache.org/jira/browse/HBASE-28401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825419#comment-17825419 ] Hudson commented on HBASE-28401: Results for branch branch-3 [build #165 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Introduce a close method for memstore for release active segment > > > Key: HBASE-28401 > URL: https://issues.apache.org/jira/browse/HBASE-28401 > Project: HBase > Issue Type: Sub-task > Components: in-memory-compaction, regionserver >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9 > > > Per the analysis in parent issue, we will always have an active segment in > memstore even if it is empty, so if we do not call close on it, it will lead > to a netty leak warning message. > Although there is no real memory leak for this case, we'd better still fix it > as it may hide other memory leak problem. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28395) TableNotFoundException when executing 'hbase hbck'
[ https://issues.apache.org/jira/browse/HBASE-28395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825420#comment-17825420 ] Hudson commented on HBASE-28395: Results for branch branch-3 [build #165 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/165/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > TableNotFoundException when executing 'hbase hbck' > -- > > Key: HBASE-28395 > URL: https://issues.apache.org/jira/browse/HBASE-28395 > Project: HBase > Issue Type: Bug > Components: hbck > Environment: hbase master >Reporter: guluo >Assignee: guluo >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > *Reproduction:* > Start the hbase cluster > Execute the command 'hbase hbck' > A TableNotFoundException occurs for '{color:#FF}hbase:replication{color}' > {code:java} > org.apache.hadoop.hbase.replication.ReplicationException: failed to > listAllQueues > at > org.apache.hadoop.hbase.replication.TableReplicationQueueStorage.listAllQueues(TableReplicationQueueStorage.java:289) > at > org.apache.hadoop.hbase.util.hbck.ReplicationChecker.getUnDeletedQueues(ReplicationChecker.java:77) > at > org.apache.hadoop.hbase.util.hbck.ReplicationChecker.checkUnDeletedQueues(ReplicationChecker.java:105) > at > org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixReplication(HBaseFsck.java:2575) > at > org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:803) > at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3752) > at > org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3551) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97) > at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3536) > Caused by: org.apache.hadoop.hbase.TableNotFoundException: hbase:replication > {code} > *Problem analysis:* > hbase does not create the table 'hbase:replication' until the first add_peer > operation is executed. > However, the 'hbase hbck' operation checks 'hbase:replication', so if > this table has not been created, we get a TableNotFoundException.
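The problem analysis above suggests a simple guard: skip the replication-queue check while hbase:replication does not exist yet, since it is only created on the first add_peer. A minimal Java sketch of that idea; the names (`TableLookup`, `listQueuesSafely`) are hypothetical and not the actual HBaseFsck/ReplicationChecker API:

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch, not the real HBaseFsck code: guard the queue listing so
// a missing hbase:replication table yields an empty result instead of a
// TableNotFoundException.
class ReplicationCheckSketch {
  interface TableLookup {
    boolean exists(String tableName); // stand-in for an Admin.tableExists call
  }

  static List<String> listQueuesSafely(TableLookup admin, Supplier<List<String>> listAllQueues) {
    if (!admin.exists("hbase:replication")) {
      // The table is created lazily on the first add_peer; nothing to check yet.
      return Collections.emptyList();
    }
    return listAllQueues.get();
  }
}
```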
Re: [PR] HBASE-28427 FNFE related to 'master:store' when moving archived hfiles to global archived dir [hbase]
guluo2016 commented on PR #5756: URL: https://github.com/apache/hbase/pull/5756#issuecomment-1988595659 The update has been made. It seems the tests have passed...?
[jira] [Commented] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825322#comment-17825322 ] Duo Zhang commented on HBASE-28436: --- OK, got it. I think we can first support hbase+zk and hbase+rpc, and then try to add other things to the URL, like authority and configurations. Thanks. > Use connection url to specify the connection registry information > - > > Key: HBASE-28436 > URL: https://issues.apache.org/jira/browse/HBASE-28436 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > As described in this email from [~ndimiduk] > https://lists.apache.org/thread/98wqlkqvlnmpx3r7yrg9mw4pqz9ppofh > The first advantage here is that we can encode the connection registry > implementation in the scheme of the connection url, so for replication we > can now support cluster keys other than zookeeper, which is important for us > to remove the zookeeper dependency from our public facing APIs.
[jira] [Commented] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825317#comment-17825317 ] Istvan Toth commented on HBASE-28436: - Yes, but it is needed for backwards compatibility reasons. Phoenix has always supported specifying the ZK parameters in the URL; the explicit protocol variants were only introduced recently. This way things work as usual (as long as the ZK registry is the default in HBase). Of course, users will have to either change the default registry or the Phoenix URL for HBase 3.0.
[jira] [Commented] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825316#comment-17825316 ] Duo Zhang commented on HBASE-28436: --- Out of interest, is it really useful to support the 'jdbc:phoenix: protocol variant' format? If you want to specify the path component, you should already know whether it is a zookeeper address or an rpc address...
[jira] [Commented] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825294#comment-17825294 ] Duo Zhang commented on HBASE-28436: --- In the first version, we will still get all the configurations from the Configuration instance. We could also add support for specifying configuration via URL query parameters.
[jira] [Commented] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825292#comment-17825292 ] Duo Zhang commented on HBASE-28436: --- I propose we just use the way described in the email to define the protocol. hbase:zookeeper://host:port(,host:port)*/ is for the zookeeper based connection registry; hbase:rpc://host:port(,host:port)* is for the rpc based connection registry. For MasterRegistry, I do not think we need to support it here, as we will still support the old way to specify it. As described in the parent issue, we could use ServiceLoader to load different connection registry implementations. More specifically, I think we could introduce a ConnectionRegistryFactory base interface (it is already there, but since it is IA.Private we could rename the old class...), which has two methods: one returns the scheme of the connection registry implementation, the other is a create method that creates the actual ConnectionRegistry. We will introduce a new configuration for specifying the connection url; if it is not present, we fall back to the old way. And when creating a ConnectionRegistry, we first check the url's scheme and choose the correct ConnectionRegistryFactory to create the ConnectionRegistry. If there is no scheme, we fall back to the old way. Thanks.
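The factory lookup described above can be sketched in a few lines of Java. This is purely illustrative, not the actual HBase API: the class and method names are hypothetical, the real proposal would populate the factory map via java.util.ServiceLoader, and the sketch uses the "hbase+zk" scheme variant mentioned elsewhere in the thread because java.net.URI parses it directly (the "hbase:zookeeper://..." form would need custom scheme handling).

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of scheme-based connection registry selection.
class RegistrySketch {
  interface ConnectionRegistry {}

  interface ConnectionRegistryFactory {
    String getScheme();                 // e.g. "hbase+zk" or "hbase+rpc"
    ConnectionRegistry create(URI uri); // build the registry from the URL
  }

  // In the real proposal this map would be filled via java.util.ServiceLoader.
  private final Map<String, ConnectionRegistryFactory> factories = new HashMap<>();

  void register(ConnectionRegistryFactory factory) {
    factories.put(factory.getScheme(), factory);
  }

  ConnectionRegistry create(String connectionUrl) {
    URI uri = URI.create(connectionUrl);
    ConnectionRegistryFactory factory = factories.get(uri.getScheme());
    if (factory == null) {
      throw new IllegalArgumentException("No registry factory for scheme: " + uri.getScheme());
    }
    return factory.create(uri);
  }

  // Example factory for the hypothetical "hbase+zk" scheme.
  static ConnectionRegistryFactory zkFactory() {
    return new ConnectionRegistryFactory() {
      @Override public String getScheme() { return "hbase+zk"; }
      @Override public ConnectionRegistry create(URI uri) {
        // A real implementation would parse uri.getAuthority(),
        // e.g. "host1:2181,host2:2181", into a zookeeper quorum.
        return new ConnectionRegistry() {};
      }
    };
  }
}
```

Falling back to the old configuration-based path when the URL has no scheme, as the comment proposes, would be a null check on `uri.getScheme()` before the map lookup.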
[jira] [Commented] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825302#comment-17825302 ] Istvan Toth commented on HBASE-28436: - For reference, these are the protocol names used in Phoenix: https://phoenix.apache.org/classpath_and_url.html It would be easier for users if the variants used the same names (or at least accepted them as aliases).
Re: [PR] HBASE-28260: Add NO_WRITE_LOCAL flag to WAL file creation [hbase]
bbeaudreault merged PR #5733: URL: https://github.com/apache/hbase/pull/5733
[jira] [Updated] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28436: -- Component/s: Client
[jira] [Assigned] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-28436: - Assignee: Duo Zhang
[jira] [Work started] (HBASE-28436) Use connection url to specify the connection registry information
[ https://issues.apache.org/jira/browse/HBASE-28436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-28436 started by Duo Zhang.
[jira] [Commented] (HBASE-28428) ConnectionRegistry APIs should have timeout
[ https://issues.apache.org/jira/browse/HBASE-28428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825280#comment-17825280 ] Duo Zhang commented on HBASE-28428: --- Oh, you are still using zookeeper based connection registry, then I think the problem here is you need to have timeout settings for zookeeper operations? > ConnectionRegistry APIs should have timeout > --- > > Key: HBASE-28428 > URL: https://issues.apache.org/jira/browse/HBASE-28428 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.8 >Reporter: Viraj Jasani >Assignee: Lokesh Khurana >Priority: Major > > Came across a couple of instances where active master failover happens around > the same time as Zookeeper leader failover, leading to stuck HBase client if > one of the threads is blocked on one of the ConnectionRegistry rpc calls. > ConnectionRegistry APIs are wrapped with CompletableFuture. However, their > usages do not have any timeouts, which can potentially lead to the entire > client in stuck state indefinitely as we take some global locks. For > instance, _getKeepAliveMasterService()_ takes > {_}masterLock{_}, hence if getting active master from _masterAddressZNode_ > gets stuck, we can block any admin operation that needs > {_}getKeepAliveMasterService(){_}. > > Sample stacktrace that blocked all client operations that required table > descriptor from Admin: > {code:java} > jdk.internal.misc.Unsafe.park > java.util.concurrent.locks.LockSupport.park > java.util.concurrent.CompletableFuture$Signaller.block > java.util.concurrent.ForkJoinPool.managedBlock > java.util.concurrent.CompletableFuture.waitingGet > java.util.concurrent.CompletableFuture.get > org.apache.hadoop.hbase.client.ConnectionImplementation.get > org.apache.hadoop.hbase.client.ConnectionImplementation.access$? 
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStubNoRetries > org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub > org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService > org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster > org.apache.hadoop.hbase.client.MasterCallable.prepare > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries > org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable > org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor > org.apache.hadoop.hbase.client.HTable.getDescriptor > org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor > org.apache.phoenix.query.DelegateConnectionQueryServices.getTableDescriptor > org.apache.phoenix.util.IndexUtil.isGlobalIndexCheckerEnabled > org.apache.phoenix.execute.MutationState.filterIndexCheckerMutations > org.apache.phoenix.execute.MutationState.sendBatch > org.apache.phoenix.execute.MutationState.send > org.apache.phoenix.execute.MutationState.send > org.apache.phoenix.execute.MutationState.commit > org.apache.phoenix.jdbc.PhoenixConnection$?.call > org.apache.phoenix.jdbc.PhoenixConnection$?.call > org.apache.phoenix.call.CallRunner.run > org.apache.phoenix.jdbc.PhoenixConnection.commit {code} > Another similar incident is captured on PHOENIX-7233. In this case, > retrieving clusterId from ZNode got stuck and that blocked the client from being > able to create any more HBase Connections. Stacktrace for reference: > {code:java} > jdk.internal.misc.Unsafe.park > java.util.concurrent.locks.LockSupport.park > java.util.concurrent.CompletableFuture$Signaller.block > java.util.concurrent.ForkJoinPool.managedBlock > java.util.concurrent.CompletableFuture.waitingGet > java.util.concurrent.CompletableFuture.get > org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId > org.apache.hadoop.hbase.client.ConnectionImplementation.
> jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance? > jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance > jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance > java.lang.reflect.Constructor.newInstance > org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$? > org.apache.hadoop.hbase.client.ConnectionFactory$$Lambda$?.run > java.security.AccessController.doPrivileged > javax.security.auth.Subject.doAs > org.apache.hadoop.security.UserGroupInformation.doAs > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection > org.apache.hadoop.hbase.client.ConnectionFactory.createConnectionorg.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection > org.apache.phoenix.query.ConnectionQueryServicesImpl.access$? > org.apache.phoenix.query.ConnectionQueryServicesImpl$?.call > org.apache.phoenix.query.ConnectionQueryServicesImpl$?.call
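Both stack traces above park forever inside an unbounded `CompletableFuture.get()`. A minimal Java illustration of the difference a bounded wait makes; this is not HBase code, and the helper name and timeout value are hypothetical:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of the fix direction: an unbounded get() blocks indefinitely if the
// registry call never completes (the hang described above, while holding locks
// like masterLock); a bounded wait surfaces a timeout the caller can act on.
class RegistryCall {
  static <T> T getWithTimeout(CompletableFuture<T> future, long millis) {
    try {
      return future.get(millis, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Without the bound, the thread would park here forever.
      throw new RuntimeException("registry call timed out after " + millis + " ms", e);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new RuntimeException(e);
    } catch (ExecutionException e) {
      throw new RuntimeException(e.getCause());
    }
  }
}
```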
[jira] [Commented] (HBASE-28428) ConnectionRegistry APIs should have timeout
[ https://issues.apache.org/jira/browse/HBASE-28428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825279#comment-17825279 ] Duo Zhang commented on HBASE-28428: --- I think the correct way here is to introduce separate retry/timeout configs for the connection registry, like what we have for meta operations.
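A sketch of what such separated configs might look like in hbase-site.xml. The meta-operation key (hbase.client.meta.operation.timeout) is an existing HBase setting shown for comparison; the registry key below is hypothetical, purely to illustrate the proposal, and is not a real HBase configuration property:

```xml
<!-- Existing per-subsystem override pattern: meta operations already have
     their own timeout, separate from the general client operation timeout. -->
<property>
  <name>hbase.client.meta.operation.timeout</name>
  <value>30000</value>
</property>

<!-- Hypothetical analog for connection-registry calls (NOT a real HBase key):
     a bounded wait here would keep a stuck registry lookup from hanging the
     client indefinitely. -->
<property>
  <name>hbase.client.registry.operation.timeout</name>
  <value>10000</value>
</property>
```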
[jira] [Created] (HBASE-28436) Use connection url to specify the connection registry information
Duo Zhang created HBASE-28436: - Summary: Use connection url to specify the connection registry information Key: HBASE-28436 URL: https://issues.apache.org/jira/browse/HBASE-28436 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang As described in this email from [~ndimiduk] https://lists.apache.org/thread/98wqlkqvlnmpx3r7yrg9mw4pqz9ppofh The first advantage here is that we can encode the connection registry implementation in the scheme of the connection url, so for replication we can now support cluster keys other than zookeeper, which is important for us to remove the zookeeper dependency from our public facing APIs.
[jira] [Resolved] (HBASE-28401) Introduce a close method for memstore for release active segment
[ https://issues.apache.org/jira/browse/HBASE-28401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28401. --- Fix Version/s: 2.6.0 3.0.0-beta-2 2.5.9 Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.5+. Thanks [~bbeaudreault] for reviewing!
[jira] [Commented] (HBASE-27826) Region split and merge time while offline is O(n) with respect to number of store files
[ https://issues.apache.org/jira/browse/HBASE-27826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825243#comment-17825243 ] Duo Zhang commented on HBASE-27826: --- Good. I think we can work together. In general, I do not think we even need a migration; the current architecture is enough to support the new requirements. Thanks. > Region split and merge time while offline is O(n) with respect to number of > store files > --- > > Key: HBASE-27826 > URL: https://issues.apache.org/jira/browse/HBASE-27826 > Project: HBase > Issue Type: Bug >Affects Versions: 2.5.4 >Reporter: Andrew Kyle Purtell >Priority: Major > > This is a significant availability issue when HFiles are on S3. > HBASE-26079 ({_}Use StoreFileTracker when splitting and merging{_}) changed > the split and merge table procedure implementations to indirect through the > StoreFileTracker implementation when selecting HFiles to be merged or split, > rather than directly listing those using file system APIs. It also changed > the commit logic in HRegionFileSystem to add the link/ref files on resulting > split or merged regions to the StoreFileTracker. However, the creation of a > link file is still a filesystem operation and creating a “file” on S3 can > take well over a second. If, for example, there are 20 store files in a > region, which is not uncommon, after the region is taken offline for a split > (or merge) it may require more than 20 seconds to create the link files > before the results can be brought back online, creating a severe availability > problem. Splits and merges are supposed to be fast, completing in less than a > second, certainly less than a few seconds. This has been true when HFiles are > stored on HDFS only because file creation operations there are nearly > instantaneous. > There are two issues, but both can be handled with modifications to the store > file tracker interface and the file based store file tracker implementation.
> When the file based store file tracker is enabled, the HFile links should > be virtual entities that only exist in the file manifest. We do not require > physical files in the filesystem to serve as links now. That is the magic of > this file tracker: the manifest file replaces the requirement to list the > filesystem. > Then, when splitting or merging, the HFile links should be collected into a > list and committed in one batch using a new FILE file tracker interface, > requiring only one update of the manifest file in S3, bringing the time > requirement for this operation down from O(n) to O(1).
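The batching idea above can be modeled in a few lines. This is a toy sketch, not the StoreFileTracker API; the class and field names are hypothetical, and `manifestWrites` stands in for S3 PUTs of the manifest file:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of batch-committing link files: instead of one storage write per
// link (O(n) S3 PUTs while the region is offline), collect every link created
// by the split/merge and commit them with a single manifest update (O(1)).
class ManifestSketch {
  private final List<String> trackedFiles = new ArrayList<>();
  int manifestWrites = 0; // each increment models one write of the manifest to S3

  void commitBatch(List<String> linkFiles) {
    trackedFiles.addAll(linkFiles);
    manifestWrites++; // one write regardless of batch size
  }

  List<String> tracked() {
    return trackedFiles;
  }
}
```

With 20 link files, the per-file approach described in the report costs 20 slow S3 creations; the batch commit costs one manifest write.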
Re: [PR] HBASE-28401 Introduce a close method for memstore for release active … [hbase]
Apache9 merged PR #5761:
URL: https://github.com/apache/hbase/pull/5761

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-27126) Support multi-threads cleaner for MOB files
[ https://issues.apache.org/jira/browse/HBASE-27126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825210#comment-17825210 ]

Xiaolin Ha commented on HBASE-27126:
------------------------------------

Hi [~chandrasekhar.k], please feel free to take it.

> Support multi-threads cleaner for MOB files
> -------------------------------------------
>
>                 Key: HBASE-27126
>                 URL: https://issues.apache.org/jira/browse/HBASE-27126
>             Project: HBase
>          Issue Type: Improvement
>          Components: mob
>    Affects Versions: 2.4.12
>            Reporter: Xiaolin Ha
>            Priority: Major
>             Fix For: 3.0.0-beta-2
>
> Just like the multiple threads in the HFile cleaner: when there are many tables that have MOB files, only one thread for cleaning them is not enough.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Assigned] (HBASE-27126) Support multi-threads cleaner for MOB files
[ https://issues.apache.org/jira/browse/HBASE-27126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaolin Ha reassigned HBASE-27126:
----------------------------------

    Assignee: Chandra Sekhar K

> Support multi-threads cleaner for MOB files
> -------------------------------------------
>
>                 Key: HBASE-27126
>                 URL: https://issues.apache.org/jira/browse/HBASE-27126
>             Project: HBase
>          Issue Type: Improvement
>          Components: mob
>    Affects Versions: 2.4.12
>            Reporter: Xiaolin Ha
>            Assignee: Chandra Sekhar K
>            Priority: Major
>             Fix For: 3.0.0-beta-2
>
> Just like the multiple threads in the HFile cleaner: when there are many tables that have MOB files, only one thread for cleaning them is not enough.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
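The shape of the improvement requested in HBASE-27126 can be sketched with a plain `ExecutorService`: fan per-table MOB cleaning work out to a fixed pool instead of running it on a single thread. This is a minimal illustration only, not the actual HBase cleaner chore code; `cleanAll` and the per-table task body are hypothetical placeholders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustrative sketch (not actual HBase code) of a multi-threaded MOB file
 * cleaner: each table's cleaning work is submitted to a fixed thread pool,
 * mirroring the multi-threaded HFile cleaner mentioned in the issue.
 */
class MobCleanerSketch {

  /** Cleans all tables using the given number of worker threads. */
  static int cleanAll(List<String> tables, int threads) {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    AtomicInteger cleaned = new AtomicInteger();
    List<Future<?>> futures = new ArrayList<>();
    try {
      for (String table : tables) {
        // Placeholder for the per-table MOB file cleaning work.
        futures.add(pool.submit(() -> cleaned.incrementAndGet()));
      }
      for (Future<?> f : futures) {
        try {
          f.get(); // wait for every table's cleaning task to finish
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }
    } finally {
      pool.shutdown();
    }
    return cleaned.get();
  }
}
```

With `threads = 1` this degenerates to the current single-threaded behavior; a larger pool lets slow per-table scans proceed in parallel.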
[jira] [Commented] (HBASE-28424) Set correct Result to RegionActionResult for successful Put/Delete mutations
[ https://issues.apache.org/jira/browse/HBASE-28424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825177#comment-17825177 ]

Hudson commented on HBASE-28424:
--------------------------------

Results for branch branch-2 [build #1009 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1009/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1009/General_20Nightly_20Build_20Report/]

(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1009/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]

(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1009/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1009/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 source release artifact{color}
-- See build output for details.

(/) {color:green}+1 client integration test{color}

> Set correct Result to RegionActionResult for successful Put/Delete mutations
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-28424
>                 URL: https://issues.apache.org/jira/browse/HBASE-28424
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Viraj Jasani
>            Assignee: Jing Yu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
> While returning the response of multi(), RSRpcServices builds the RegionActionResult with a Result or an Exception (ClientProtos.ResultOrException). It sets the Exception in all cases where the operation fails with a corresponding exception type, e.g. NoSuchColumnFamilyException or FailedSanityCheckException.
>
> In the case of the atomic mutations Increment and Append, we add the Result object to ClientProtos.ResultOrException, which is used by the client to retrieve results from the batch API: {_}Table#batch(List actions, Object[] results){_}.
>
> Phoenix performs an atomic mutation for Put using the _preBatchMutate()_ endpoint. Hence, returning a Result object with ResultOrException is important for the purpose of returning the result back to the client as part of the atomic operation. Even if Phoenix returns the OperationStatus (with Result) to MiniBatchOperationInProgress, since HBase uses the empty Result for the success case, the client would not be able to get the expected result.
> {code:java}
> case SUCCESS:
>   builder.addResultOrException(
>     getResultOrException(ClientProtos.Result.getDefaultInstance(), index));
>   break;
> {code}
> If the OperationStatus returned by _Region#batchMutate_ has a valid Result object, it should be used by RSRpcServices while returning the response.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
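The proposed selection logic can be shown with a small self-contained sketch. The types below (`Result`, `OperationStatus`) are simplified stand-ins, not the real `ClientProtos` or `OperationStatus` classes; the sketch only demonstrates the decision the issue asks for: on SUCCESS, prefer the Result carried by the OperationStatus over the empty default, so a coprocessor-produced result (e.g. from Phoenix's preBatchMutate atomic Put) reaches the client.

```java
/**
 * Minimal sketch of the proposed fix (simplified types, not the actual
 * RSRpcServices code): use the OperationStatus's Result when one is
 * present, and fall back to the empty default Result otherwise.
 */
class ResultSelectionSketch {

  /** Stand-in for ClientProtos.Result; "" plays the role of the default instance. */
  record Result(String payload) {
    static final Result EMPTY = new Result("");
  }

  /** Stand-in for OperationStatus carrying an optional Result. */
  record OperationStatus(Result result) {}

  /** On SUCCESS: prefer the status's Result over the empty default. */
  static Result resultFor(OperationStatus status) {
    return status.result() != null ? status.result() : Result.EMPTY;
  }
}
```

Under today's behavior the SUCCESS branch always returns the equivalent of `Result.EMPTY`; with this selection, a Put handled atomically by a coprocessor can surface its Result through `Table#batch`.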