Re: [PR] HBASE-28218 Add a check for getQueueStorage().hasData() in the getDeletableFiles method of ReplicationLogCleaner [hbase]
hiping-tech commented on code in PR #5536: URL: https://github.com/apache/hbase/pull/5536#discussion_r1402999477

## hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationLogCleaner.java:

@@ -192,6 +192,14 @@ public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
     if (this.getConf() == null) {
       return files;
     }
+    try {
+      if (!rpm.getQueueStorage().hasData()) {
+        return files;
+      }
+    } catch (ReplicationException e) {
+      LOG.error("Error occurred while executing queueStorage.hasData()", e);
+      return files;

Review Comment: Thank you very much for your suggestion. It has been modified.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28218 Add a check for getQueueStorage().hasData() in the getDeletableFiles method of ReplicationLogCleaner [hbase]
Apache-HBase commented on PR #5536: URL: https://github.com/apache/hbase/pull/5536#issuecomment-1823893834

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 28s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 7s | master passed |
| +1 :green_heart: | compile | 2m 33s | master passed |
| +1 :green_heart: | checkstyle | 0m 34s | master passed |
| +1 :green_heart: | spotless | 0m 44s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 31s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 52s | the patch passed |
| +1 :green_heart: | compile | 2m 33s | the patch passed |
| +1 :green_heart: | javac | 2m 33s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 34s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 10m 52s | Patch does not cause any errors with Hadoop 3.2.4 3.3.6. |
| +1 :green_heart: | spotless | 0m 42s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 39s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. |
| | | | 34m 52s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5536/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5536 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 78c5d31369d0 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 79 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5536/1/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28218 Add a check for getQueueStorage().hasData() in the getDeletableFiles method of ReplicationLogCleaner [hbase]
Apache9 commented on code in PR #5536: URL: https://github.com/apache/hbase/pull/5536#discussion_r1402971774

## hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationLogCleaner.java:

@@ -192,6 +192,14 @@ public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
     if (this.getConf() == null) {
       return files;
     }
+    try {
+      if (!rpm.getQueueStorage().hasData()) {
+        return files;
+      }
+    } catch (ReplicationException e) {
+      LOG.error("Error occurred while executing queueStorage.hasData()", e);
+      return files;

Review Comment: Should return empty here for safety?
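Apache9's question concerns the failure path: the list a cleaner delegate returns is the set of files considered deletable, so returning the input on error marks everything deletable, while returning an empty list deletes nothing until queue storage is readable again. Below is a minimal, self-contained sketch of that control flow; `QueueStorage` and the string file names are hypothetical stand-ins for the HBase types, not the real API.

```java
import java.util.Collections;
import java.util.List;

public class CleanerSketch {
    // Hypothetical stand-in for rpm.getQueueStorage(); the real call throws
    // ReplicationException.
    interface QueueStorage {
        boolean hasData() throws Exception;
    }

    // Returns the subset of candidate files this cleaner considers deletable.
    static List<String> getDeletableFiles(List<String> files, QueueStorage storage) {
        try {
            if (!storage.hasData()) {
                // No replication queues exist at all, so replication cannot
                // need any WAL: every candidate is deletable.
                return files;
            }
        } catch (Exception e) {
            // Safer failure mode (the reviewer's suggestion): treat nothing as
            // deletable until the queue storage can be read again.
            return Collections.emptyList();
        }
        // ... the real method would now filter candidates against the queues.
        return files;
    }
}
```

The trade-off is availability versus safety: the empty-list answer can only delay cleanup of old WALs, whereas returning the input on error could delete a WAL that a replication peer still needs.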
[jira] [Commented] (HBASE-28210) There could be holes in stack ids when loading procedures
[ https://issues.apache.org/jira/browse/HBASE-28210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788992#comment-17788992 ]

Hudson commented on HBASE-28210:

Results for branch branch-2 [build #930 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/930/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/930/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/930/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/930/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/930/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> There could be holes in stack ids when loading procedures
> -
>
> Key: HBASE-28210
> URL: https://issues.apache.org/jira/browse/HBASE-28210
> Project: HBase
> Issue Type: Bug
> Components: master, proc-v2
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Critical
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> Found this when implementing HBASE-28199, as after HBASE-28199 we will suspend procedures a lot, so a missed scenario has been covered and it will fail some UTs with corrupted procedures when loading.
> I think this issue should be fixed separately as it affects all active branches.
> Let me try to implement a UT first.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28218) oldwal files cannot be cleaned up.
[ https://issues.apache.org/jira/browse/HBASE-28218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiping lv updated HBASE-28218:
---
Component/s: wal

> oldwal files cannot be cleaned up.
> --
>
> Key: HBASE-28218
> URL: https://issues.apache.org/jira/browse/HBASE-28218
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 3.0.0
> Reporter: Haiping lv
> Assignee: Haiping lv
> Priority: Major
>
> There are a large number of oldwal files in the oldwal directory, and although the default value of hbase.master.logcleaner.ttl is 10 minutes, observation shows that no oldwal files are ever cleared.
> Analysis of the source code shows the problem is in the following logic: when LogCleaner executes its checkAndDeleteFiles method, it calls the getDeletableFiles method of ReplicationLogCleaner. If canFilter is false, that method directly returns Collections.emptyList(), which filters back out the WAL files that TimeToLiveLogCleaner had already marked as deletable. As a result, the files under oldwal can never be cleaned up.
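The failure mode described in the issue follows from how the master chains its cleaner delegates: each delegate receives the previous delegate's deletable set and can only shrink it, so a single delegate returning Collections.emptyList() vetoes every deletion. A simplified, self-contained sketch of that chaining (the method and types here are illustrative stand-ins, not the real HBase classes):

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class CleanerChainSketch {
    // Loosely mirrors LogCleaner.checkAndDeleteFiles: feed each delegate's
    // output into the next one; only files that survive every delegate are
    // actually deleted.
    static List<String> deletableAfterChain(List<String> candidates,
            List<UnaryOperator<List<String>>> delegates) {
        List<String> deletable = candidates;
        for (UnaryOperator<List<String>> delegate : delegates) {
            // Each delegate can only shrink the set, never grow it.
            deletable = delegate.apply(deletable);
        }
        return deletable;
    }
}
```

With this structure, a TimeToLiveLogCleaner that approves a file is irrelevant if a later ReplicationLogCleaner unconditionally returns an empty list: the intersection is empty and nothing is deleted, which matches the observed buildup of oldwal files.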
[PR] Add a check for getQueueStorage().hasData() in the getDeletableFiles method of ReplicationLogCleaner [hbase]
hiping-tech opened a new pull request, #5536: URL: https://github.com/apache/hbase/pull/5536 (no comment)
[jira] [Commented] (HBASE-28210) There could be holes in stack ids when loading procedures
[ https://issues.apache.org/jira/browse/HBASE-28210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788981#comment-17788981 ]

Hudson commented on HBASE-28210:

Results for branch branch-2.4 [build #655 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/655/]: (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/655/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/655/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/655/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/655/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> There could be holes in stack ids when loading procedures
> -
>
> Key: HBASE-28210
> URL: https://issues.apache.org/jira/browse/HBASE-28210
> Project: HBase
> Issue Type: Bug
> Components: master, proc-v2
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Critical
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> Found this when implementing HBASE-28199, as after HBASE-28199 we will suspend procedures a lot, so a missed scenario has been covered and it will fail some UTs with corrupted procedures when loading.
> I think this issue should be fixed separately as it affects all active branches.
> Let me try to implement a UT first.
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache9 commented on code in PR #5534: URL: https://github.com/apache/hbase/pull/5534#discussion_r1402891739

## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ReopenTableRegionsProcedure.java:

@@ -61,20 +69,36 @@ public class ReopenTableRegionsProcedure

   private List<HRegionLocation> regions = Collections.emptyList();

+  private List<HRegionLocation> currentRegionBatch = Collections.emptyList();
+
   private RetryCounter retryCounter;

+  private final long reopenBatchBackoffMillis;

Review Comment: Procedures should not have final fields that must be initialized from input. When loading the procedure from the procedure store, we can only initialize them when calling the deserialize method...

## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ReopenTableRegionsProcedure.java:

@@ -139,33 +170,57 @@ protected Flow executeFromState(MasterProcedureEnv env, ReopenTableRegionsState
       case REOPEN_TABLE_REGIONS_CONFIRM_REOPENED:
         regions = regions.stream().map(env.getAssignmentManager().getRegionStates()::checkReopened)
           .filter(l -> l != null).collect(Collectors.toList());
-        if (regions.isEmpty()) {
-          return Flow.NO_MORE_STATE;
+        // we need to create a set of region names because the HRegionLocation hashcode is only
+        // based on the server name
+        Set currentRegionBatchNames = currentRegionBatch.stream()
+          .map(r -> r.getRegion().getRegionName()).collect(Collectors.toSet());
+        currentRegionBatch = regions.stream()
+          .filter(r -> currentRegionBatchNames.contains(r.getRegion().getRegionName()))
+          .collect(Collectors.toList());
+        if (currentRegionBatch.isEmpty()) {
+          if (regions.isEmpty()) {
+            return Flow.NO_MORE_STATE;
+          } else {
+            setNextState(ReopenTableRegionsState.REOPEN_TABLE_REGIONS_REOPEN_REGIONS);
+            if (reopenBatchBackoffMillis > 0) {
+              backoff(reopenBatchBackoffMillis);
+            }
+            return Flow.HAS_MORE_STATE;
+          }
         }
-        if (regions.stream().anyMatch(loc -> canSchedule(env, loc))) {
+        if (currentRegionBatch.stream().anyMatch(loc -> canSchedule(env, loc))) {
           retryCounter = null;
           setNextState(ReopenTableRegionsState.REOPEN_TABLE_REGIONS_REOPEN_REGIONS);
+          if (reopenBatchBackoffMillis > 0) {
+            backoff(reopenBatchBackoffMillis);
+          }
           return Flow.HAS_MORE_STATE;
         }
         // We can not schedule TRSP for all the regions need to reopen, wait for a while and retry
         // again.
         if (retryCounter == null) {
           retryCounter = ProcedureUtil.createRetryCounter(env.getMasterConfiguration());
         }
-        long backoff = retryCounter.getBackoffTimeAndIncrementAttempts();
+        long backoffMillis = retryCounter.getBackoffTimeAndIncrementAttempts();
         LOG.info(
-          "There are still {} region(s) which need to be reopened for table {} are in "
+          "There are still {} region(s) which need to be reopened for table {}. {} are in "
            + "OPENING state, suspend {}secs and try again later",
-          regions.size(), tableName, backoff / 1000);
-        setTimeout(Math.toIntExact(backoff));
-        setState(ProcedureProtos.ProcedureState.WAITING_TIMEOUT);
-        skipPersistence();
+          regions.size(), tableName, currentRegionBatch.size(), backoffMillis / 1000);
+        backoff(backoffMillis);
         throw new ProcedureSuspendedException();
       default:
         throw new UnsupportedOperationException("unhandled state=" + state);
     }
   }

+  private void backoff(long millis) throws ProcedureSuspendedException {
+    setTimeout(Math.toIntExact(millis));
+    setState(ProcedureProtos.ProcedureState.WAITING_TIMEOUT);
+    skipPersistence();

Review Comment: I think it is OK to skip persistence here, as we do not persist the reopening of a region in procedure state; IIRC we use the openSeqNum to determine whether the region has already been reopened. But the problem here is that the old logic is for error retrying, where we can not schedule TRSPs for some regions, while here we are doing throttling, so I do not think we should keep increasing the retry count and the retry interval while scheduling TRSPs...
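The review above distinguishes two kinds of waiting: a fixed, configured pause between successfully scheduled batches (throttling) and an exponentially growing pause when TRSPs cannot be scheduled (error retry). A minimal sketch of the two policies; the `RetryCounter` here is a simplified stand-in for the one ProcedureUtil creates, not the real HBase class.

```java
public class BackoffSketch {
    // Simplified stand-in for ProcedureUtil.createRetryCounter(...): the
    // backoff doubles with every failed attempt.
    static class RetryCounter {
        private final long baseMillis;
        private int attempts;

        RetryCounter(long baseMillis) {
            this.baseMillis = baseMillis;
        }

        long getBackoffTimeAndIncrementAttempts() {
            // 1x, 2x, 4x, ... of the base interval.
            return baseMillis << attempts++;
        }
    }

    // Throttling between batches: the pause is a constant taken from
    // configuration, so batch N+1 is not penalized just because N batches
    // already ran.
    static long batchPause(long reopenBatchBackoffMillis) {
        return reopenBatchBackoffMillis;
    }
}
```

Mixing the two, as the reviewer warns, would make a healthy procedure wait longer and longer between batches simply because it has many batches, even though nothing has failed.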
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823753257

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 33s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 18s | master passed |
| +1 :green_heart: | compile | 0m 39s | master passed |
| +1 :green_heart: | shadedjars | 4m 42s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 19s | the patch passed |
| +1 :green_heart: | compile | 0m 41s | the patch passed |
| +1 :green_heart: | javac | 0m 41s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 43s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 234m 1s | hbase-server in the patch failed. |
| | | | 254m 58s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5534 |
| JIRA Issue | HBASE-28215 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 294530f94d14 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Temurin-1.8.0_352-b08 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/testReport/ |
| Max. process+thread count | 4490 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823752911

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 59s | master passed |
| +1 :green_heart: | compile | 0m 48s | master passed |
| +1 :green_heart: | shadedjars | 4m 58s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 47s | the patch passed |
| +1 :green_heart: | compile | 0m 50s | the patch passed |
| +1 :green_heart: | javac | 0m 50s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 0s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 231m 27s | hbase-server in the patch passed. |
| | | | 254m 29s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5534 |
| JIRA Issue | HBASE-28215 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 55123bf1d3fa 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/testReport/ |
| Max. process+thread count | 4969 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HBASE-28218) oldwal files cannot be cleaned up.
[ https://issues.apache.org/jira/browse/HBASE-28218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiping lv updated HBASE-28218:
---
Description:

There are a large number of oldwal files in the oldwal directory, and although the default value of hbase.master.logcleaner.ttl is 10 minutes, observation shows that no oldwal files are ever cleared.

Analysis of the source code shows the problem is in the following logic: when LogCleaner executes its checkAndDeleteFiles method, it calls the getDeletableFiles method of ReplicationLogCleaner. If canFilter is false, that method directly returns Collections.emptyList(), which filters back out the WAL files that TimeToLiveLogCleaner had already marked as deletable. As a result, the files under oldwal can never be cleaned up.

> oldwal files cannot be cleaned up.
> --
>
> Key: HBASE-28218
> URL: https://issues.apache.org/jira/browse/HBASE-28218
> Project: HBase
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Haiping lv
> Assignee: Haiping lv
> Priority: Major
>
> There are a large number of oldwal files in the oldwal directory, and although the default value of hbase.master.logcleaner.ttl is 10 minutes, observation shows that no oldwal files are ever cleared.
> Analysis of the source code shows the problem is in the following logic: when LogCleaner executes its checkAndDeleteFiles method, it calls the getDeletableFiles method of ReplicationLogCleaner. If canFilter is false, that method directly returns Collections.emptyList(), which filters back out the WAL files that TimeToLiveLogCleaner had already marked as deletable. As a result, the files under oldwal can never be cleaned up.
[jira] [Assigned] (HBASE-28218) oldwal files cannot be cleaned up.
[ https://issues.apache.org/jira/browse/HBASE-28218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haiping lv reassigned HBASE-28218:
--
Assignee: Haiping lv

> oldwal files cannot be cleaned up.
> --
>
> Key: HBASE-28218
> URL: https://issues.apache.org/jira/browse/HBASE-28218
> Project: HBase
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Haiping lv
> Assignee: Haiping lv
> Priority: Major
[jira] [Created] (HBASE-28218) oldwal files cannot be cleaned up.
Haiping lv created HBASE-28218:
--
Summary: oldwal files cannot be cleaned up.
Key: HBASE-28218
URL: https://issues.apache.org/jira/browse/HBASE-28218
Project: HBase
Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Haiping lv
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
bbeaudreault commented on code in PR #5534: URL: https://github.com/apache/hbase/pull/5534#discussion_r1402830447

## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ReopenTableRegionsProcedure.java:

@@ -139,33 +170,57 @@ protected Flow executeFromState(MasterProcedureEnv env, ReopenTableRegionsState
       case REOPEN_TABLE_REGIONS_CONFIRM_REOPENED:
         regions = regions.stream().map(env.getAssignmentManager().getRegionStates()::checkReopened)
           .filter(l -> l != null).collect(Collectors.toList());
-        if (regions.isEmpty()) {
-          return Flow.NO_MORE_STATE;
+        // we need to create a set of region names because the HRegionLocation hashcode is only
+        // based on the server name
+        Set currentRegionBatchNames = currentRegionBatch.stream()
+          .map(r -> r.getRegion().getRegionName()).collect(Collectors.toSet());
+        currentRegionBatch = regions.stream()
+          .filter(r -> currentRegionBatchNames.contains(r.getRegion().getRegionName()))
+          .collect(Collectors.toList());
+        if (currentRegionBatch.isEmpty()) {
+          if (regions.isEmpty()) {
+            return Flow.NO_MORE_STATE;
+          } else {
+            setNextState(ReopenTableRegionsState.REOPEN_TABLE_REGIONS_REOPEN_REGIONS);
+            if (reopenBatchBackoffMillis > 0) {
+              backoff(reopenBatchBackoffMillis);
+            }
+            return Flow.HAS_MORE_STATE;
+          }
         }
-        if (regions.stream().anyMatch(loc -> canSchedule(env, loc))) {
+        if (currentRegionBatch.stream().anyMatch(loc -> canSchedule(env, loc))) {
           retryCounter = null;
           setNextState(ReopenTableRegionsState.REOPEN_TABLE_REGIONS_REOPEN_REGIONS);
+          if (reopenBatchBackoffMillis > 0) {
+            backoff(reopenBatchBackoffMillis);
+          }
           return Flow.HAS_MORE_STATE;
         }
         // We can not schedule TRSP for all the regions need to reopen, wait for a while and retry
         // again.
         if (retryCounter == null) {
           retryCounter = ProcedureUtil.createRetryCounter(env.getMasterConfiguration());
         }
-        long backoff = retryCounter.getBackoffTimeAndIncrementAttempts();
+        long backoffMillis = retryCounter.getBackoffTimeAndIncrementAttempts();
         LOG.info(
-          "There are still {} region(s) which need to be reopened for table {} are in "
+          "There are still {} region(s) which need to be reopened for table {}. {} are in "
            + "OPENING state, suspend {}secs and try again later",
-          regions.size(), tableName, backoff / 1000);
-        setTimeout(Math.toIntExact(backoff));
-        setState(ProcedureProtos.ProcedureState.WAITING_TIMEOUT);
-        skipPersistence();
+          regions.size(), tableName, currentRegionBatch.size(), backoffMillis / 1000);
+        backoff(backoffMillis);
         throw new ProcedureSuspendedException();
       default:
         throw new UnsupportedOperationException("unhandled state=" + state);
     }
   }

+  private void backoff(long millis) throws ProcedureSuspendedException {
+    setTimeout(Math.toIntExact(millis));
+    setState(ProcedureProtos.ProcedureState.WAITING_TIMEOUT);
+    skipPersistence();

Review Comment: This might need more research. I know other examples skip persistence, but in those cases no state has changed. In this case we may need persistence so that we save the updated list of regions that still need to be reopened.
Re: [PR] HBASE-27795: Define RPC API for cache cleaning [hbase]
Apache-HBase commented on PR #5492: URL: https://github.com/apache/hbase/pull/5492#issuecomment-1823630325

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 14s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 44s | master passed |
| +1 :green_heart: | compile | 3m 7s | master passed |
| +1 :green_heart: | shadedjars | 7m 16s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 33s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 27s | the patch passed |
| +1 :green_heart: | compile | 3m 13s | the patch passed |
| +1 :green_heart: | javac | 3m 13s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 41s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 55s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 46s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 57s | hbase-client in the patch passed. |
| -1 :x: | unit | 266m 11s | hbase-server in the patch failed. |
| +1 :green_heart: | unit | 7m 2s | hbase-thrift in the patch passed. |
| | | | 313m 32s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5492 |
| JIRA Issue | HBASE-27795 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 7c935a735d1f 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Temurin-1.8.0_352-b08 |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/testReport/ |
| Max. process+thread count | 4668 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Resolved] (HBASE-27769) Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for non-HDFS file system
[ https://issues.apache.org/jira/browse/HBASE-27769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HBASE-27769.
-
Resolution: Fixed

Pushed to branch HBASE-27740. Thanks [~taklwu] and [~wchevreuil]!

> Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for non-HDFS file system
> -
>
> Key: HBASE-27769
> URL: https://issues.apache.org/jira/browse/HBASE-27769
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha-3, 2.4.16, 2.5.3
> Reporter: Tak-Lon (Stephen) Wu
> Assignee: Tak-Lon (Stephen) Wu
> Priority: Major
> Fix For: HBASE-27740
>
> After HADOOP-18671, we will change hbase-asyncfs to use hasPathCapability to support recoverLease, setSafeMode, and isFileClosed for non-HDFS file systems, instead of casting only to HDFS in RecoverLeaseFSUtils.
[jira] [Updated] (HBASE-27769) Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for non-HDFS file system
[ https://issues.apache.org/jira/browse/HBASE-27769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HBASE-27769:
-
Fix Version/s: HBASE-27740

> Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for non-HDFS file system
> -
>
> Key: HBASE-27769
> URL: https://issues.apache.org/jira/browse/HBASE-27769
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha-3, 2.4.16, 2.5.3
> Reporter: Tak-Lon (Stephen) Wu
> Assignee: Tak-Lon (Stephen) Wu
> Priority: Major
> Fix For: HBASE-27740
>
> After HADOOP-18671, we will change hbase-asyncfs to use hasPathCapability to support recoverLease, setSafeMode, and isFileClosed for non-HDFS file systems, instead of casting only to HDFS in RecoverLeaseFSUtils.
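The change this issue describes replaces an HDFS-only `instanceof` check with a capability probe, so any FileSystem implementation that declares lease recovery can be used. Below is a self-contained sketch of the pattern; the interface and the capability string are illustrative stand-ins for the Hadoop API introduced by HADOOP-18671, not the exact Hadoop signatures.

```java
public class CapabilityProbeSketch {
    // Hypothetical stand-in for org.apache.hadoop.fs.FileSystem's
    // path-capability probe.
    interface FileSystemLike {
        boolean hasPathCapability(String path, String capability);
    }

    // Before: only HDFS qualified, via `instanceof DistributedFileSystem`.
    // After: any file system advertising the capability qualifies, which is
    // what lets non-HDFS stores participate in WAL lease recovery.
    static boolean supportsLeaseRecovery(FileSystemLike fs, String path) {
        return fs.hasPathCapability(path, "fs.capability.lease.recoverable");
    }
}
```

The same probe-by-name shape applies to the setSafeMode and isFileClosed operations mentioned in the issue, each guarded by its own capability key.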
Re: [PR] HBASE-27769 use LeaseRecoverable and SafeMode introduced in hadoop-co… [hbase]
jojochuang merged PR #5469: URL: https://github.com/apache/hbase/pull/5469 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
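[Editor's note] The capability-based probe described in HBASE-27769 above can be sketched as follows. This is an illustrative, self-contained stand-in: the interface and capability-string names mirror Hadoop's `LeaseRecoverable` and `FileSystem.hasPathCapability` introduced by HADOOP-18671, but the types defined here are toy substitutes, not the real Hadoop API, and the capability string is an assumption for illustration.

```java
// Sketch of probing a path capability instead of casting to a concrete HDFS
// class. The interfaces below are toy stand-ins for Hadoop's LeaseRecoverable
// and FileSystem; the capability string is illustrative.
public class LeaseRecoverySketch {

  /** Stand-in for the lease-recovery interface HADOOP-18671 introduces. */
  interface LeaseRecoverable {
    boolean recoverLease(String path);
  }

  /** Minimal stand-in for a FileSystem that reports path capabilities. */
  interface FileSystem {
    boolean hasPathCapability(String path, String capability);
  }

  /** A toy file system that supports lease recovery and advertises it. */
  static class RecoverableFs implements FileSystem, LeaseRecoverable {
    public boolean hasPathCapability(String path, String capability) {
      return "fs.capability.lease.recoverable".equals(capability);
    }
    public boolean recoverLease(String path) {
      return true; // pretend the lease was recovered
    }
  }

  /**
   * Probe the capability, then cast to the small interface. This replaces an
   * instanceof check against a concrete HDFS class, so any file system that
   * implements the interface can opt in.
   */
  static String recoverLeaseIfSupported(FileSystem fs, String path) {
    if (fs.hasPathCapability(path, "fs.capability.lease.recoverable")
        && fs instanceof LeaseRecoverable) {
      return ((LeaseRecoverable) fs).recoverLease(path) ? "recovered" : "failed";
    }
    return "not-supported"; // nothing to recover on lease-less file systems
  }
}
```

The point of the pattern is that a non-HDFS store (for example an object-store connector) can advertise the capability and implement the interface without the caller ever naming an HDFS class.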
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823599677 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 23s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 41s | master passed | | +1 :green_heart: | compile | 2m 23s | master passed | | +1 :green_heart: | checkstyle | 0m 33s | master passed | | +1 :green_heart: | spotless | 0m 40s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 23s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 34s | the patch passed | | +1 :green_heart: | compile | 2m 22s | the patch passed | | +1 :green_heart: | javac | 2m 22s | the patch passed | | +1 :green_heart: | checkstyle | 0m 34s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 9m 35s | Patch does not cause any errors with Hadoop 3.2.4 3.3.6. | | +1 :green_heart: | spotless | 0m 39s | patch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 31s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 8s | The patch does not generate ASF License warnings. 
| | | | 31m 28s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5534 | | JIRA Issue | HBASE-28215 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux 43119977ac61 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | Max. process+thread count | 79 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/3/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-27795: Define RPC API for cache cleaning [hbase]
Apache-HBase commented on PR #5492: URL: https://github.com/apache/hbase/pull/5492#issuecomment-1823596945 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 2s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 56s | master passed | | +1 :green_heart: | compile | 1m 58s | master passed | | +1 :green_heart: | shadedjars | 5m 19s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 14s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 40s | the patch passed | | +1 :green_heart: | compile | 1m 58s | the patch passed | | +1 :green_heart: | javac | 1m 58s | the patch passed | | +1 :green_heart: | shadedjars | 5m 17s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 33s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 34s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 219m 19s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 6m 43s | hbase-thrift in the patch passed. 
| | | | 257m 40s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5492 | | JIRA Issue | HBASE-27795 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 2e9bd0be359a 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/testReport/ | | Max. process+thread count | 4713 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823575566 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 32s | master passed | | +1 :green_heart: | compile | 1m 7s | master passed | | +1 :green_heart: | shadedjars | 7m 43s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 4s | the patch passed | | +1 :green_heart: | compile | 1m 7s | the patch passed | | +1 :green_heart: | javac | 1m 7s | the patch passed | | +1 :green_heart: | shadedjars | 6m 59s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 34s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 319m 0s | hbase-server in the patch failed. 
| | | | 350m 24s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5534 | | JIRA Issue | HBASE-28215 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 43de6d9659e6 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/testReport/ | | Max. process+thread count | 4386 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28217 PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE [hbase]
Apache-HBase commented on PR #5535: URL: https://github.com/apache/hbase/pull/5535#issuecomment-1823573700 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 26s | master passed | | +1 :green_heart: | compile | 0m 53s | master passed | | +1 :green_heart: | shadedjars | 5m 10s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 37s | the patch passed | | +1 :green_heart: | compile | 0m 44s | the patch passed | | +1 :green_heart: | javac | 0m 44s | the patch passed | | +1 :green_heart: | shadedjars | 5m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 233m 23s | hbase-server in the patch passed. | | | | 256m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5535 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3a16726896f3 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/testReport/ | | Max. 
process+thread count | 4847 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1823568590 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 34s | master passed | | +1 :green_heart: | compile | 0m 42s | master passed | | +1 :green_heart: | shadedjars | 4m 48s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 25s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 21s | the patch passed | | +1 :green_heart: | compile | 0m 39s | the patch passed | | +1 :green_heart: | javac | 0m 39s | the patch passed | | +1 :green_heart: | shadedjars | 5m 30s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 34s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 333m 35s | hbase-server in the patch failed. 
| | | | 356m 31s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5530 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 007399482ad2 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Temurin-1.8.0_352-b08 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/testReport/ | | Max. process+thread count | 4377 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28217 PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE [hbase]
Apache-HBase commented on PR #5535: URL: https://github.com/apache/hbase/pull/5535#issuecomment-1823565911 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 24s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 39s | master passed | | +1 :green_heart: | compile | 0m 37s | master passed | | +1 :green_heart: | shadedjars | 5m 14s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 18s | the patch passed | | +1 :green_heart: | compile | 0m 36s | the patch passed | | +1 :green_heart: | javac | 0m 36s | the patch passed | | +1 :green_heart: | shadedjars | 5m 11s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 225m 2s | hbase-server in the patch passed. | | | | 246m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5535 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3b6562c18ccc 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/testReport/ | | Max. 
process+thread count | 4531 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1823495173 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 18s | master passed | | +1 :green_heart: | compile | 0m 54s | master passed | | +1 :green_heart: | shadedjars | 5m 26s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 28s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 3s | the patch passed | | +1 :green_heart: | compile | 1m 0s | the patch passed | | +1 :green_heart: | javac | 1m 0s | the patch passed | | +1 :green_heart: | shadedjars | 5m 39s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 25s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 263m 40s | hbase-server in the patch failed. 
| | | | 288m 42s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5530 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 814d075fd5a8 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/testReport/ | | Max. process+thread count | 4730 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
rmdmattingly commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823489765 I believe that test failure `TestPrefetchPersistence.testPrefetchPersistence:142->closeStoreFile:148` is unrelated to this changeset -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823483032 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 34s | master passed | | +1 :green_heart: | compile | 0m 39s | master passed | | +1 :green_heart: | shadedjars | 4m 44s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 25s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 20s | the patch passed | | +1 :green_heart: | compile | 0m 41s | the patch passed | | +1 :green_heart: | javac | 0m 41s | the patch passed | | +1 :green_heart: | shadedjars | 4m 41s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 25s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 239m 7s | hbase-server in the patch failed. 
| | | | 260m 30s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5534 | | JIRA Issue | HBASE-28215 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 027afe8220d2 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Temurin-1.8.0_352-b08 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/testReport/ | | Max. process+thread count | 4455 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28210) There could be holes in stack ids when loading procedures
[ https://issues.apache.org/jira/browse/HBASE-28210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1777#comment-1777 ] Hudson commented on HBASE-28210: Results for branch master [build #950 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > There could be holes in stack ids when loading procedures > - > > Key: HBASE-28210 > URL: https://issues.apache.org/jira/browse/HBASE-28210 > Project: HBase > Issue Type: Bug > Components: master, proc-v2 >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7 > > > Found this when implementing HBASE-28199, as after HBASE-28199 we will > suspend procedures a lot, so a missed scenario has been covered and it will > fail some UTs with corrupted procedures when loading. > I think this issue should be fixed separately as it affects all active > branches. > Let me try to implement a UT first. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27999) Implement cache aware load balancer
[ https://issues.apache.org/jira/browse/HBASE-27999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1776#comment-1776 ] Hudson commented on HBASE-27999: Results for branch master [build #950 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/950/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Implement cache aware load balancer > --- > > Key: HBASE-27999 > URL: https://issues.apache.org/jira/browse/HBASE-27999 > Project: HBase > Issue Type: Sub-task > Components: Balancer >Reporter: Rahul Agarkar >Assignee: Rahul Agarkar >Priority: Major > Fix For: 3.0.0-beta-1, 4.0.0-alpha-1 > > > HBase uses ephemeral cache to cache the blocks by reading them from the slow > storages and storing them to the bucket cache. This cache is warmed up > everytime a region server is started. Depending on the data size and the > configured cache size, the cache warm up can take anywhere between a few > minutes to few hours. Doing this everytime the region server starts can be a > very expensive process. To eliminate this, HBASE-27313 implemented the cache > persistence feature where the region servers periodically persist the blocks > cached in the bucket cache. 
This persisted information is then used to > resurrect the cache in the event of a region server restart because of normal > restart or crash. > This feature aims to enhance that capability so that the balancer > implementation considers the cache allocation of each region on region > servers when calculating a new assignment plan. It uses the region/region > server cache allocation info reported by region servers to calculate the > percentage of HFiles cached for each region on the hosting server, and then > uses that as another factor when deciding on an optimal, new assignment plan. > > A design document describing the balancer can be found at > https://docs.google.com/document/d/1A8-eVeRhZjwL0hzFw9wmXl8cGP4BFomSlohX2QcaFg4/edit?usp=sharing -- This message was sent by Atlassian Jira (v8.20.10#820010)
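[Editor's note] The "percentage of HFiles cached per region as a balancer cost factor" idea from HBASE-27999 above can be sketched as a simple cost function. This is not the actual HBase balancer code; the method and map shapes are hypothetical illustrations of the idea under the assumption that region servers report, per region, the fraction of that region's HFiles they hold in cache.

```java
// Illustrative cache-awareness cost for a candidate assignment plan: 0 when
// every region is fully cached on its assigned server, 1 when nothing is.
// Map shapes are assumptions for the sketch, not HBase's real data model.
import java.util.Map;

public class CacheCostSketch {

  /**
   * assignment: region -> server it would be placed on under the plan.
   * cacheRatios: region -> (server -> fraction of the region's HFiles cached
   * on that server), as reported by the region servers.
   */
  static double cacheCost(Map<String, String> assignment,
                          Map<String, Map<String, Double>> cacheRatios) {
    if (assignment.isEmpty()) {
      return 0.0;
    }
    double cached = 0.0;
    for (Map.Entry<String, String> e : assignment.entrySet()) {
      // Fraction of this region's HFiles already cached on its target server.
      cached += cacheRatios
        .getOrDefault(e.getKey(), Map.of())
        .getOrDefault(e.getValue(), 0.0);
    }
    // Lower cost means warmer caches under this plan.
    return 1.0 - cached / assignment.size();
  }
}
```

A cost-based balancer would weigh a term like this alongside its existing factors (region count skew, locality, and so on), so plans that keep regions near their warm caches score better.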
Re: [PR] HBASE-27795: Define RPC API for cache cleaning [hbase]
Apache-HBase commented on PR #5492: URL: https://github.com/apache/hbase/pull/5492#issuecomment-1823361795 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 41s | master passed | | +1 :green_heart: | compile | 5m 9s | master passed | | +1 :green_heart: | checkstyle | 1m 31s | master passed | | +1 :green_heart: | spotless | 0m 46s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 6m 1s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 40s | the patch passed | | +1 :green_heart: | compile | 4m 17s | the patch passed | | +1 :green_heart: | cc | 4m 17s | the patch passed | | +1 :green_heart: | javac | 4m 17s | the patch passed | | -0 :warning: | checkstyle | 0m 32s | hbase-server: The patch generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 9m 58s | Patch does not cause any errors with Hadoop 3.2.4 3.3.6. | | +1 :green_heart: | hbaseprotoc | 1m 30s | the patch passed | | +1 :green_heart: | spotless | 0m 39s | patch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 5m 43s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. 
| | | | 51m 35s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5492 | | JIRA Issue | HBASE-27795 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile cc hbaseprotoc prototool | | uname | Linux a7d76798035f 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 81 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5492/5/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28210) There could be holes in stack ids when loading procedures
[ https://issues.apache.org/jira/browse/HBASE-28210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788861#comment-17788861 ]

Hudson commented on HBASE-28210:
--------------------------------

Results for branch branch-3 [build #89 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/89/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/89/General_20Nightly_20Build_20Report/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/89/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> There could be holes in stack ids when loading procedures
> ---------------------------------------------------------
>
>                 Key: HBASE-28210
>                 URL: https://issues.apache.org/jira/browse/HBASE-28210
>             Project: HBase
>          Issue Type: Bug
>          Components: master, proc-v2
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Critical
>             Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> Found this when implementing HBASE-28199, as after HBASE-28199 we will suspend procedures a lot, so a missed scenario has been covered and it will fail some UTs with corrupted procedures when loading.
> I think this issue should be fixed separately as it affects all active branches.
> Let me try to implement a UT first.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (HBASE-28210) There could be holes in stack ids when loading procedures
[ https://issues.apache.org/jira/browse/HBASE-28210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788859#comment-17788859 ]

Hudson commented on HBASE-28210:
--------------------------------

Results for branch branch-2.5 [build #439 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/439/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/439/General_20Nightly_20Build_20Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/439/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/439/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/439/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> There could be holes in stack ids when loading procedures
> ---------------------------------------------------------
>
>                 Key: HBASE-28210
>                 URL: https://issues.apache.org/jira/browse/HBASE-28210
>             Project: HBase
>          Issue Type: Bug
>          Components: master, proc-v2
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Critical
>             Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
> Found this when implementing HBASE-28199, as after HBASE-28199 we will suspend procedures a lot, so a missed scenario has been covered and it will fail some UTs with corrupted procedures when loading.
> I think this issue should be fixed separately as it affects all active branches.
> Let me try to implement a UT first.
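The invariant behind this bug can be illustrated with a small self-contained sketch: when procedures are loaded, the stack ids recorded across a root procedure and its children are expected to form a contiguous range starting at 0, and a hole in that range makes the loader treat the procedure tree as corrupted. This is a toy model of the check only; the class and method names below are made up for illustration and are not the actual ProcedureStore code.

```java
import java.util.Collection;
import java.util.SortedSet;
import java.util.TreeSet;

// Toy model: does the union of stack ids across a procedure tree have holes?
class StackIdCheck {
  static boolean hasHoles(Collection<int[]> procStackIds) {
    SortedSet<Integer> seen = new TreeSet<>();
    for (int[] ids : procStackIds) {
      for (int id : ids) {
        seen.add(id);
      }
    }
    if (seen.isEmpty()) {
      return false; // nothing loaded, nothing to validate
    }
    // contiguous 0..max  <=>  smallest id is 0 and count equals max+1
    return seen.first() != 0 || seen.size() != seen.last() + 1;
  }
}
```

Under this model, ids {0,1} and {2} across two procedures are fine, while {0,1} and {3} would be flagged as corrupted on load.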
[jira] [Comment Edited] (HBASE-28216) HDFS erasure coding support for table data dirs
[ https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788857#comment-17788857 ]

Wei-Chiu Chuang edited comment on HBASE-28216 at 11/22/23 6:35 PM:
-------------------------------------------------------------------

No that's fine. We're pursuing that in a separate branch HBASE-27740. (reminder to myself: finish review HBASE-27769 today)

I was under the impression that setErasureCodingPolicy requires HDFS admin user privilege. But checking the code again looks like it requires just the write privilege of that directory.

We recently added EC support in Apache Impala (check out Cloudera doc https://docs.cloudera.com/cdw-runtime/1.5.1/impala-reference/topics/impala-ec-policies.html the doc talks about Ozone EC but it works the same way for HDFS EC) IMPALA-11476 but we did not add the support for Impala to update table EC properties.

It would be interesting to start thinking about giving applications more control over EC policies.

was (Author: jojochuang):
No that's fine. We're pursuing that in a separate branch HBASE-27740. (reminder to myself: finish review HBASE-27769 today)

I was under the impression that setErasureCodingPolicy requires HDFS admin user privilege. But checking the code again looks like it requires just the write privilege of that directory.

We recently added EC support in Apache Impala (check out Cloudera doc https://docs.cloudera.com/cdw-runtime/1.5.1/impala-reference/topics/impala-ec-policies.html the doc talks about Ozone EC but it works the same way for HDFS EC) but we did not add the support for Impala to update table EC properties.

> HDFS erasure coding support for table data dirs
> -----------------------------------------------
>
>                 Key: HBASE-28216
>                 URL: https://issues.apache.org/jira/browse/HBASE-28216
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Bryan Beaudreault
>            Priority: Major
>
> [Erasure coding|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html] (EC) is a hadoop-3 feature which can drastically reduce storage requirements, at the expense of locality. At my company we have a few hbase clusters which are extremely data dense and take mostly write traffic, fewer reads (cold data). We'd like to reduce the cost of these clusters, and EC is a great way to do that since it can reduce replication related storage costs by 50%.
> It's possible to enable EC policies on sub directories of HDFS. One can manually set this with {{{}hdfs ec -setPolicy -path /hbase/data/default/usertable -policy {}}}. This can work without any hbase support.
> One problem with that is a lack of visibility by operators into which tables might have EC enabled. I think this is where HBase can help. Here's my proposal:
> * Add a new TableDescriptor and ColumnDescriptor field ERASURE_CODING_POLICY
> * In ModifyTableProcedure preflightChecks, if ERASURE_CODING_POLICY is set, verify that the requested policy is available and enabled via DistributedFileSystem.getErasureCodingPolicies().
> * During ModifyTableProcedure, add a new state for MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY.
> ** When adding or changing a policy, use DistributedFileSystem.setErasureCodingPolicy to sync it for the data and archive dir of that table (or column in table)
> ** When removing the property or setting it to empty, use DistributedFileSystem.unsetErasureCodingPolicy to remove it from the data and archive dir.
> Since this new API is in hadoop-3 only, we'll need to add a reflection wrapper class for managing the calls and verifying that the API is available. We'll similarly do that API check in preflightChecks.
Re: [PR] HBASE-28217 PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE [hbase]
Apache-HBase commented on PR #5535: URL: https://github.com/apache/hbase/pull/5535#issuecomment-1823273522

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 35s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 57s | master passed |
| +1 :green_heart: | compile | 2m 29s | master passed |
| +1 :green_heart: | checkstyle | 0m 37s | master passed |
| +1 :green_heart: | spotless | 0m 43s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 31s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 42s | the patch passed |
| +1 :green_heart: | compile | 2m 27s | the patch passed |
| +1 :green_heart: | javac | 2m 27s | the patch passed |
| -0 :warning: | checkstyle | 0m 35s | hbase-server: The patch generated 7 new + 17 unchanged - 0 fixed = 24 total (was 17) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 9m 24s | Patch does not cause any errors with Hadoop 3.2.4 3.3.6. |
| +1 :green_heart: | spotless | 0m 41s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 36s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. |
| | | | 32m 23s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5535 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 90128d39fdb4 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 78 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5535/1/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-28216) HDFS erasure coding support for table data dirs
[ https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788857#comment-17788857 ]

Wei-Chiu Chuang commented on HBASE-28216:
-----------------------------------------

No that's fine. We're pursuing that in a separate branch HBASE-27740. (reminder to myself: finish review HBASE-27769 today)

I was under the impression that setErasureCodingPolicy requires HDFS admin user privilege. But checking the code again looks like it requires just the write privilege of that directory.

We recently added EC support in Apache Impala (check out Cloudera doc https://docs.cloudera.com/cdw-runtime/1.5.1/impala-reference/topics/impala-ec-policies.html the doc talks about Ozone EC but it works the same way for HDFS EC) but we did not add the support for Impala to update table EC properties.

> HDFS erasure coding support for table data dirs
> -----------------------------------------------
>
>                 Key: HBASE-28216
>                 URL: https://issues.apache.org/jira/browse/HBASE-28216
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Bryan Beaudreault
>            Priority: Major
>
> [Erasure coding|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html] (EC) is a hadoop-3 feature which can drastically reduce storage requirements, at the expense of locality. At my company we have a few hbase clusters which are extremely data dense and take mostly write traffic, fewer reads (cold data). We'd like to reduce the cost of these clusters, and EC is a great way to do that since it can reduce replication related storage costs by 50%.
> It's possible to enable EC policies on sub directories of HDFS. One can manually set this with {{{}hdfs ec -setPolicy -path /hbase/data/default/usertable -policy {}}}. This can work without any hbase support.
> One problem with that is a lack of visibility by operators into which tables might have EC enabled. I think this is where HBase can help. Here's my proposal:
> * Add a new TableDescriptor and ColumnDescriptor field ERASURE_CODING_POLICY
> * In ModifyTableProcedure preflightChecks, if ERASURE_CODING_POLICY is set, verify that the requested policy is available and enabled via DistributedFileSystem.getErasureCodingPolicies().
> * During ModifyTableProcedure, add a new state for MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY.
> ** When adding or changing a policy, use DistributedFileSystem.setErasureCodingPolicy to sync it for the data and archive dir of that table (or column in table)
> ** When removing the property or setting it to empty, use DistributedFileSystem.unsetErasureCodingPolicy to remove it from the data and archive dir.
> Since this new API is in hadoop-3 only, we'll need to add a reflection wrapper class for managing the calls and verifying that the API is available. We'll similarly do that API check in preflightChecks.
[jira] [Commented] (HBASE-28216) HDFS erasure coding support for table data dirs
[ https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788856#comment-17788856 ]

Bryan Beaudreault commented on HBASE-28216:
-------------------------------------------

Thanks for the input! Do you think that is a blocker for this work, or could we clean it up once the Hadoop Common API is available?

{quote}It does not migrate existing files in the directory automatically
{quote}

Yea the good thing here is this is already how all other storefile related settings work for hbase – changes to BLOOMFILTER, BLOCKSIZE, COMPRESSION, etc all require the user to follow-up with a major compaction. So I think it's ok to follow the same protocol for this new ERASURE_CODING_POLICY.

> HDFS erasure coding support for table data dirs
> -----------------------------------------------
>
>                 Key: HBASE-28216
>                 URL: https://issues.apache.org/jira/browse/HBASE-28216
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Bryan Beaudreault
>            Priority: Major
>
> [Erasure coding|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html] (EC) is a hadoop-3 feature which can drastically reduce storage requirements, at the expense of locality. At my company we have a few hbase clusters which are extremely data dense and take mostly write traffic, fewer reads (cold data). We'd like to reduce the cost of these clusters, and EC is a great way to do that since it can reduce replication related storage costs by 50%.
> It's possible to enable EC policies on sub directories of HDFS. One can manually set this with {{{}hdfs ec -setPolicy -path /hbase/data/default/usertable -policy {}}}. This can work without any hbase support.
> One problem with that is a lack of visibility by operators into which tables might have EC enabled. I think this is where HBase can help. Here's my proposal:
> * Add a new TableDescriptor and ColumnDescriptor field ERASURE_CODING_POLICY
> * In ModifyTableProcedure preflightChecks, if ERASURE_CODING_POLICY is set, verify that the requested policy is available and enabled via DistributedFileSystem.getErasureCodingPolicies().
> * During ModifyTableProcedure, add a new state for MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY.
> ** When adding or changing a policy, use DistributedFileSystem.setErasureCodingPolicy to sync it for the data and archive dir of that table (or column in table)
> ** When removing the property or setting it to empty, use DistributedFileSystem.unsetErasureCodingPolicy to remove it from the data and archive dir.
> Since this new API is in hadoop-3 only, we'll need to add a reflection wrapper class for managing the calls and verifying that the API is available. We'll similarly do that API check in preflightChecks.
[jira] [Commented] (HBASE-28216) HDFS erasure coding support for table data dirs
[ https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788853#comment-17788853 ]

Wei-Chiu Chuang commented on HBASE-28216:
-----------------------------------------

Make sense to me. Although [~taklwu] and I have been trying to reduce the reliance on DistributedFileSystem. I suspect we want to expose EC related APIs to Hadoop Common, eventually.

Note:
{code}
hdfs ec -setPolicy -path /hbase/data/default/usertable -policy
{code}
The command affects new files in the directory. It does not migrate existing files in the directory automatically. (we were planning to support this but got stalled)

> HDFS erasure coding support for table data dirs
> -----------------------------------------------
>
>                 Key: HBASE-28216
>                 URL: https://issues.apache.org/jira/browse/HBASE-28216
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Bryan Beaudreault
>            Priority: Major
>
> [Erasure coding|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html] (EC) is a hadoop-3 feature which can drastically reduce storage requirements, at the expense of locality. At my company we have a few hbase clusters which are extremely data dense and take mostly write traffic, fewer reads (cold data). We'd like to reduce the cost of these clusters, and EC is a great way to do that since it can reduce replication related storage costs by 50%.
> It's possible to enable EC policies on sub directories of HDFS. One can manually set this with {{{}hdfs ec -setPolicy -path /hbase/data/default/usertable -policy {}}}. This can work without any hbase support.
> One problem with that is a lack of visibility by operators into which tables might have EC enabled. I think this is where HBase can help. Here's my proposal:
> * Add a new TableDescriptor and ColumnDescriptor field ERASURE_CODING_POLICY
> * In ModifyTableProcedure preflightChecks, if ERASURE_CODING_POLICY is set, verify that the requested policy is available and enabled via DistributedFileSystem.getErasureCodingPolicies().
> * During ModifyTableProcedure, add a new state for MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY.
> ** When adding or changing a policy, use DistributedFileSystem.setErasureCodingPolicy to sync it for the data and archive dir of that table (or column in table)
> ** When removing the property or setting it to empty, use DistributedFileSystem.unsetErasureCodingPolicy to remove it from the data and archive dir.
> Since this new API is in hadoop-3 only, we'll need to add a reflection wrapper class for managing the calls and verifying that the API is available. We'll similarly do that API check in preflightChecks.
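The "reflection wrapper" mentioned in the proposal above can be sketched as follows. The DistributedFileSystem class and method names are the real hadoop-3 HDFS ones, but the wrapper class itself is a hypothetical illustration of the pattern, not the code that eventually shipped: resolving the EC methods by name means the class still loads, and simply reports the API as unavailable, on a hadoop-2 (or Hadoop-free) classpath.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of a reflection wrapper for the hadoop-3-only EC API.
class ErasureCodingUtils {

  // DistributedFileSystem.setErasureCodingPolicy(Path, String), or null if absent
  static final Method SET_POLICY =
      lookup("setErasureCodingPolicy", "org.apache.hadoop.fs.Path", "java.lang.String");

  static Method lookup(String name, String... paramClassNames) {
    try {
      Class<?> dfs = Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem");
      Class<?>[] params = new Class<?>[paramClassNames.length];
      for (int i = 0; i < params.length; i++) {
        params[i] = Class.forName(paramClassNames[i]);
      }
      return dfs.getMethod(name, params);
    } catch (ReflectiveOperationException e) {
      return null; // hadoop-3 EC API (or Hadoop itself) not on the classpath
    }
  }

  /** The availability check that preflightChecks would perform. */
  static boolean available() {
    return SET_POLICY != null;
  }
}
```

When the API is available, the wrapper would invoke `SET_POLICY.invoke(fs, path, policyName)` during the MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY step, and `unsetErasureCodingPolicy(Path)` would be resolved the same way for the removal case.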
[jira] [Commented] (HBASE-25549) Provide a switch that allows avoiding reopening all regions when modifying a table to prevent RIT storms.
[ https://issues.apache.org/jira/browse/HBASE-25549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788849#comment-17788849 ]

Andrew Kyle Purtell commented on HBASE-25549:
---------------------------------------------

For our use cases we are also planning to let organic activity lazily reopen regions. Implying that the planned changes are suitable for mixed operation and delayed application. I think those that would use this feature would have similar goals. Reopening of regions other than by organic activity is not desirable.

> Provide a switch that allows avoiding reopening all regions when modifying a table to prevent RIT storms.
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-25549
>                 URL: https://issues.apache.org/jira/browse/HBASE-25549
>             Project: HBase
>          Issue Type: Improvement
>          Components: master, shell
>    Affects Versions: 3.0.0-alpha-1
>            Reporter: Zhuoyue Huang
>            Assignee: Zhuoyue Huang
>            Priority: Major
>             Fix For: 2.6.0, 3.0.0-beta-1, 2.5.7
>
> Under normal circumstances, modifying a table will cause all regions belonging to the table to enter RIT. Imagine the following two scenarios:
> # Someone entered the wrong configuration (e.g. negative 'hbase.busy.wait.multiplier.max' value) when altering the table, causing thousands of online regions to fail to open, leading to online accidents.
> # Modify the configuration of a table, but this modification is not urgent, the regions are not expected to enter RIT immediately.
> -'alter_lazy' is a new command to modify a table without reopening any online regions except those regions were assigned by other threads or split etc.-
>
> Provide an optional lazy_mode for the alter command to modify the TableDescriptor without the region entering the RIT. The modification will take effect when the region is reopened.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[PR] HBASE-28217 PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE [hbase]
wchevreuil opened a new pull request, #5535: URL: https://github.com/apache/hbase/pull/5535 (no comment)
[jira] [Created] (HBASE-28217) PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE
Wellington Chevreuil created HBASE-28217:
--------------------------------------------

             Summary: PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE
                 Key: HBASE-28217
                 URL: https://issues.apache.org/jira/browse/HBASE-28217
             Project: HBase
          Issue Type: Bug
            Reporter: Wellington Chevreuil
            Assignee: Wellington Chevreuil

HFilePreadReader relies on the return value of CacheConfig.shouldPrefetchOnOpen to decide whether it should run the PrefetchExecutor for the file. Currently, CacheConfig.shouldPrefetchOnOpen returns true if "hbase.rs.prefetchblocksonopen" is set to true in the config, OR PREFETCH_BLOCKS_ON_OPEN is set to true at the CF level. There's also CacheConfig.shouldCacheDataOnRead, which returns true only if both "hbase.block.data.cacheonread" is set to true in the config AND BLOCKCACHE is set to true at the CF level. If BLOCKCACHE is set to false at the CF level, HFilePreadReader will still run the PrefetchExecutor to read all the file's blocks from the FileSystem, only to then find out the blocks shouldn't be cached. I believe we should change CacheConfig.shouldPrefetchOnOpen to return true only if CacheConfig.shouldCacheDataOnRead is also true.
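The before/after decision logic described in the report reduces to a pair of boolean predicates. The following is a minimal self-contained model of that logic using plain Java stand-ins for the two site settings and two column-family flags; it is not the real CacheConfig class, just an illustration of the proposed change.

```java
// Simplified model of the prefetch-on-open decision described in HBASE-28217.
class PrefetchDecision {

  final boolean prefetchOnOpenConf;   // "hbase.rs.prefetchblocksonopen" (site config)
  final boolean prefetchOnOpenCf;     // PREFETCH_BLOCKS_ON_OPEN (column-family level)
  final boolean cacheDataOnReadConf;  // "hbase.block.data.cacheonread" (site config)
  final boolean blockCacheEnabledCf;  // BLOCKCACHE (column-family level)

  PrefetchDecision(boolean prefetchConf, boolean prefetchCf,
                   boolean cacheOnReadConf, boolean blockCacheCf) {
    this.prefetchOnOpenConf = prefetchConf;
    this.prefetchOnOpenCf = prefetchCf;
    this.cacheDataOnReadConf = cacheOnReadConf;
    this.blockCacheEnabledCf = blockCacheCf;
  }

  boolean shouldCacheDataOnRead() {
    // true only if BOTH the site config and the CF flag allow caching
    return cacheDataOnReadConf && blockCacheEnabledCf;
  }

  boolean shouldPrefetchOnOpenCurrent() {
    // current behaviour: site config OR CF flag
    return prefetchOnOpenConf || prefetchOnOpenCf;
  }

  boolean shouldPrefetchOnOpenProposed() {
    // proposed: only prefetch if the blocks could actually be cached
    return shouldPrefetchOnOpenCurrent() && shouldCacheDataOnRead();
  }
}
```

With PREFETCH_BLOCKS_ON_OPEN true but BLOCKCACHE false, the current predicate still triggers the PrefetchExecutor while the proposed one skips it, which is exactly the wasted-read case the report describes.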
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1823151498

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 28s | master passed |
| +1 :green_heart: | compile | 2m 53s | master passed |
| +1 :green_heart: | checkstyle | 0m 42s | master passed |
| +1 :green_heart: | spotless | 0m 45s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 27s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 42s | the patch passed |
| +1 :green_heart: | compile | 2m 30s | the patch passed |
| +1 :green_heart: | javac | 2m 30s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 33s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 9m 54s | Patch does not cause any errors with Hadoop 3.2.4 3.3.6. |
| +1 :green_heart: | spotless | 0m 40s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 33s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. |
| | | | 33m 46s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5534 |
| JIRA Issue | HBASE-28215 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux eff46ea00839 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 79 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/2/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1823140738

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 34s | master passed |
| +1 :green_heart: | compile | 3m 1s | master passed |
| +1 :green_heart: | checkstyle | 0m 45s | master passed |
| +1 :green_heart: | spotless | 0m 55s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 56s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 36s | the patch passed |
| +1 :green_heart: | compile | 3m 1s | the patch passed |
| +1 :green_heart: | javac | 3m 1s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 57s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 13m 7s | Patch does not cause any errors with Hadoop 3.2.4 3.3.6. |
| +1 :green_heart: | spotless | 0m 52s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 2m 7s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. |
| | | | 42m 17s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5530 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 6103e96c6cc1 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 77 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/7/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-25549) Provide a switch that allows avoiding reopening all regions when modifying a table to prevent RIT storms.
[ https://issues.apache.org/jira/browse/HBASE-25549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788828#comment-17788828 ]

Bryan Beaudreault commented on HBASE-25549:
-------------------------------------------

I'm curious how you plan to reopen regions with this feature. I don't think we have a user API for reopening regions. You can unassign and then assign, but that leaves you at risk of handling errors and recovery in your automation.

I wonder if https://issues.apache.org/jira/browse/HBASE-28215 is better for handling RIT storms, since the hmaster procedure framework already has error handling and state recovery built in, just not throttling (yet).

I still like this feature for things like changing region normalizer target sizes and such, in which case I plan to never reopen the regions (until they reopen organically as regions move).

> Provide a switch that allows avoiding reopening all regions when modifying a table to prevent RIT storms.
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-25549
>                 URL: https://issues.apache.org/jira/browse/HBASE-25549
>             Project: HBase
>          Issue Type: Improvement
>          Components: master, shell
>    Affects Versions: 3.0.0-alpha-1
>            Reporter: Zhuoyue Huang
>            Assignee: Zhuoyue Huang
>            Priority: Major
>             Fix For: 2.6.0, 3.0.0-beta-1, 2.5.7
>
> Under normal circumstances, modifying a table will cause all regions belonging to the table to enter RIT. Imagine the following two scenarios:
> # Someone entered the wrong configuration (e.g. negative 'hbase.busy.wait.multiplier.max' value) when altering the table, causing thousands of online regions to fail to open, leading to online accidents.
> # Modify the configuration of a table, but this modification is not urgent, the regions are not expected to enter RIT immediately.
> -'alter_lazy' is a new command to modify a table without reopening any online regions except those regions were assigned by other threads or split etc.-
>
> Provide an optional lazy_mode for the alter command to modify the TableDescriptor without the region entering the RIT. The modification will take effect when the region is reopened.
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1822968123

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 27s | master passed |
| +1 :green_heart: | compile | 0m 50s | master passed |
| +1 :green_heart: | shadedjars | 6m 4s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 29s | master passed |
||| _ Patch Compile Tests _ |
| -1 :x: | mvninstall | 1m 28s | root in the patch failed. |
| -1 :x: | compile | 0m 17s | hbase-server in the patch failed. |
| -0 :warning: | javac | 0m 17s | hbase-server in the patch failed. |
| -1 :x: | shadedjars | 3m 48s | patch has 10 errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 26s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 0m 15s | hbase-server in the patch failed. |
| | | | 18m 42s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5534 |
| JIRA Issue | HBASE-28215 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux a12bb45d8da1 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 1203c2014b |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| mvninstall | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt |
| compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt |
| javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt |
| shadedjars | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt |
| unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/testReport/ |
| Max. process+thread count | 79 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1822962908 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 34s | master passed | | +1 :green_heart: | compile | 0m 42s | master passed | | +1 :green_heart: | shadedjars | 4m 49s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | master passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 1m 8s | root in the patch failed. | | -1 :x: | compile | 0m 15s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 15s | hbase-server in the patch failed. | | -1 :x: | shadedjars | 3m 16s | patch has 10 errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 0m 16s | hbase-server in the patch failed. 
| | | | 15m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5534 | | JIRA Issue | HBASE-28215 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 73701f566a4a 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Temurin-1.8.0_352-b08 | | mvninstall | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk8-hadoop3-check/output/patch-mvninstall-root.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-server.txt | | shadedjars | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/testReport/ | | Max. process+thread count | 64 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
bbeaudreault commented on code in PR #5534: URL: https://github.com/apache/hbase/pull/5534#discussion_r1402218181

## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ReopenTableRegionsProcedure.java: ##

```diff
@@ -139,11 +165,29 @@ protected Flow executeFromState(MasterProcedureEnv env, ReopenTableRegionsState
       case REOPEN_TABLE_REGIONS_CONFIRM_REOPENED:
         regions = regions.stream().map(env.getAssignmentManager().getRegionStates()::checkReopened)
           .filter(l -> l != null).collect(Collectors.toList());
-        if (regions.isEmpty()) {
-          return Flow.NO_MORE_STATE;
+        // we need to create a set of region names because the HRegionLocation hashcode is only based
+        // on the server name
+        Set currentRegionBatchNames = currentRegionBatch.stream()
+          .map(r -> r.getRegion().getRegionName()).collect(Collectors.toSet());
+        currentRegionBatch = regions.stream()
+          .filter(r -> currentRegionBatchNames.contains(r.getRegion().getRegionName()))
+          .collect(Collectors.toList());
+        if (currentRegionBatch.isEmpty()) {
+          if (regions.isEmpty()) {
+            return Flow.NO_MORE_STATE;
+          } else {
+            if (reopenBatchBackoffMillis > 0) {
+              Thread.sleep(reopenBatchBackoffMillis);
```

Review Comment: I don't think we can sleep within a procedure. There are only a limited number of procedure executors, and this could hold up execution of other procedures. It's not super well documented, but I think the way to do this is to:

```java
setTimeout(Math.toIntExact(backoff));
setState(ProcedureProtos.ProcedureState.WAITING_TIMEOUT);
throw new ProcedureSuspendedException();
```

Then override setTimeoutFailure:

```java
@Override
protected synchronized boolean setTimeoutFailure(MasterProcedureEnv env) {
  setState(ProcedureProtos.ProcedureState.RUNNABLE);
  env.getProcedureScheduler().addFront(this);
  return false;
}
```

In fact, ReopenTableRegionsProcedure already has some code like this. So maybe you can integrate with it.

-- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
Apache-HBase commented on PR #5534: URL: https://github.com/apache/hbase/pull/5534#issuecomment-1822962791 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 22s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 54s | master passed | | +1 :green_heart: | compile | 2m 25s | master passed | | +1 :green_heart: | checkstyle | 0m 34s | master passed | | +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 24s | master passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 1m 15s | root in the patch failed. | | -1 :x: | compile | 0m 13s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 13s | hbase-server in the patch failed. | | -0 :warning: | checkstyle | 0m 32s | hbase-server: The patch generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | -1 :x: | hadoopcheck | 1m 18s | The patch causes 10 errors with Hadoop v3.2.4. | | -1 :x: | hadoopcheck | 2m 40s | The patch causes 10 errors with Hadoop v3.3.6. | | -1 :x: | spotless | 0m 32s | patch has 23 errors when running spotless:check, run spotless:apply to fix. | | -1 :x: | spotbugs | 0m 13s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 9s | The patch does not generate ASF License warnings. 
| | | | 15m 13s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5534 | | JIRA Issue | HBASE-28215 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux d0b1a06ab9e8 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1203c2014b | | Default Java | Eclipse Adoptium-11.0.17+8 | | mvninstall | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-mvninstall-root.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-compile-hbase-server.txt | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | hadoopcheck | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-javac-3.2.4.txt | | hadoopcheck | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-javac-3.3.6.txt | | spotless | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-spotless.txt | | spotbugs | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/artifact/yetus-general-check/output/patch-spotbugs-hbase-server.txt | | Max. process+thread count | 79 (vs. 
ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5534/1/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-28216) HDFS erasure coding support for table data dirs
Bryan Beaudreault created HBASE-28216:
-
Summary: HDFS erasure coding support for table data dirs
Key: HBASE-28216
URL: https://issues.apache.org/jira/browse/HBASE-28216
Project: HBase
Issue Type: New Feature
Reporter: Bryan Beaudreault

[Erasure coding|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html] (EC) is a hadoop-3 feature which can drastically reduce storage requirements, at the expense of locality. At my company we have a few hbase clusters which are extremely data dense and take mostly write traffic, fewer reads (cold data). We'd like to reduce the cost of these clusters, and EC is a great way to do that since it can reduce replication related storage costs by 50%.

It's possible to enable EC policies on sub-directories of HDFS. One can manually set this with {{hdfs ec -setPolicy -path /hbase/data/default/usertable -policy}}. This can work without any hbase support. One problem with that is a lack of visibility by operators into which tables might have EC enabled. I think this is where HBase can help. Here's my proposal:

* Add a new TableDescriptor and ColumnDescriptor field ERASURE_CODING_POLICY
* In ModifyTableProcedure preflightChecks, if ERASURE_CODING_POLICY is set, verify that the requested policy is available and enabled via DistributedFileSystem.getErasureCodingPolicies().
* During ModifyTableProcedure, add a new state for MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY.
** When adding or changing a policy, use DistributedFileSystem.setErasureCodingPolicy to sync it for the data and archive dir of that table (or column in table)
** When removing the property or setting it to empty, use DistributedFileSystem.unsetErasureCodingPolicy to remove it from the data and archive dir.

Since this new API is in hadoop-3 only, we'll need to add a reflection wrapper class for managing the calls and verifying that the API is available. We'll similarly do that API check in preflightChecks. 
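The reflection wrapper itself is not shown in the issue; a minimal self-contained sketch of the lookup-and-guard pattern it describes might look like the following. The class and method names here are hypothetical, and java.lang.String stands in for DistributedFileSystem so the sketch compiles without Hadoop on the classpath:

```java
import java.lang.reflect.Method;

// Sketch of the "check the API exists before calling it" pattern described
// above: resolve the method at runtime so the code still loads when the
// hadoop-3-only API is absent from the classpath.
public final class ErasureCodingUtilsSketch {

  // Returns the Method if present on this classpath, or null otherwise.
  static Method findMethod(Class<?> clazz, String name, Class<?>... paramTypes) {
    try {
      return clazz.getMethod(name, paramTypes);
    } catch (NoSuchMethodException e) {
      return null;
    }
  }

  // The preflight-style check: fail fast if the API is unavailable.
  static boolean isApiAvailable(Class<?> clazz, String methodName, Class<?>... paramTypes) {
    return findMethod(clazz, methodName, paramTypes) != null;
  }

  public static void main(String[] args) {
    // String#length always exists; String#setErasureCodingPolicy never does.
    System.out.println(isApiAvailable(String.class, "length"));                  // true
    System.out.println(isApiAvailable(String.class, "setErasureCodingPolicy")); // false
  }
}
```

In the real proposal the guarded target would be DistributedFileSystem#setErasureCodingPolicy(Path, String) and its unset/get counterparts, with the availability check run once in preflightChecks.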
-- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1822930537 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | patch | 0m 2s | https://github.com/apache/hbase/pull/5530 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/5530 | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/6/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1822931412 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | patch | 0m 4s | https://github.com/apache/hbase/pull/5530 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/5530 | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/6/console | | versions | git=2.25.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1822930810 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | patch | 0m 3s | https://github.com/apache/hbase/pull/5530 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/5530 | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/6/console | | versions | git=2.25.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (HBASE-28215) Region reopen procedure should support some sort of throttling
[ https://issues.apache.org/jira/browse/HBASE-28215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Mattingly reassigned HBASE-28215: - Assignee: Ray Mattingly > Region reopen procedure should support some sort of throttling > -- > > Key: HBASE-28215 > URL: https://issues.apache.org/jira/browse/HBASE-28215 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 >Reporter: Ray Mattingly >Assignee: Ray Mattingly >Priority: Major > > The mass reopening of regions caused by a table descriptor modification can > be quite disruptive. For latency/error sensitive workloads, like our user > facing traffic, we need to be very careful about when we modify table > descriptors, and it can be virtually impossible to do it painlessly for busy > tables. > It would be nice if we supported configurable batching/throttling of > reopenings so that the amplitude of any disruption can be kept relatively > small. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] HBASE-28215: region reopen procedure batching/throttling [hbase]
rmdmattingly opened a new pull request, #5534: URL: https://github.com/apache/hbase/pull/5534

https://issues.apache.org/jira/browse/HBASE-28215

The mass reopening of regions caused by a table descriptor modification can be quite disruptive. For latency/error sensitive workloads, like our user facing traffic, we need to be very careful about when we modify table descriptors, and it can be virtually impossible to do it painlessly for busy tables.

This PR introduces two new configurations:

1. hbase.table.regions.reopen.batch.size
   * This is an integer which represents the number of reopen procedures to create in a single batch. This defaults to Integer.MAX_VALUE, so it should be a no-op for any table with fewer than billions of regions.
2. hbase.table.regions.reopen.batch.backoff.ms
   * This is an integer which represents the millis to be waited between reopen batches. Hyperbolically, you could configure your batch size to 1 and your backoff ms to 60_000; this would result in reopening no more than 1 region per 60s as a result of a single reopening procedure.

@bbeaudreault @hgromer @eab148 @bozzkar

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
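As a rough, self-contained illustration of the batch.size setting (not the actual ReopenTableRegionsProcedure code; the class and method names here are hypothetical), splitting the region list into consecutive batches could be sketched as:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of batch-size throttling: with batch.size = N, a
// single reopen pass only touches N regions, and the procedure waits
// batch.backoff.ms between passes (the suspend/backoff itself is omitted).
public final class ReopenBatchingSketch {

  // Partition the region list into consecutive batches of at most batchSize.
  static <T> List<List<T>> toBatches(List<T> regions, int batchSize) {
    List<List<T>> batches = new ArrayList<>();
    for (int i = 0; i < regions.size(); i += batchSize) {
      batches.add(new ArrayList<>(regions.subList(i, Math.min(i + batchSize, regions.size()))));
    }
    return batches;
  }

  public static void main(String[] args) {
    List<String> regions = List.of("region-1", "region-2", "region-3", "region-4", "region-5");
    // batch.size = 2 -> batches of 2, 2, and 1 regions
    List<List<String>> batches = toBatches(regions, 2);
    System.out.println(batches.size()); // prints 3
    System.out.println(batches.get(2)); // prints [region-5]
  }
}
```

With the default batch size of Integer.MAX_VALUE this degenerates to a single batch, which matches the PR's claim that the feature is a no-op unless configured.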
[jira] [Updated] (HBASE-28215) Region reopen procedure should support some sort of throttling
[ https://issues.apache.org/jira/browse/HBASE-28215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28215: -- Component/s: master proc-v2 > Region reopen procedure should support some sort of throttling > -- > > Key: HBASE-28215 > URL: https://issues.apache.org/jira/browse/HBASE-28215 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 >Reporter: Ray Mattingly >Priority: Major > > The mass reopening of regions caused by a table descriptor modification can > be quite disruptive. For latency/error sensitive workloads, like our user > facing traffic, we need to be very careful about when we modify table > descriptors, and it can be virtually impossible to do it painlessly for busy > tables. > It would be nice if we supported configurable batching/throttling of > reopenings so that the amplitude of any disruption can be kept relatively > small. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache9 commented on code in PR #5530: URL: https://github.com/apache/hbase/pull/5530#discussion_r1402182196

## hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java: ##

```diff
@@ -642,12 +643,14 @@ void blockEvicted(BlockCacheKey cacheKey, BucketEntry bucketEntry, boolean decre
     blocksByHFile.remove(cacheKey);
     if (decrementBlockNumber) {
       this.blockNumber.decrement();
+      if (ioEngine.isPersistent()) {
+        removeFileFromPrefetch(cacheKey.getHfileName());
+      }
     }
     if (evictedByEvictionProcess) {
       cacheStats.evicted(bucketEntry.getCachedTime(), cacheKey.isPrimary());
```

Review Comment: What about this one? Why do we need to move this up?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache9 commented on PR #5530: URL: https://github.com/apache/hbase/pull/5530#issuecomment-1822914756 OK, two methods... I expanded the code and got the point... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-28215) Region reopen procedure should support some sort of throttling
Ray Mattingly created HBASE-28215: - Summary: Region reopen procedure should support some sort of throttling Key: HBASE-28215 URL: https://issues.apache.org/jira/browse/HBASE-28215 Project: HBase Issue Type: Improvement Reporter: Ray Mattingly The mass reopening of regions caused by a table descriptor modification can be quite disruptive. For latency/error sensitive workloads, like our user facing traffic, we need to be very careful about when we modify table descriptors, and it can be virtually impossible to do it painlessly for busy tables. It would be nice if we supported configurable batching/throttling of reopenings so that the amplitude of any disruption can be kept relatively small. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28186) Rebase CacheAwareBalance related commits into master branch
[ https://issues.apache.org/jira/browse/HBASE-28186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788800#comment-17788800 ] Wellington Chevreuil commented on HBASE-28186: -- [~ragarkar], I have now merged the commits into master and branch-3, but branch-2 is conflicting. Could you create branch-2 compatible PRs? > Rebase CacheAwareBalance related commits into master branch > --- > > Key: HBASE-28186 > URL: https://issues.apache.org/jira/browse/HBASE-28186 > Project: HBase > Issue Type: Sub-task >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27998) Enhance region metrics to include prefetch ratio for each region
[ https://issues.apache.org/jira/browse/HBASE-27998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27998: - Fix Version/s: 3.0.0-beta-1 > Enhance region metrics to include prefetch ratio for each region > > > Key: HBASE-27998 > URL: https://issues.apache.org/jira/browse/HBASE-27998 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Rahul Agarkar >Assignee: Rahul Agarkar >Priority: Major > Fix For: 3.0.0-beta-1, 4.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27997) Enhance prefetch executor to record region prefetch information along with the list of hfiles prefetched
[ https://issues.apache.org/jira/browse/HBASE-27997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wellington Chevreuil updated HBASE-27997:
-
Fix Version/s: 3.0.0-beta-1

> Enhance prefetch executor to record region prefetch information along with the list of hfiles prefetched
> ---------------------------------------------------------------------------------------------------------
>
> Key: HBASE-27997
> URL: https://issues.apache.org/jira/browse/HBASE-27997
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Affects Versions: 2.6.0, 3.0.0-alpha-4
> Reporter: Rahul Agarkar
> Assignee: Rahul Agarkar
> Priority: Major
> Fix For: 3.0.0-beta-1, 4.0.0-alpha-1
>
> HBASE-27313 implemented the prefetch persistence feature, where it persists the list of hFiles prefetched in the bucket cache. This information is used to reconstruct the cache in the event of a server restart/crash. Currently, only the list of hFiles is persisted.
> However, for the new PrefetchAwareLoadBalancer (work in progress) to work, we need information about how much of a region is prefetched on a region server.
> This Jira introduces an additional map in the prefetch executor to maintain information about how much of a region has been prefetched on that region server. The prefetched size of a region is calculated as the total size of all hFiles prefetched for that region.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27999) Implement cache aware load balancer
[ https://issues.apache.org/jira/browse/HBASE-27999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wellington Chevreuil updated HBASE-27999:
-
Fix Version/s: 3.0.0-beta-1

> Implement cache aware load balancer
> -----------------------------------
>
> Key: HBASE-27999
> URL: https://issues.apache.org/jira/browse/HBASE-27999
> Project: HBase
> Issue Type: Sub-task
> Components: Balancer
> Reporter: Rahul Agarkar
> Assignee: Rahul Agarkar
> Priority: Major
> Fix For: 3.0.0-beta-1, 4.0.0-alpha-1
>
> HBase uses an ephemeral cache to cache blocks by reading them from the slow storage and storing them in the bucket cache. This cache is warmed up every time a region server is started. Depending on the data size and the configured cache size, the cache warm-up can take anywhere between a few minutes and a few hours. Doing this every time the region server starts can be a very expensive process. To eliminate this, HBASE-27313 implemented the cache persistence feature, where the region servers periodically persist the blocks cached in the bucket cache. This persisted information is then used to resurrect the cache in the event of a region server restart, whether after a normal restart or a crash.
> This feature aims to enhance this capability of HBase so that the balancer implementation considers the cache allocation of each region on region servers when calculating a new assignment plan. It uses the region/region server cache allocation info reported by region servers to calculate the percentage of HFiles cached for each region on the hosting server, and then uses that as another factor when deciding on an optimal, new assignment plan.
>
> A design document describing the balancer can be found at https://docs.google.com/document/d/1A8-eVeRhZjwL0hzFw9wmXl8cGP4BFomSlohX2QcaFg4/edit?usp=sharing

-- This message was sent by Atlassian Jira (v8.20.10#820010)
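As a toy illustration of the idea in the issue description above (this is not the actual cache aware balancer code; all names and the cost formula are hypothetical), one could score a candidate assignment plan by how much of each region's HFile data is already cached on its proposed host:

```java
import java.util.Map;

// Hypothetical cost term: lower when regions land on servers that already
// have a large fraction of their HFiles cached, as reported by the region
// servers. A stochastic balancer could weigh this against other factors.
public final class CacheAwareCostSketch {

  // cacheRatioByRegionAndServer.get(region).get(server) = fraction of the
  // region's HFiles cached on that server (0.0 when unknown).
  static double cacheCost(Map<String, Map<String, Double>> cacheRatioByRegionAndServer,
      Map<String, String> proposedAssignment) {
    double totalMiss = 0.0;
    for (Map.Entry<String, String> e : proposedAssignment.entrySet()) {
      double ratio = cacheRatioByRegionAndServer
          .getOrDefault(e.getKey(), Map.of())
          .getOrDefault(e.getValue(), 0.0);
      totalMiss += 1.0 - ratio; // cheaper when the data is already cached
    }
    return totalMiss / proposedAssignment.size();
  }

  public static void main(String[] args) {
    Map<String, Map<String, Double>> ratios = Map.of(
        "region-a", Map.of("rs1", 1.0, "rs2", 0.0),
        "region-b", Map.of("rs1", 0.25));
    // region-a fully cached on rs1, region-b 25% cached on rs1
    double cost = cacheCost(ratios, Map.of("region-a", "rs1", "region-b", "rs1"));
    System.out.println(cost); // prints 0.375
  }
}
```

The actual design, including how the ratio interacts with locality and skew costs, is in the linked design document.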
[jira] [Commented] (HBASE-28120) Provide the switch to avoid reopening regions in the alter sync command
[ https://issues.apache.org/jira/browse/HBASE-28120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788796#comment-17788796 ] Gourab Taparia commented on HBASE-28120: [~bbeaudreault] Got it. Sounds good. I will then work on updating my PR, close this Jira, and raise the PR on the original Jira itself. > Provide the switch to avoid reopening regions in the alter sync command > --- > > Key: HBASE-28120 > URL: https://issues.apache.org/jira/browse/HBASE-28120 > Project: HBase > Issue Type: Sub-task > Components: master, shell >Affects Versions: 2.0.0-alpha-1 >Reporter: Gourab Taparia >Assignee: Gourab Taparia >Priority: Major > Fix For: 2.6.0 > > > As part of the sub-task, as HBase 2 supports both Async and Sync API, this > task is to add this support/feature to HBase 2's Sync API. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28120) Provide the switch to avoid reopening regions in the alter sync command
[ https://issues.apache.org/jira/browse/HBASE-28120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788791#comment-17788791 ] Bryan Beaudreault commented on HBASE-28120: --- Yea the async/sync issue is a common one for any contributions to the client side of hbase. Usually we'll do it in the same Jira and create a backport PR which starts with cherry-picking the original commit, then adds a new commit which re-implements (as necessary) for the non-async clients. > Provide the switch to avoid reopening regions in the alter sync command > --- > > Key: HBASE-28120 > URL: https://issues.apache.org/jira/browse/HBASE-28120 > Project: HBase > Issue Type: Sub-task > Components: master, shell >Affects Versions: 2.0.0-alpha-1 >Reporter: Gourab Taparia >Assignee: Gourab Taparia >Priority: Major > Fix For: 2.6.0 > > > As part of the sub-task, as HBase 2 supports both Async and Sync API, this > task is to add this support/feature to HBase 2's Sync API. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-28120) Provide the switch to avoid reopening regions in the alter sync command
[ https://issues.apache.org/jira/browse/HBASE-28120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788788#comment-17788788 ] Gourab Taparia edited comment on HBASE-28120 at 11/22/23 2:18 PM: -- [~GeorryHuang] Yes, i will continue on this in couple of days. [~bbeaudreault] The reason I opened a sub-task Jira for better tracking because hbase 3 only has alter async support, and hbase 2 has support for both sync and async in alter, and the changes were not directly compatible. I am okay with with backport in the orignal Jira too. Since this Jira is a sub-task of HBASE-25549, maybe we can update the fixVersions in the parent Jira too, once this is resolved. I hope this makes sense as this jira, is a sub-task of the parent Jira - and if this is the followed standard. I am fine with either. was (Author: gourab.taparia): [~GeorryHuang] Yes, i will continue on this in couple of days. [~bbeaudreault] The reason I opened a sub-task Jira for better tracking because hbase 3 only has alter async support, and hbase 2 has support for both sync and async in alter, and the changes were not directly compatible. I am okay with with backport in the orignal Jira too. Since this Jira is a sub-task of HBASE-25549, maybe we can update the fixVersions in the parent Jira too, once this is resolved. I hope this makes sense as this jira, is a sub-task of the parent jira. I am fine with either. > Provide the switch to avoid reopening regions in the alter sync command > --- > > Key: HBASE-28120 > URL: https://issues.apache.org/jira/browse/HBASE-28120 > Project: HBase > Issue Type: Sub-task > Components: master, shell >Affects Versions: 2.0.0-alpha-1 >Reporter: Gourab Taparia >Assignee: Gourab Taparia >Priority: Major > Fix For: 2.6.0 > > > As part of the sub-task, as HBase 2 supports both Async and Sync API, this > task is to add this support/feature to HBase 2's Sync API. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27999) Implement cache aware load balancer
[ https://issues.apache.org/jira/browse/HBASE-27999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27999: - Fix Version/s: 4.0.0-alpha-1 > Implement cache aware load balancer > --- > > Key: HBASE-27999 > URL: https://issues.apache.org/jira/browse/HBASE-27999 > Project: HBase > Issue Type: Sub-task > Components: Balancer >Reporter: Rahul Agarkar >Assignee: Rahul Agarkar >Priority: Major > Fix For: 4.0.0-alpha-1 > > > HBase uses an ephemeral cache for blocks, reading them from the slow > storages and storing them in the bucket cache. This cache is warmed up > every time a region server is started. Depending on the data size and the > configured cache size, the cache warm-up can take anywhere between a few > minutes and a few hours. Doing this every time the region server starts can be a > very expensive process. To eliminate this, HBASE-27313 implemented the cache > persistence feature, where the region servers periodically persist the blocks > cached in the bucket cache. This persisted information is then used to > resurrect the cache in the event of a region server restart, whether a normal > restart or a crash. > This feature builds on that capability: the balancer implementation considers > the cache allocation of each region on the region servers when calculating a > new assignment plan. It uses the region/region server cache allocation info > reported by the region servers to calculate the percentage of HFiles cached > for each region on its hosting server, and then uses that as another factor > when deciding on an optimal new assignment plan. > > A design document describing the balancer can be found at > https://docs.google.com/document/d/1A8-eVeRhZjwL0hzFw9wmXl8cGP4BFomSlohX2QcaFg4/edit?usp=sharing -- This message was sent by Atlassian Jira (v8.20.10#820010)
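[Editor's note] The cache-aware balancing idea in HBASE-27999 can be sketched as a cost term: for each region, compute the fraction of its HFile bytes already cached on a candidate server, and penalize assignment plans that place regions on servers holding little of their cached data. The following is a minimal illustrative sketch; the function names and data shapes are assumptions for clarity, not HBase's actual balancer API.

```python
# Hypothetical sketch of a cache-locality cost factor for a balancer.
# `plan` maps region -> server, `region_sizes` maps region -> total HFile
# bytes, `cached` maps (region, server) -> bytes of that region cached there.

def cache_ratio(region_size, cached_bytes):
    """Fraction of a region's HFile bytes cached on a server (0.0 - 1.0)."""
    if region_size == 0:
        return 1.0
    return min(cached_bytes / region_size, 1.0)

def cache_cost(plan, region_sizes, cached):
    """Cost in [0, 1]: 0 when every region is fully cached on its assigned
    server, 1 when nothing is cached on the assigned servers."""
    if not plan:
        return 0.0
    ratios = [
        cache_ratio(region_sizes[r], cached.get((r, s), 0))
        for r, s in plan.items()
    ]
    return 1.0 - sum(ratios) / len(ratios)

sizes = {"r1": 100, "r2": 200}
cached = {("r1", "s1"): 100, ("r2", "s1"): 0, ("r2", "s2"): 150}
# r1 fully cached on s1, r2 75% cached on s2 -> low cost
good_plan = {"r1": "s1", "r2": "s2"}
# r2 placed on s1, where none of its data is cached -> higher cost
bad_plan = {"r1": "s1", "r2": "s1"}
```

A real balancer would combine such a term with the existing cost factors (region count skew, locality, etc.) when scoring candidate plans.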
[jira] [Updated] (HBASE-27997) Enhance prefetch executor to record region prefetch information along with the list of hfiles prefetched
[ https://issues.apache.org/jira/browse/HBASE-27997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27997: - Fix Version/s: 4.0.0-alpha-1 > Enhance prefetch executor to record region prefetch information along with > the list of hfiles prefetched > > > Key: HBASE-27997 > URL: https://issues.apache.org/jira/browse/HBASE-27997 > Project: HBase > Issue Type: Sub-task > Components: BucketCache >Affects Versions: 2.6.0, 3.0.0-alpha-4 >Reporter: Rahul Agarkar >Assignee: Rahul Agarkar >Priority: Major > Fix For: 4.0.0-alpha-1 > > > HBASE-27313 implemented the prefetch persistence feature, where it persists > the list of hFiles prefetched in the bucket cache. This information is used > to reconstruct the cache in the event of a server restart/crash. > Currently, only the list of hFiles is persisted. > However, for the new PrefetchAwareLoadBalancer (work in progress) to work, we > need information about how much of a region is prefetched on a region server. > This Jira introduces an additional map in the prefetch executor to maintain > the information about how much a region has been prefetched on that region > server. The prefetched size of a region is calculated as the total size > of all hFiles prefetched for that region. -- This message was sent by Atlassian Jira (v8.20.10#820010)
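[Editor's note] The per-region bookkeeping HBASE-27997 describes amounts to aggregating the sizes of a region's prefetched hFiles into a region -> bytes map. A toy sketch of that aggregation, with illustrative record shapes that are assumptions rather than the actual prefetch executor structures:

```python
# Illustrative sketch: alongside the persisted list of prefetched hFiles,
# keep a map of region -> total prefetched bytes, computed as the sum of the
# sizes of that region's prefetched hFiles.
from collections import defaultdict

def region_prefetch_sizes(prefetched_hfiles):
    """prefetched_hfiles: iterable of (region_name, hfile_path, hfile_size).
    Returns {region_name: total prefetched bytes for that region}."""
    totals = defaultdict(int)
    for region, _path, size in prefetched_hfiles:
        totals[region] += size
    return dict(totals)

files = [
    ("region-a", "f1", 64),
    ("region-a", "f2", 32),
    ("region-b", "f3", 128),
]
```

Dividing each total by the region's overall hFile size would give the prefetch ratio that the balancer work (and the region metrics in HBASE-27998) needs.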
[jira] [Updated] (HBASE-27998) Enhance region metrics to include prefetch ratio for each region
[ https://issues.apache.org/jira/browse/HBASE-27998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-27998: - Fix Version/s: 4.0.0-alpha-1 > Enhance region metrics to include prefetch ratio for each region > > > Key: HBASE-27998 > URL: https://issues.apache.org/jira/browse/HBASE-27998 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: Rahul Agarkar >Assignee: Rahul Agarkar >Priority: Major > Fix For: 4.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] Rebase HBASE-27389 on branch-3 [hbase]
wchevreuil closed pull request #5533: Rebase HBASE-27389 on branch-3 URL: https://github.com/apache/hbase/pull/5533 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Minor improvements to the README of hbck2 [hbase-operator-tools]
lfrancke merged PR #138: URL: https://github.com/apache/hbase-operator-tools/pull/138 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Rebase HBASE-27389 on branch-3 [hbase]
Apache-HBase commented on PR #5533: URL: https://github.com/apache/hbase/pull/5533#issuecomment-1822738587 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 12s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 1s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-3 Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 47s | branch-3 passed | | +1 :green_heart: | compile | 4m 37s | branch-3 passed | | +1 :green_heart: | checkstyle | 1m 21s | branch-3 passed | | +1 :green_heart: | spotless | 0m 45s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 5m 19s | branch-3 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 29s | root in the patch failed. | | -1 :x: | compile | 0m 7s | hbase-protocol-shaded in the patch failed. | | -1 :x: | compile | 0m 15s | hbase-client in the patch failed. | | -1 :x: | compile | 0m 12s | hbase-balancer in the patch failed. | | -1 :x: | compile | 0m 28s | hbase-server in the patch failed. | | -0 :warning: | cc | 0m 7s | hbase-protocol-shaded in the patch failed. | | -0 :warning: | cc | 0m 15s | hbase-client in the patch failed. | | -0 :warning: | cc | 0m 12s | hbase-balancer in the patch failed. | | -0 :warning: | cc | 0m 28s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 7s | hbase-protocol-shaded in the patch failed. | | -0 :warning: | javac | 0m 15s | hbase-client in the patch failed. | | -0 :warning: | javac | 0m 12s | hbase-balancer in the patch failed. 
| | -0 :warning: | javac | 0m 28s | hbase-server in the patch failed. | | -0 :warning: | checkstyle | 0m 8s | hbase-balancer: The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | -1 :x: | hadoopcheck | 0m 9s | The patch causes 76 errors with Hadoop v3.2.4. | | -1 :x: | hadoopcheck | 0m 17s | The patch causes 76 errors with Hadoop v3.3.6. | | -1 :x: | hbaseprotoc | 0m 7s | hbase-protocol-shaded in the patch failed. | | -1 :x: | hbaseprotoc | 0m 16s | hbase-client in the patch failed. | | -1 :x: | hbaseprotoc | 0m 12s | hbase-balancer in the patch failed. | | -1 :x: | hbaseprotoc | 0m 26s | hbase-server in the patch failed. | | +1 :green_heart: | spotless | 0m 38s | patch has no errors when running spotless:check. | | -1 :x: | spotbugs | 0m 7s | hbase-protocol-shaded in the patch failed. | | -1 :x: | spotbugs | 0m 12s | hbase-client in the patch failed. | | -1 :x: | spotbugs | 0m 12s | hbase-balancer in the patch failed. | | -1 :x: | spotbugs | 0m 26s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 36s | The patch does not generate ASF License warnings. 
| | | | 24m 53s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5533 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile cc hbaseprotoc prototool | | uname | Linux 928dee50a2f6 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-3 / 92f8066b3f | | Default Java | Eclipse Adoptium-11.0.17+8 | | mvninstall | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-general-check/output/patch-mvninstall-root.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-general-check/output/patch-compile-hbase-protocol-shaded.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-general-check/output/patch-compile-hbase-client.txt | | compile |
Re: [PR] Rebase HBASE-27389 on branch-3 [hbase]
Apache-HBase commented on PR #5533: URL: https://github.com/apache/hbase/pull/5533#issuecomment-1822736862 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-3 Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 59s | branch-3 passed | | +1 :green_heart: | compile | 2m 11s | branch-3 passed | | +1 :green_heart: | shadedjars | 5m 55s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 16s | branch-3 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 29s | root in the patch failed. | | -1 :x: | compile | 0m 8s | hbase-protocol-shaded in the patch failed. | | -1 :x: | compile | 0m 18s | hbase-client in the patch failed. | | -1 :x: | compile | 0m 15s | hbase-balancer in the patch failed. | | -1 :x: | compile | 0m 31s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 8s | hbase-protocol-shaded in the patch failed. | | -0 :warning: | javac | 0m 18s | hbase-client in the patch failed. | | -0 :warning: | javac | 0m 15s | hbase-balancer in the patch failed. | | -0 :warning: | javac | 0m 31s | hbase-server in the patch failed. | | -1 :x: | shadedjars | 1m 45s | patch has 76 errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 7s | hbase-protocol-shaded in the patch failed. | | -0 :warning: | javadoc | 0m 28s | hbase-server generated 3 new + 22 unchanged - 0 fixed = 25 total (was 22) | ||| _ Other Tests _ | | -1 :x: | unit | 0m 8s | hbase-protocol-shaded in the patch failed. 
| | +1 :green_heart: | unit | 2m 42s | hbase-common in the patch passed. | | -1 :x: | unit | 0m 18s | hbase-client in the patch failed. | | -1 :x: | unit | 0m 13s | hbase-balancer in the patch failed. | | -1 :x: | unit | 0m 27s | hbase-server in the patch failed. | | | | 23m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5533 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b95a5c9d4f29 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-3 / 92f8066b3f | | Default Java | Temurin-1.8.0_352-b08 | | mvninstall | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-mvninstall-root.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-protocol-shaded.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-client.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-balancer.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-protocol-shaded.txt | | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-client.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-balancer.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-compile-hbase-server.txt | | shadedjars | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt | | javadoc |
Re: [PR] Rebase HBASE-27389 on branch-3 [hbase]
Apache-HBase commented on PR #5533: URL: https://github.com/apache/hbase/pull/5533#issuecomment-1822733302 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-3 Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 52s | branch-3 passed | | +1 :green_heart: | compile | 2m 2s | branch-3 passed | | +1 :green_heart: | shadedjars | 5m 13s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 9s | branch-3 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 28s | root in the patch failed. | | -1 :x: | compile | 0m 7s | hbase-protocol-shaded in the patch failed. | | -1 :x: | compile | 0m 15s | hbase-client in the patch failed. | | -1 :x: | compile | 0m 11s | hbase-balancer in the patch failed. | | -1 :x: | compile | 0m 27s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 7s | hbase-protocol-shaded in the patch failed. | | -0 :warning: | javac | 0m 15s | hbase-client in the patch failed. | | -0 :warning: | javac | 0m 11s | hbase-balancer in the patch failed. | | -0 :warning: | javac | 0m 27s | hbase-server in the patch failed. | | -1 :x: | shadedjars | 1m 24s | patch has 76 errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 6s | hbase-protocol-shaded in the patch failed. | | -0 :warning: | javadoc | 0m 22s | hbase-server generated 3 new + 94 unchanged - 1 fixed = 97 total (was 95) | ||| _ Other Tests _ | | -1 :x: | unit | 0m 7s | hbase-protocol-shaded in the patch failed. 
| | +1 :green_heart: | unit | 2m 16s | hbase-common in the patch passed. | | -1 :x: | unit | 0m 15s | hbase-client in the patch failed. | | -1 :x: | unit | 0m 12s | hbase-balancer in the patch failed. | | -1 :x: | unit | 0m 27s | hbase-server in the patch failed. | | | | 21m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5533 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 43bff4b5be91 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-3 / 92f8066b3f | | Default Java | Eclipse Adoptium-11.0.17+8 | | mvninstall | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-protocol-shaded.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-client.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-balancer.txt | | compile | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-protocol-shaded.txt | | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-client.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-balancer.txt | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt | | shadedjars | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5533/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt | | javadoc |
[jira] [Updated] (HBASE-28213) Evalue using hbase-shaded-client-byo-hadoop for Spark connector
[ https://issues.apache.org/jira/browse/HBASE-28213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28213: Description: Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime. While we don't actually specify what HBase libraries are needed on the Spark client side for the connector, at least the Cloudera docs specify the classes provided by "hbase mapredcp" which includes the full unshaded Hadoop JAR set. Investigate whether *hbase-shaded-client-byo-hadoop* and the *hbase-client-api* and *hbase-client-runtime* is enough for the connector, and if yes, document how to set the Spark classpath. Alternatively, if *hbase-shaded-client-byo-hadoop* is not enough, check if *hbase-shaded-mapreduce* plus the above two shaded Hadoop client JAR provides everything needed. was: Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime. While we don't actually specify what HBase libraries are needed on the Spark client side for the connector, at least the Cloudera docs specify the classes provided by "hbase mapredcp" which includes the full unshaded Hadoop JAR set. Investigate whether *hbase-shaded-client-byo-hadoop* and the *hbase-client-api* and *hbase-client-runtime* is enough for the connector, and if yes, document how to set the Spark classpath. > Evalue using hbase-shaded-client-byo-hadoop for Spark connector > --- > > Key: HBASE-28213 > URL: https://issues.apache.org/jira/browse/HBASE-28213 > Project: HBase > Issue Type: Improvement > Components: spark >Reporter: Istvan Toth >Priority: Major > > Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime. > While we don't actually specify what HBase libraries are needed on the Spark > client side for the connector, at least the Cloudera docs specify the classes > provided by "hbase mapredcp" > which includes the full unshaded Hadoop JAR set. 
> Investigate whether *hbase-shaded-client-byo-hadoop* and the > *hbase-client-api* and *hbase-client-runtime* are enough for the connector, > and if yes, document how to set the Spark classpath. > Alternatively, if *hbase-shaded-client-byo-hadoop* is not enough, check if > *hbase-shaded-mapreduce* plus the above two shaded Hadoop client JARs provide > everything needed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28214) Document Spark classpath requirements for the Spark connector
[ https://issues.apache.org/jira/browse/HBASE-28214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28214: Description: The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase. The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp". It is, however, inconsistent, as "hbase mapredcp" includes the unshaded hadoop libraries, while the example command line omits the hadoop libraries (and seems to depend on the existing Hadoop JARs on the Spark classpath). Figure this out, and update the documentation. was: The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase. The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp" It is, however inconsistent, as "hbase mapredcp" includes the unshaded hadoop libraries, while the example command line omits the hadoop libraries. (and seem to depend on the Hadoop JARs included in the hbase-shaded-mapreduce JAR, or perhaps on the existing Hadoop JARs on the Spark classpath, depending on the classpath ordering and the phase of the moon). Figure this out, and update the documentation. > Document Spark classpath requirements for the Spark connector > - > > Key: HBASE-28214 > URL: https://issues.apache.org/jira/browse/HBASE-28214 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Istvan Toth >Priority: Major > > The README for the Spark connector details the classpath requirements for the > HBase server side, but does not talk about how to set up the Spark classpath > for HBase. 
> The Cloudera docs > [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] > suggest using "hbase mapredcp". It is, however, inconsistent, as "hbase > mapredcp" includes the unshaded hadoop libraries, while the example command > line omits the hadoop libraries (and seems to depend on the existing > Hadoop JARs on the Spark classpath). > Figure this out, and update the documentation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28214) Document Spark classpath requirements for the Spark connector
[ https://issues.apache.org/jira/browse/HBASE-28214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17788756#comment-17788756 ] Istvan Toth commented on HBASE-28214: - Ideally, we would not add redundant hadoop classes to the classpath. > Document Spark classpath requirements for the Spark connector > - > > Key: HBASE-28214 > URL: https://issues.apache.org/jira/browse/HBASE-28214 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Istvan Toth >Priority: Major > > The README for the Spark connector details the classpath requirements for the > HBase server side, but does not talk about how to set up the Spark classpath > for HBase. > The Cloudera docs > [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] > suggest using "hbase mapredcp". It is, however, inconsistent, as "hbase > mapredcp" includes the unshaded hadoop libraries, while the example command > line omits the hadoop libraries (and seems to depend on the existing > Hadoop JARs on the Spark classpath). > Figure this out, and update the documentation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
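[Editor's note] The "redundant hadoop classes" concern above can be illustrated with a toy filter: given a colon-separated classpath like the output of "hbase mapredcp", drop the unshaded hadoop-* JARs that would duplicate the Hadoop classes Spark already ships. The JAR paths below are made up for illustration; real "hbase mapredcp" output depends on the installation.

```python
# Toy sketch: strip unshaded Hadoop JARs from a mapredcp-style classpath,
# keeping shaded artifacts such as hbase-shaded-mapreduce intact.
import os

def strip_unshaded_hadoop(classpath):
    """Remove entries whose JAR file name starts with 'hadoop-'."""
    kept = [
        entry for entry in classpath.split(":")
        if not os.path.basename(entry).startswith("hadoop-")
    ]
    return ":".join(kept)

cp = ":".join([
    "/opt/hbase/lib/hbase-shaded-mapreduce-2.6.0.jar",
    "/opt/hadoop/share/hadoop-common-3.3.6.jar",
    "/opt/hadoop/share/hadoop-hdfs-client-3.3.6.jar",
])
```

This is only a name-prefix heuristic for discussion; the actual fix the JIRA asks for is to document which HBase artifacts belong on the Spark classpath in the first place.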
[jira] [Updated] (HBASE-28214) Document Spark classpath requirements for the Spark connector
[ https://issues.apache.org/jira/browse/HBASE-28214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28214: Description: The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase. The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp" It is, however inconsistent, as "hbase mapredcp" includes the unshaded hadoop libraries, while the example command line omits the hadoop libraries. (and seem to depend on the Hadoop JARs included in the hbase-shaded-mapreduce JAR, or perhaps on the existing Hadoop JARs on the Spark classpath, depending on the classpath ordering and the phase of the moon). Figure this out, and update the documentation. was: The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase. The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp" It is, however inconsistent, as "hbase mapredcp" includes the unshaded hadoop libraries, while the example command line omits the hadoop libraries. (And depends on the Hadoop JARs already available on the Figure this out, and update the documentation. > Document Spark classpath requirements for the Spark connector > - > > Key: HBASE-28214 > URL: https://issues.apache.org/jira/browse/HBASE-28214 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Istvan Toth >Priority: Major > > The README for the Spark connector details the classpath requirements for the > HBase server side, but does not talk about how to set up the Spark classpath > for HBase. 
> The Cloudera docs > [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] > suggest using "hbase mapredcp" It is, however inconsistent, as "hbase > mapredcp" includes the unshaded hadoop libraries, while the example command > line omits the hadoop libraries. > (and seem to depend on the Hadoop JARs included in the hbase-shaded-mapreduce > JAR, or perhaps on the existing Hadoop JARs on the Spark classpath, depending > on the classpath ordering and the phase of the moon). > Figure this out, and update the documentation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28214) Document Spark classpath requirements for the Spark connector
[ https://issues.apache.org/jira/browse/HBASE-28214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28214: Description: The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase. The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp" It is, however inconsistent, as "hbase mapredcp" includes the unshaded hadoop libraries, while the example command line omits the hadoop libraries. (And depends on the Hadoop JARs already available on the Figure this out, and update the documentation. was: The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase. The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp" It is, however inconsistent, as "hbase mapredcp" includes the unshaded hadoop libraries, while the example command line omits the hadoop libraries. Figure this out, and update the documentation. > Document Spark classpath requirements for the Spark connector > - > > Key: HBASE-28214 > URL: https://issues.apache.org/jira/browse/HBASE-28214 > Project: HBase > Issue Type: Bug > Components: spark >Reporter: Istvan Toth >Priority: Major > > The README for the Spark connector details the classpath requirements for the > HBase server side, but does not talk about how to set up the Spark classpath > for HBase. 
> The Cloudera docs > [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] > suggest using "hbase mapredcp" It is, however inconsistent, as "hbase > mapredcp" includes the unshaded hadoop libraries, while the example command > line omits the hadoop libraries. (And depends on the Hadoop JARs already > available on the > Figure this out, and update the documentation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28213) Evalue using hbase-shaded-client-byo-hadoop for Spark connector
[ https://issues.apache.org/jira/browse/HBASE-28213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28213: Description: Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime. While we don't actually specify what HBase libraries are needed on the Spark client side for the connector, at least the Cloudera docs specify the classes provided by "hbase mapredcp" which includes the full unshaded Hadoop JAR set. Investigate whether *hbase-shaded-client-byo-hadoop* and the *hbase-client-api* and *hbase-client-runtime* is enough for the connector, and if yes, document how to set the Spark classpath. was: Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime. While we don't actually specify what HBase libraries are needed on the Spark client side for the connector, at least the Cloudera docs specify the classes provided by "hbase mapredcp" which includes the full unshaded Hadoop JAR set. Investigate whether *hbase-shaded-client-byo-hadoop* and the *hbase-client-api* and *hbase-client-runtime __* is enough for the connector, and if yes, document how to set the Spark classpath. > Evalue using hbase-shaded-client-byo-hadoop for Spark connector > --- > > Key: HBASE-28213 > URL: https://issues.apache.org/jira/browse/HBASE-28213 > Project: HBase > Issue Type: Improvement > Components: spark >Reporter: Istvan Toth >Priority: Major > > Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime. > While we don't actually specify what HBase libraries are needed on the Spark > client side for the connector, at least the Cloudera docs specify the classes > provided by "hbase mapredcp" > which includes the full unshaded Hadoop JAR set. > Investigate whether *hbase-shaded-client-byo-hadoop* and the > *hbase-client-api* and *hbase-client-runtime* is enough for the connector, > and if yes, document how to set the Spark classpath. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] Rebase HBASE-27389 on branch-3 [hbase]
wchevreuil opened a new pull request, #5533:
URL: https://github.com/apache/hbase/pull/5533

I'm opening this to make sure precommits pass before merging the commit into branch-3.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-28214) Document Spark classpath requirements for the Spark connector
Istvan Toth created HBASE-28214:
-----------------------------------
             Summary: Document Spark classpath requirements for the Spark connector
                 Key: HBASE-28214
                 URL: https://issues.apache.org/jira/browse/HBASE-28214
             Project: HBase
          Issue Type: Bug
          Components: spark
            Reporter: Istvan Toth

The README for the Spark connector details the classpath requirements for the HBase server side, but does not talk about how to set up the Spark classpath for HBase.

The Cloudera docs [https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/accessing-hbase/topics/hbase-configure-spark-connector.html] suggest using "hbase mapredcp". This is, however, inconsistent, as "hbase mapredcp" includes the unshaded Hadoop libraries, while the example command line omits the Hadoop libraries.

Figure this out, and update the documentation.
[jira] [Created] (HBASE-28213) Evalue using hbase-shaded-client-byo-hadoop for Spark connector
Istvan Toth created HBASE-28213:
-----------------------------------
             Summary: Evalue using hbase-shaded-client-byo-hadoop for Spark connector
                 Key: HBASE-28213
                 URL: https://issues.apache.org/jira/browse/HBASE-28213
             Project: HBase
          Issue Type: Improvement
          Components: spark
            Reporter: Istvan Toth

Since 3.2 Spark now uses hadoop-client-api and hadoop-client-runtime.

While we don't actually specify what HBase libraries are needed on the Spark client side for the connector, at least the Cloudera docs specify the classes provided by "hbase mapredcp", which includes the full unshaded Hadoop JAR set.

Investigate whether *hbase-shaded-client-byo-hadoop* and the *hbase-client-api* and *hbase-client-runtime* is enough for the connector, and if yes, document how to set the Spark classpath.
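For context, the alternative being evaluated can be sketched as a classpath built from the shaded HBase client plus the thin Hadoop client jars, instead of the full unshaded jar set that "hbase mapredcp" emits. This is only an illustration: the jar paths and version numbers below are hypothetical, not taken from the issue, and nothing here is a confirmed outcome of HBASE-28213.

```shell
# Hypothetical jar locations -- adjust to your installation; these paths and
# versions are illustrative, not prescribed by HBASE-28213/HBASE-28214.
HBASE_SHADED_JAR="/opt/hbase/lib/shaded-clients/hbase-shaded-client-byo-hadoop-2.5.6.jar"
HADOOP_CLIENT_JARS="/opt/hadoop/hadoop-client-api-3.3.6.jar:/opt/hadoop/hadoop-client-runtime-3.3.6.jar"

# Compose the connector classpath: the shaded HBase client first, then the
# thin Hadoop client jars (no full unshaded Hadoop jar set).
CONNECTOR_CP="${HBASE_SHADED_JAR}:${HADOOP_CLIENT_JARS}"
echo "${CONNECTOR_CP}"

# If this combination proves sufficient, it would be handed to Spark, e.g.:
#   spark-shell --conf spark.driver.extraClassPath="${CONNECTOR_CP}" \
#               --conf spark.executor.extraClassPath="${CONNECTOR_CP}"
```

The contrast with the Cloudera-documented approach is that `$(hbase mapredcp)` would pull the unshaded Hadoop libraries onto the Spark classpath alongside Spark's own hadoop-client-api/hadoop-client-runtime jars, which is exactly the inconsistency the issue asks to resolve.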
Re: [PR] HBASE-28211 BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated [hbase]
Apache-HBase commented on PR #5530:
URL: https://github.com/apache/hbase/pull/5530#issuecomment-1822563906

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 0s | Docker mode activated. |
| -1 :x: | patch | 0m 3s | https://github.com/apache/hbase/pull/5530 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hbase/pull/5530 |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5530/5/console |
| versions | git=2.25.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-28120) Provide the switch to avoid reopening regions in the alter sync command
[ https://issues.apache.org/jira/browse/HBASE-28120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788673#comment-17788673 ]

Zhuoyue Huang commented on HBASE-28120:
---------------------------------------

[~gourab.taparia] Are you still working on it? As [~bbeaudreault] said, maybe you can do a backport for branch-2 in the Jira of HBASE-25549.

> Provide the switch to avoid reopening regions in the alter sync command
> -----------------------------------------------------------------------
>
>                 Key: HBASE-28120
>                 URL: https://issues.apache.org/jira/browse/HBASE-28120
>             Project: HBase
>          Issue Type: Sub-task
>          Components: master, shell
>    Affects Versions: 2.0.0-alpha-1
>            Reporter: Gourab Taparia
>            Assignee: Gourab Taparia
>            Priority: Major
>             Fix For: 2.6.0
>
> As part of the sub-task, as HBase 2 supports both Async and Sync API, this
> task is to add this support/feature to HBase 2's Sync API.