Re: [PR] HBASE-28469: Integration of time-based priority caching into compaction paths [hbase]
vinayakphegde commented on code in PR #5866: URL: https://github.com/apache/hbase/pull/5866#discussion_r1593422387 ## hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java: ## @@ -555,16 +563,33 @@ private void writeInlineBlocks(boolean closing) throws IOException { private void doCacheOnWrite(long offset) { cacheConf.getBlockCache().ifPresent(cache -> { HFileBlock cacheFormatBlock = blockWriter.getBlockForCaching(cacheConf); + BlockCacheKey key = buildBlockCacheKey(offset, cacheFormatBlock); + if (!shouldCacheBlock(cache, key)) { Review Comment:
> which would go all the way to fetch the file metadata and decide if the file is hot or not.

Yes, it will look up the Configuration that corresponds to the store this file belongs to from the map. However, it won't read any file-related metadata, since we are already passing the maxTimestamp here.

> Would this be called per block?

Yes, it is called for every block.

> would it impact the performance adversely?

Yes, I think it will add some performance overhead.

> Can we somehow restrict these traversals to once per file instead of once per block.

I couldn't think of a better solution at the moment.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
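The concern in the review above is that the hot/cold check runs once per block even though its only input, the file's maxTimestamp, is fixed for the entire file being written. A minimal sketch of the "once per file" restriction being discussed, assuming the policy can be captured as a predicate over the file's max timestamp (all names here, `FileHotness`, `isHotTimestamp`, `shouldCacheBlock`, are hypothetical illustrations, not HBase APIs): memoize the verdict in the writer and reuse it for every block.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.LongPredicate;

// Hypothetical sketch: cache the hot/cold verdict once per HFile writer instead
// of re-deriving it for every block (a single writer thread is assumed).
public class FileHotness {
  private final long maxTimestampMs;          // max cell timestamp of the file being written
  private final LongPredicate isHotTimestamp; // time-based priority policy lookup
  private Boolean cachedVerdict;              // computed lazily, at most once per file

  public FileHotness(long maxTimestampMs, LongPredicate isHotTimestamp) {
    this.maxTimestampMs = maxTimestampMs;
    this.isHotTimestamp = isHotTimestamp;
  }

  /** Called per block; the policy lookup happens only on the first call. */
  public boolean shouldCacheBlock() {
    if (cachedVerdict == null) {
      cachedVerdict = isHotTimestamp.test(maxTimestampMs);
    }
    return cachedVerdict;
  }

  public static void main(String[] args) {
    long hotWindow = TimeUnit.DAYS.toMillis(7);
    long now = System.currentTimeMillis();
    long[] lookups = {0};
    FileHotness file = new FileHotness(now, ts -> {
      lookups[0]++;                 // count how often the policy is consulted
      return now - ts < hotWindow;
    });
    for (int i = 0; i < 100; i++) { // simulate caching 100 blocks of one file
      file.shouldCacheBlock();
    }
    System.out.println(lookups[0]);              // prints 1: one lookup per file, not per block
    System.out.println(file.shouldCacheBlock()); // prints true: file is inside the hot window
  }
}
```

This only helps if the verdict really is constant for the file; if the policy could change mid-write, the memoized value would go stale.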
[jira] [Commented] (HBASE-28574) Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests
[ https://issues.apache.org/jira/browse/HBASE-28574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844532#comment-17844532 ] Hudson commented on HBASE-28574: Results for branch branch-2.4 [build #732 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests > --- > > Key: HBASE-28574 > URL: https://issues.apache.org/jira/browse/HBASE-28574 > Project: HBase > Issue Type: Task > Components: dependabot, scripts, security >Reporter: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.4.18, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28556) Reduce memory copying in Rest server when serializing CellModel to Protobuf
[ https://issues.apache.org/jira/browse/HBASE-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844531#comment-17844531 ] Hudson commented on HBASE-28556: Results for branch branch-2.4 [build #732 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/732/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Reduce memory copying in Rest server when serializing CellModel to Protobuf > --- > > Key: HBASE-28556 > URL: https://issues.apache.org/jira/browse/HBASE-28556 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Labels: pull-request-available > Fix For: 2.4.18, 3.0.0, 2.7.0, 2.6.1, 2.5.9 > > > The REST server does a lot of unnecessary copying, which could be avoided at > least for protobuf encoding. > - -It uses ByteStringer to handle ByteBuffer backed Cells. 
However, it uses > the client API, so it should never encounter ByteBuffer backed cells.- > - It clones everything from the cells (sometimes multiple times) before > serializing to protobuf. > We could mimic the structure in Cell, with array, offset and length for each > field, in CellModel and use the appropriate protobuf setters to avoid the > extra copies. > There may or may not be a way to do the same for JSON and XML via jax-rs, I > don't know the frameworks well enough to tell, but if not, we could just do > the copying in the getters for them, which would not make things worse. -- This message was sent by Atlassian Jira (v8.20.10#820010)
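The (array, offset, length) idea in the issue description can be illustrated with a small, hypothetical sketch: the model keeps a slice into the shared backing array instead of cloning each field into its own byte[], and serialization wraps that slice as a view. `SlicedCellModel` and `valueView` are illustrative names, not the actual HBase REST `CellModel` or the protobuf setters the issue refers to.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hedged sketch of the zero-clone layout proposed in HBASE-28556: a field is a
// (backing array, offset, length) triple; no per-field byte[] copy is made.
public class SlicedCellModel {
  private final byte[] backing;   // shared backing array, e.g. the Cell's buffer
  private final int valueOffset;
  private final int valueLength;

  public SlicedCellModel(byte[] backing, int valueOffset, int valueLength) {
    this.backing = backing;
    this.valueOffset = valueOffset;
    this.valueLength = valueLength;
  }

  /** Zero-copy view of the value, analogous to a setter taking (array, offset, len). */
  public ByteBuffer valueView() {
    return ByteBuffer.wrap(backing, valueOffset, valueLength).slice();
  }

  public static void main(String[] args) {
    byte[] buf = "rowkey|value-bytes".getBytes(StandardCharsets.UTF_8);
    // The value occupies 11 bytes starting at offset 7 of the shared buffer.
    SlicedCellModel m = new SlicedCellModel(buf, 7, 11);
    byte[] out = new byte[m.valueView().remaining()];
    m.valueView().get(out);
    System.out.println(new String(out, StandardCharsets.UTF_8)); // prints "value-bytes"
  }
}
```

The single copy happens (if at all) only at the final serialization step, instead of once per intermediate clone.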
[jira] [Commented] (HBASE-28556) Reduce memory copying in Rest server when serializing CellModel to Protobuf
[ https://issues.apache.org/jira/browse/HBASE-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844527#comment-17844527 ] Hudson commented on HBASE-28556: Results for branch branch-2.6 [build #112 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Reduce memory copying in Rest server when serializing CellModel to Protobuf > --- > > Key: HBASE-28556 > URL: https://issues.apache.org/jira/browse/HBASE-28556 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Labels: pull-request-available > Fix For: 2.4.18, 3.0.0, 2.7.0, 2.6.1, 2.5.9 > > > The REST server does a lot of unnecessary copying, which could be avoided at > least for protobuf encoding. > - -It uses ByteStringer to handle ByteBuffer backed Cells. 
However, it uses > the client API, so it should never encounter ByteBuffer backed cells.- > - It clones everything from the cells (sometimes multiple times) before > serializing to protobuf. > We could mimic the structure in Cell, with array, offset and length for each > field, in CellModel and use the appropriate protobuf setters to avoid the > extra copies. > There may or may not be a way to do the same for JSON and XML via jax-rs, I > don't know the frameworks well enough to tell, but if not, we could just do > the copying in the getters for them, which would not make things worse. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28574) Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests
[ https://issues.apache.org/jira/browse/HBASE-28574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844528#comment-17844528 ] Hudson commented on HBASE-28574: Results for branch branch-2.6 [build #112 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/112/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests > --- > > Key: HBASE-28574 > URL: https://issues.apache.org/jira/browse/HBASE-28574 > Project: HBase > Issue Type: Task > Components: dependabot, scripts, security >Reporter: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.4.18, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28563 Closing ZooKeeper in ZKMainServer [hbase]
mwkang commented on code in PR #5869: URL: https://github.com/apache/hbase/pull/5869#discussion_r1593276932 ## hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKMainServer.java: ## @@ -41,7 +42,8 @@ public String parse(final Configuration c) { * ZooKeeper 3.4.6 broke being able to pass commands on command line. See ZOOKEEPER-1897. This * class is a hack to restore this faclity. */ - private static class HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain extends ZooKeeperMain { + private static class HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain extends ZooKeeperMain + implements Closeable { Review Comment: I'm pleased to report that the code review has been approved and the CI checks have passed successfully. If possible, would you kindly merge this code? I apologize for any inconvenience, but I don't have the necessary permissions to perform the merge myself. Alternatively, if there are any additional steps required to proceed with the merge, please advise me. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
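The diff above makes the ZooKeeperMain subclass implement `Closeable` so the ZooKeeper session can be released deterministically. A minimal, self-contained sketch of the pattern (with a hypothetical `FakeZkConnection` standing in for the real ZooKeeper handle, not the actual ZKMainServer code):

```java
import java.io.Closeable;
import java.io.IOException;

// Hedged sketch: a class that owns a connection implements Closeable so
// callers can release it with try-with-resources instead of leaking it.
public class ClosingMain implements Closeable {
  static class FakeZkConnection {
    boolean open = true;
    void close() { open = false; }
  }

  private final FakeZkConnection zk = new FakeZkConnection();

  public boolean isConnected() {
    return zk.open;
  }

  @Override
  public void close() throws IOException {
    zk.close(); // release the session explicitly rather than at JVM exit
  }

  public static void main(String[] args) throws IOException {
    ClosingMain escaped;
    try (ClosingMain m = new ClosingMain()) {
      escaped = m;
      System.out.println(m.isConnected()); // prints true while in scope
    }
    System.out.println(escaped.isConnected()); // prints false after auto-close
  }
}
```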
[jira] [Commented] (HBASE-28084) Incremental backups should be forbidden after deleting backups
[ https://issues.apache.org/jira/browse/HBASE-28084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844462#comment-17844462 ] Ray Mattingly commented on HBASE-28084: --- {quote}Imagine I have a set of backups (Full1, Incr2, Incr3), delete the last backup (Incr3), and then create a new incremental backup (Incr4). This backup history will now show: Full1, Incr2, Incr4. {quote} In this case shouldn't Incr4 recognize that the last existing backup is Incr2, so Incr4 would cover from the end of Incr2 to now if possible (WALs are still around?) or throw if this is not possible? I'm concerned that this proposal would mean that any incremental backup corruption or ungraceful failures would necessitate a new full backup. A nicer UX would be to delete the bad incremental backup, and try again > Incremental backups should be forbidden after deleting backups > -- > > Key: HBASE-28084 > URL: https://issues.apache.org/jira/browse/HBASE-28084 > Project: HBase > Issue Type: Bug > Components: backuprestore >Reporter: Dieter De Paepe >Priority: Major > > Imagine I have a set of backups (Full1, Incr2, Incr3), delete the last backup > (Incr3), and then create a new incremental backup (Incr4). > This backup history will now show: Full1, Incr2, Incr4. > However, restoring Incr4 will not contain the data that was captured in > Incr3, effectively leading to data loss. This will certainly surprise some > users. > I suggest to add some internal bookkeeping to prevent incremental backups in > case the most recent backup was deleted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
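The behavior Ray suggests above, chain the new incremental from the most recent surviving backup and fail only when the gap can no longer be bridged, can be sketched as a small guard. All types and names here (`IncrementalGuard`, `BackupImage`, `coverageStart`, the `oldestAvailableWalTs` parameter) are illustrative simplifications, not the hbase-backup API.

```java
import java.util.List;
import java.util.Optional;

// Hedged sketch of the guard discussed in HBASE-28084: before taking an
// incremental backup, find the latest *existing* backup and verify the WALs
// needed to cover the gap since then are still available; otherwise throw.
public class IncrementalGuard {
  record BackupImage(String id, long startTs, long completeTs) {}

  /**
   * Returns the timestamp the new incremental must cover from, or throws if
   * the gap back to the last surviving backup can no longer be bridged.
   */
  static long coverageStart(List<BackupImage> existing, long oldestAvailableWalTs) {
    Optional<BackupImage> latest =
        existing.stream().max((a, b) -> Long.compare(a.completeTs(), b.completeTs()));
    if (latest.isEmpty()) {
      throw new IllegalStateException("no backup exists; take a full backup first");
    }
    long from = latest.get().completeTs();
    if (from < oldestAvailableWalTs) {
      // WALs covering (from, now] were already cleaned up: data would be lost.
      throw new IllegalStateException(
          "cannot bridge gap since backup " + latest.get().id() + "; new full backup required");
    }
    return from;
  }

  public static void main(String[] args) {
    List<BackupImage> history = List.of(
        new BackupImage("Full1", 0, 100), new BackupImage("Incr2", 100, 200));
    // Incr3 was deleted; Incr4 must chain from Incr2 (ts 200) while WALs survive.
    System.out.println(coverageStart(history, 150)); // prints 200
  }
}
```

Under this scheme, deleting Incr3 does not silently lose its data: Incr4 either re-covers the window from Incr2 or the operation fails loudly.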
[jira] [Commented] (HBASE-28566) Remove ZKDataMigrator
[ https://issues.apache.org/jira/browse/HBASE-28566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844454#comment-17844454 ] Hudson commented on HBASE-28566: Results for branch branch-3 [build #200 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove ZKDataMigrator > - > > Key: HBASE-28566 > URL: https://issues.apache.org/jira/browse/HBASE-28566 > Project: HBase > Issue Type: Sub-task > Components: Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28556) Reduce memory copying in Rest server when serializing CellModel to Protobuf
[ https://issues.apache.org/jira/browse/HBASE-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844455#comment-17844455 ] Hudson commented on HBASE-28556: Results for branch branch-3 [build #200 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/200/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Reduce memory copying in Rest server when serializing CellModel to Protobuf > --- > > Key: HBASE-28556 > URL: https://issues.apache.org/jira/browse/HBASE-28556 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Labels: pull-request-available > Fix For: 2.4.18, 3.0.0, 2.7.0, 2.6.1, 2.5.9 > > > The REST server does a lot of unnecessary copying, which could be avoided at > least for protobuf encoding. > - -It uses ByteStringer to handle ByteBuffer backed Cells. However, it uses > the client API, so it should never encounter ByteBuffer backed cells.- > - It clones everything from the cells (sometimes multiple times) before > serializing to protobuf. 
> We could mimic the structure in Cell, with array, offset and length for each > field, in CellModel and use the appropriate protobuf setters to avoid the > extra copies. > There may or may not be a way to do the same for JSON and XML via jax-rs, I > don't know the frameworks well enough to tell, but if not, we could just do > the copying in the getters for them, which would not make things worse. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28562 Correct backup ancestor calculation [hbase]
rmdmattingly commented on code in PR #5868: URL: https://github.com/apache/hbase/pull/5868#discussion_r1593126783 ## hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java: ## @@ -295,51 +296,32 @@ public ArrayList getAncestors(BackupInfo backupInfo) throws IOExcep .withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames()) .withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build(); - // Only direct ancestors for a backup are required and not entire history of backup for this - // table resulting in verifying all of the previous backups which is unnecessary and backup - // paths need not be valid beyond the lifetime of a backup. - // - // RootDir is way of grouping a single backup including one full and many incremental backups + // If the image has a different rootDir, it cannot be an ancestor. if (!image.getRootDir().equals(backupInfo.getBackupRootDir())) { continue; } - // add the full backup image as an ancestor until the last incremental backup - if (backup.getType().equals(BackupType.FULL)) { -// check the backup image coverage, if previous image could be covered by the newer ones, -// then no need to add -if (!BackupManifest.canCoverImage(ancestors, image)) { - ancestors.add(image); -} + // The ancestors consist of the most recent FULL backups that cover the list of tables + // required in the new backup and all INCREMENTAL backups that came after one of those FULL + // backups. Review Comment: Agreed with all of the reasoning here -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28562 Correct backup ancestor calculation [hbase]
rmdmattingly commented on code in PR #5868: URL: https://github.com/apache/hbase/pull/5868#discussion_r1593124868 ## hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java: ## @@ -295,51 +298,31 @@ public ArrayList getAncestors(BackupInfo backupInfo) throws IOExcep .withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames()) .withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build(); - // Only direct ancestors for a backup are required and not entire history of backup for this - // table resulting in verifying all of the previous backups which is unnecessary and backup - // paths need not be valid beyond the lifetime of a backup. - // - // RootDir is way of grouping a single backup including one full and many incremental backups + // If the image has a different rootDir, it cannot be an ancestor. if (!image.getRootDir().equals(backupInfo.getBackupRootDir())) { continue; } - // add the full backup image as an ancestor until the last incremental backup - if (backup.getType().equals(BackupType.FULL)) { -// check the backup image coverage, if previous image could be covered by the newer ones, -// then no need to add -if (!BackupManifest.canCoverImage(ancestors, image)) { - ancestors.add(image); -} + // The ancestors consist of the most recent FULL backups that cover the list of tables + // required in the new backup and all INCREMENTAL backups that came after one of those FULL + // backups. + if (backup.getType().equals(BackupType.INCREMENTAL)) { Review Comment: Interesting, so the incremental backups are identical regardless of the table set input from the user? Maybe this API needs a refactor... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
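The ancestor rule quoted in the diff above, the most recent FULL backups covering the required tables plus every INCREMENTAL taken after one of them under the same root dir, can be sketched with a simplified model. `Ancestors`, `Image`, and the newest-first walk are illustrative, not the actual `BackupManager.getAncestors` code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hedged sketch of the ancestor calculation described in the PR comment.
public class Ancestors {
  enum Type { FULL, INCREMENTAL }
  record Image(String id, Type type, long ts, String rootDir, Set<String> tables) {}

  static List<Image> ancestors(List<Image> history, String rootDir, Set<String> needed) {
    List<Image> result = new ArrayList<>();
    Set<String> uncovered = new HashSet<>(needed);
    // Walk newest-first: incrementals qualify until every needed table is
    // covered by a FULL; a FULL qualifies only if it covers a still-uncovered table.
    List<Image> newestFirst = new ArrayList<>(history);
    newestFirst.sort((a, b) -> Long.compare(b.ts(), a.ts()));
    for (Image img : newestFirst) {
      if (!img.rootDir().equals(rootDir)) {
        continue; // a different rootDir can never be an ancestor
      }
      if (img.type() == Type.INCREMENTAL) {
        if (!uncovered.isEmpty()) {
          result.add(img);
        }
      } else if (img.tables().stream().anyMatch(uncovered::contains)) {
        result.add(img);
        uncovered.removeAll(img.tables());
      }
      if (uncovered.isEmpty()) {
        break; // everything older than the covering FULL(s) is irrelevant
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<Image> hist = List.of(
        new Image("Full1", Type.FULL, 1, "/b", Set.of("t1", "t2")),
        new Image("Incr2", Type.INCREMENTAL, 2, "/b", Set.of("t1", "t2")),
        new Image("Incr3", Type.INCREMENTAL, 3, "/b", Set.of("t1", "t2")));
    // New backup over {t1}: ancestors are Incr3, Incr2, then Full1.
    ancestors(hist, "/b", Set.of("t1")).forEach(i -> System.out.println(i.id()));
  }
}
```

Note how the newest-first traversal naturally excludes anything older than the covering FULL backup, matching the comment that the entire older history need not be verified.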
Re: [PR] HBASE-28575 Always printing error log when snapshot table [hbase]
Apache-HBase commented on PR #5880: URL: https://github.com/apache/hbase/pull/5880#issuecomment-2099295637 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 29s | master passed | | +1 :green_heart: | compile | 0m 55s | master passed | | +1 :green_heart: | shadedjars | 5m 50s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 32s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 21s | the patch passed | | +1 :green_heart: | compile | 1m 7s | the patch passed | | +1 :green_heart: | javac | 1m 7s | the patch passed | | +1 :green_heart: | shadedjars | 7m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 276m 7s | hbase-server in the patch passed. | | | | 304m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5880 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0df7a39a9dc6 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / c4f01ede67 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/testReport/ | | Max. 
process+thread count | 4717 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28556) Reduce memory copying in Rest server when serializing CellModel to Protobuf
[ https://issues.apache.org/jira/browse/HBASE-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1783#comment-1783 ] Hudson commented on HBASE-28556: Results for branch master [build #1068 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Reduce memory copying in Rest server when serializing CellModel to Protobuf > --- > > Key: HBASE-28556 > URL: https://issues.apache.org/jira/browse/HBASE-28556 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Labels: pull-request-available > Fix For: 2.4.18, 3.0.0, 2.7.0, 2.6.1, 2.5.9 > > > The REST server does a lot of unnecessary copying, which could be avoided at > least for protobuf encoding. > - -It uses ByteStringer to handle ByteBuffer backed Cells. However, it uses > the client API, so it should never encounter ByteBuffer backed cells.- > - It clones everything from the cells (sometimes multiple times) before > serializing to protobuf. > We could mimic the structure in Cell, with array, offset and length for each > field, in CellModel and use the appropriate protobuf setters to avoid the > extra copies. 
> There may or may not be a way to do the same for JSON and XML via jax-rs, I > don't know the frameworks well enough to tell, but if not, we could just do > the copying in the getters for them, which would not make things worse. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28566) Remove ZKDataMigrator
[ https://issues.apache.org/jira/browse/HBASE-28566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1782#comment-1782 ] Hudson commented on HBASE-28566: Results for branch master [build #1068 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1068/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Remove ZKDataMigrator > - > > Key: HBASE-28566 > URL: https://issues.apache.org/jira/browse/HBASE-28566 > Project: HBase > Issue Type: Sub-task > Components: Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28556) Reduce memory copying in Rest server when serializing CellModel to Protobuf
[ https://issues.apache.org/jira/browse/HBASE-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844436#comment-17844436 ] Hudson commented on HBASE-28556: Results for branch branch-2.5 [build #524 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/524/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/524/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/524/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/524/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/524/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Reduce memory copying in Rest server when serializing CellModel to Protobuf > --- > > Key: HBASE-28556 > URL: https://issues.apache.org/jira/browse/HBASE-28556 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Labels: pull-request-available > Fix For: 2.4.18, 3.0.0, 2.7.0, 2.6.1, 2.5.9 > > > The REST server does a lot of unneccessary coping, which could be avoided at > least for protobuf encoding. > - -It uses ByteStringer to handle ByteBuffer backed Cells. 
However, it uses > the client API, so it should never encounter ByteBuffer backed cells.- > - It clones everything from the cells (sometimes multiple times) before > serializing to protobuf. > We could mimic the structure in Cell, with array, offset and length for each > field, in CellModel and use the appropriate protobuf setters to avoid the > extra copies. > There may or may not be a way to do the same for JSON and XML via jax-rs, I > don't know the frameworks well enough to tell, but if not, we could just do > the copying in the getters for them, which would not make things worse. -- This message was sent by Atlassian Jira (v8.20.10#820010)
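The array/offset/length idea described in the issue can be sketched as follows. This is only an illustration of the technique, not the actual CellModel API; `FieldSlice` and its members are hypothetical names:

```java
import java.nio.charset.StandardCharsets;

public class Main {
    // Instead of cloning each cell field into its own byte[], keep
    // (array, offset, length) triples over the shared backing array and hand
    // those bounds straight to the protobuf setters, avoiding the extra copy.
    public static final class FieldSlice {
        final byte[] array;
        final int offset;
        final int length;

        FieldSlice(byte[] array, int offset, int length) {
            this.array = array;
            this.offset = offset;
            this.length = length;
        }

        // Materialize only when a copy is unavoidable (e.g. JSON/XML getters).
        String asString() {
            return new String(array, offset, length, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        byte[] backing = "rowkey:value".getBytes(StandardCharsets.UTF_8);
        FieldSlice row = new FieldSlice(backing, 0, 6);   // "rowkey"
        FieldSlice value = new FieldSlice(backing, 7, 5); // "value"
        // Both slices view the same backing array; no field was cloned.
        if (!row.asString().equals("rowkey")) throw new AssertionError();
        if (!value.asString().equals("value")) throw new AssertionError();
        System.out.println("ok");
    }
}
```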
Re: [PR] HBASE-28575 Always printing error log when snapshot table [hbase]
Apache-HBase commented on PR #5880: URL: https://github.com/apache/hbase/pull/5880#issuecomment-2099212082 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 46s | master passed | | +1 :green_heart: | compile | 0m 43s | master passed | | +1 :green_heart: | shadedjars | 5m 10s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 25s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 27s | the patch passed | | +1 :green_heart: | compile | 0m 43s | the patch passed | | +1 :green_heart: | javac | 0m 43s | the patch passed | | +1 :green_heart: | shadedjars | 5m 7s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 233m 13s | hbase-server in the patch passed. | | | | 255m 47s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5880 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c79c3c4ad5e4 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / c4f01ede67 | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/testReport/ | | Max. 
process+thread count | 5098 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28575 Always printing error log when snapshot table [hbase]
Apache-HBase commented on PR #5880: URL: https://github.com/apache/hbase/pull/5880#issuecomment-2099166498 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 54s | master passed | | +1 :green_heart: | compile | 0m 49s | master passed | | +1 :green_heart: | shadedjars | 5m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 43s | the patch passed | | +1 :green_heart: | compile | 0m 51s | the patch passed | | +1 :green_heart: | javac | 0m 51s | the patch passed | | +1 :green_heart: | shadedjars | 5m 18s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 205m 33s | hbase-server in the patch passed. | | | | 228m 58s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5880 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 37c47f3f1be4 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / c4f01ede67 | | Default Java | Eclipse Adoptium-17.0.10+7 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/testReport/ | | Max. 
process+thread count | 5040 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28570 Remove deprecated fields in HBTU [hbase]
Apache-HBase commented on PR #5877: URL: https://github.com/apache/hbase/pull/5877#issuecomment-2098895814 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 19s | master passed | | +1 :green_heart: | compile | 1m 12s | master passed | | +1 :green_heart: | shadedjars | 6m 35s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 48s | the patch passed | | +1 :green_heart: | compile | 0m 58s | the patch passed | | +1 :green_heart: | javac | 0m 58s | the patch passed | | +1 :green_heart: | shadedjars | 6m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 269m 55s | hbase-server in the patch failed. | | +1 :green_heart: | unit | 3m 30s | hbase-testing-util in the patch passed. 
| | | | 301m 24s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5877 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 1b42058fe176 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 156e430dc5 | | Default Java | Temurin-1.8.0_352-b08 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/testReport/ | | Max. process+thread count | 4410 (vs. ulimit of 3) | | modules | C: hbase-server hbase-testing-util U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28539 Fix merging of incremental backups when the backup filesystem is not the same as the one underpinning HBase itself. [hbase]
rmdmattingly commented on code in PR #5867: URL: https://github.com/apache/hbase/pull/5867#discussion_r1592790557 ## hbase-backup/src/test/java/org/apache/hadoop/hbase/backup/TestBackupMerge.java: ## @@ -124,4 +128,44 @@ public void TestIncBackupMergeRestore() throws Exception { admin.close(); conn.close(); } + + @Test + public void TestIncBackupMergeRestoreSeparateFs() throws Exception { + +// prepare BACKUP_ROOT_DIR on a different filesystem from HBase +File tempDir = new File(FileUtils.getTempDirectory(), UUID.randomUUID().toString()); +tempDir.deleteOnExit(); +BACKUP_ROOT_DIR = tempDir.toURI().toString(); +System.out.println(BACKUP_ROOT_DIR); Review Comment: It's not totally unheard of to print logs like this, especially in our test files, but maybe we should just use the logger? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
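A minimal sketch of the suggested change. It uses java.util.logging only to stay self-contained; the HBase test class would use its existing slf4j `LOG` field, and `backupRootDir` here is an illustrative value:

```java
import java.util.logging.Logger;

public class Main {
    // Stand-in for the test class's logger; in HBase this would be
    // LoggerFactory.getLogger(TestBackupMerge.class) from slf4j.
    private static final Logger LOG = Logger.getLogger("TestBackupMerge");

    public static void main(String[] args) {
        String backupRootDir = "file:/tmp/backup-root"; // illustrative value
        // Instead of System.out.println(BACKUP_ROOT_DIR):
        LOG.info("BACKUP_ROOT_DIR=" + backupRootDir);
    }
}
```

Routing through the logger keeps the output under the test framework's log configuration instead of writing unconditionally to stdout.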
Re: [PR] HBASE-28570 Remove deprecated fields in HBTU [hbase]
Apache-HBase commented on PR #5877: URL: https://github.com/apache/hbase/pull/5877#issuecomment-2098854132 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 37s | master passed | | +1 :green_heart: | compile | 1m 28s | master passed | | +1 :green_heart: | shadedjars | 6m 26s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 34s | the patch passed | | +1 :green_heart: | compile | 1m 17s | the patch passed | | +1 :green_heart: | javac | 1m 17s | the patch passed | | +1 :green_heart: | shadedjars | 6m 38s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 46s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 244m 43s | hbase-server in the patch failed. | | +1 :green_heart: | unit | 2m 26s | hbase-testing-util in the patch passed. 
| | | | 277m 7s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5877 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 57e8641c9901 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 156e430dc5 | | Default Java | Eclipse Adoptium-17.0.10+7 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/artifact/yetus-jdk17-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/testReport/ | | Max. process+thread count | 4490 (vs. ulimit of 3) | | modules | C: hbase-server hbase-testing-util U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28575 Always printing error log when snapshot table [hbase]
Apache-HBase commented on PR #5880: URL: https://github.com/apache/hbase/pull/5880#issuecomment-2098845604 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 30s | master passed | | +1 :green_heart: | compile | 3m 52s | master passed | | +1 :green_heart: | checkstyle | 0m 40s | master passed | | +1 :green_heart: | spotless | 0m 46s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 33s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 49s | the patch passed | | +1 :green_heart: | compile | 2m 42s | the patch passed | | +1 :green_heart: | javac | 2m 42s | the patch passed | | +1 :green_heart: | checkstyle | 0m 34s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 5m 11s | Patch does not cause any errors with Hadoop 3.3.6. | | +1 :green_heart: | spotless | 0m 41s | patch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 33s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. 
| | | | 32m 22s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5880 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux 4f067aac42a3 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / c4f01ede67 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Max. process+thread count | 80 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5880/1/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28570 Remove deprecated fields in HBTU [hbase]
Apache-HBase commented on PR #5877: URL: https://github.com/apache/hbase/pull/5877#issuecomment-2098811999 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 57s | master passed | | +1 :green_heart: | compile | 0m 58s | master passed | | +1 :green_heart: | shadedjars | 5m 34s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 45s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed | | +1 :green_heart: | javac | 1m 1s | the patch passed | | +1 :green_heart: | shadedjars | 6m 26s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 225m 42s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 2m 38s | hbase-testing-util in the patch passed. 
| | | | 255m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5877 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 805bae13c9d8 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 156e430dc5 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/testReport/ | | Max. process+thread count | 5370 (vs. ulimit of 3) | | modules | C: hbase-server hbase-testing-util U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28575 Always printing error log when snapshot table [hbase]
guluo2016 commented on code in PR #5880: URL: https://github.com/apache/hbase/pull/5880#discussion_r1592695817 ## hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java: ## @@ -252,11 +252,11 @@ public void process() { try { // if the working dir is still present, the snapshot has failed. it is present we delete // it. -if (!workingDirFs.delete(workingDir, true)) { - LOG.error("Couldn't delete snapshot working directory:" + workingDir); +if (workingDirFs.exists(workingDir) && !workingDirFs.delete(workingDir, true)) { + LOG.error("Couldn't delete snapshot working directory: {}", workingDir); } } catch (IOException e) { -LOG.error("Couldn't delete snapshot working directory:" + workingDir); +LOG.error("Couldn't get or delete snapshot working directory: {}", workingDir, e); Review Comment: Print the exception stack trace if HBase throws an exception here. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-28575) Always printing error log when snapshot table
[ https://issues.apache.org/jira/browse/HBASE-28575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28575: --- Labels: pull-request-available (was: ) > Always printing error log when snapshot table > -- > > Key: HBASE-28575 > URL: https://issues.apache.org/jira/browse/HBASE-28575 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 2.4.13 > Environment: hbase2.4.13 > Centos7 >Reporter: guluo >Assignee: guluo >Priority: Minor > Labels: pull-request-available > > Reproduction. > 1. > Disable the snapshot procedure feature, if your HBase supports it, by setting > hbase.snapshot.procedure.enabled to false. > 2. > Execute a snapshot against a table (this step works fine): > snapshot 't01', 'sn0001' > 3. > HBase outputs an error log, as follows. > 2024-05-07T23:16:37,175 ERROR > [MASTER_SNAPSHOT_OPERATIONS-master/archlinux:16000-0] > snapshot.TakeSnapshotHandler: Couldn't delete snapshot working > directory:file:/opt/hbase/hbase-4.0.0-alpha-1-SNAPSHOT/tmp/hbase/.hbase-snapshot/.tmp/sn001 > > The Reason. > HBase cleans up the snapshot's tmp directory after the snapshot completes. > The tmp directory is already gone if the snapshot succeeded, and calling > `FileSystem.delete()` on a path that does not exist returns false, so HBase > outputs an error log. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28575) Always printing error log when snapshot table
guluo created HBASE-28575: - Summary: Always printing error log when snapshot table Key: HBASE-28575 URL: https://issues.apache.org/jira/browse/HBASE-28575 Project: HBase Issue Type: Bug Components: snapshots Affects Versions: 2.4.13 Environment: hbase2.4.13 Centos7 Reporter: guluo Assignee: guluo Reproduction. 1. Disable the snapshot procedure feature, if your HBase supports it, by setting hbase.snapshot.procedure.enabled to false. 2. Execute a snapshot against a table (this step works fine): snapshot 't01', 'sn0001' 3. HBase outputs an error log, as follows. 2024-05-07T23:16:37,175 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/archlinux:16000-0] snapshot.TakeSnapshotHandler: Couldn't delete snapshot working directory:file:/opt/hbase/hbase-4.0.0-alpha-1-SNAPSHOT/tmp/hbase/.hbase-snapshot/.tmp/sn001 The Reason. HBase cleans up the snapshot's tmp directory after the snapshot completes. The tmp directory is already gone if the snapshot succeeded, and calling `FileSystem.delete()` on a path that does not exist returns false, so HBase outputs an error log. -- This message was sent by Atlassian Jira (v8.20.10#820010)
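The return-false-on-missing-path behavior, and the exists() guard used in the fix, can be demonstrated with a minimal sketch. It uses `java.io.File` as a stand-in for the Hadoop `FileSystem` API, whose `delete()` likewise returns false when the path does not exist; `cleanupWorkingDir` is a hypothetical helper:

```java
import java.io.File;
import java.io.IOException;

public class Main {
    // Mirrors the TakeSnapshotHandler fix: guard the delete with an existence
    // check so a missing working dir is not reported as a failed delete.
    static boolean cleanupWorkingDir(File workingDir) {
        // delete() returns false both on a real failure AND when the path is
        // simply absent; checking exists() first distinguishes the two cases.
        return !workingDir.exists() || workingDir.delete();
    }

    public static void main(String[] args) throws IOException {
        File workingDir = File.createTempFile("snapshot-working", null);
        workingDir.delete(); // ensure the path no longer exists

        // Plain delete() on a missing path returns false -> spurious error log
        if (workingDir.delete()) throw new AssertionError("expected false for missing path");

        // The guarded cleanup treats a missing dir as already cleaned up
        if (!cleanupWorkingDir(workingDir)) throw new AssertionError("guarded cleanup should succeed");
        System.out.println("ok");
    }
}
```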
[jira] [Assigned] (HBASE-28522) UNASSIGN proc indefinitely stuck on dead rs
[ https://issues.apache.org/jira/browse/HBASE-28522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prathyusha reassigned HBASE-28522: -- Assignee: Prathyusha > UNASSIGN proc indefinitely stuck on dead rs > --- > > Key: HBASE-28522 > URL: https://issues.apache.org/jira/browse/HBASE-28522 > Project: HBase > Issue Type: Improvement > Components: proc-v2 >Reporter: Prathyusha >Assignee: Prathyusha >Priority: Minor > > One scenario we noticed in production - > we had DisableTableProc and SCP triggered at almost the same time > 2024-03-16 17:59:23,014 INFO [PEWorker-11] procedure.DisableTableProcedure - > Set to state=DISABLING > 2024-03-16 17:59:15,243 INFO [PEWorker-26] procedure.ServerCrashProcedure - > Start pid=21592440, state=RUNNABLE:SERVER_CRASH_START, locked=true; > ServerCrashProcedure > , splitWal=true, meta=false > DisableTableProc creates UNASSIGN procs, and at this time the ASSIGNs of the SCP are > not completed > {{2024-03-16 17:59:23,003 DEBUG [PEWorker-40] procedure2.ProcedureExecutor - > LOCK_EVENT_WAIT pid=21594220, ppid=21592440, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; > TransitRegionStateProcedure table=, region=, ASSIGN}} > The UNASSIGN created by DisableTableProc is stuck on the dead regionserver, and we > had to manually bypass the UNASSIGN of DisableTableProc and then do the ASSIGN. > If we can break the retry loop for the UNASSIGN procedure when there is an SCP for > that server, manual intervention would not be needed; at the very least, the > DisableTableProc could go to a rollback state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28571) Remove deprecated methods map reduce utils
[ https://issues.apache.org/jira/browse/HBASE-28571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28571: -- Fix Version/s: 3.0.0-beta-2 Hadoop Flags: Reviewed Release Note: Removed these methods from TableMapReduceUtil initCredentialsForCluster(Job, String) addDependencyJars(Configuration, Class...) Resolution: Fixed Status: Resolved (was: Patch Available) > Remove deprecated methods map reduce utils > -- > > Key: HBASE-28571 > URL: https://issues.apache.org/jira/browse/HBASE-28571 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28571) Remove deprecated methods map reduce utils
[ https://issues.apache.org/jira/browse/HBASE-28571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844313#comment-17844313 ] Duo Zhang commented on HBASE-28571: --- Pushed to master and branch-3. Thanks [~sunxin] for reviewing! > Remove deprecated methods map reduce utils > -- > > Key: HBASE-28571 > URL: https://issues.apache.org/jira/browse/HBASE-28571 > Project: HBase > Issue Type: Sub-task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28571 Remove deprecated methods map reduce utils [hbase]
Apache9 merged PR #5878: URL: https://github.com/apache/hbase/pull/5878 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-28574) Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests
[ https://issues.apache.org/jira/browse/HBASE-28574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28574. --- Fix Version/s: 2.4.18 2.7.0 3.0.0-beta-2 2.6.1 2.5.9 Hadoop Flags: Reviewed Resolution: Fixed Pushed to all active branches. > Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests > --- > > Key: HBASE-28574 > URL: https://issues.apache.org/jira/browse/HBASE-28574 > Project: HBase > Issue Type: Task > Components: dependabot, scripts, security >Reporter: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.4.18, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28574) Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests
[ https://issues.apache.org/jira/browse/HBASE-28574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28574: --- Labels: pull-request-available (was: ) > Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests > --- > > Key: HBASE-28574 > URL: https://issues.apache.org/jira/browse/HBASE-28574 > Project: HBase > Issue Type: Task > Components: dependabot, scripts, security >Reporter: Duo Zhang >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28574) Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests
Duo Zhang created HBASE-28574: - Summary: Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests Key: HBASE-28574 URL: https://issues.apache.org/jira/browse/HBASE-28574 Project: HBase Issue Type: Task Components: dependabot, scripts, security Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28574 Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests [hbase]
Apache9 merged PR #5879: URL: https://github.com/apache/hbase/pull/5879 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-28459) HFileOutputFormat2 ClassCastException with s3 magic committer
[ https://issues.apache.org/jira/browse/HBASE-28459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28459. --- Hadoop Flags: Reviewed Resolution: Fixed Pushed to all active branches. Thanks [~ksravista] for contributing! > HFileOutputFormat2 ClassCastException with s3 magic committer > - > > Key: HBASE-28459 > URL: https://issues.apache.org/jira/browse/HBASE-28459 > Project: HBase > Issue Type: Bug >Reporter: Bryan Beaudreault >Assignee: Sravi Kommineni >Priority: Major > Labels: pull-request-available > Fix For: 2.4.18, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > In hadoop3 there's the s3 magic committer which can speed up s3 writes > dramatically. In HFileOutputFormat2.createRecordWriter we cast the passed in > committer as a FileOutputCommitter. This causes a class cast exception when > the s3 magic committer is enabled: > > {code:java} > Error: java.lang.ClassCastException: class > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter cannot be cast to > class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter {code} > > We can cast to PathOutputCommitter instead, but it's only available in > hadoop3+. So we will need to use reflection to work around this in branch-2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
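The branch-2 reflection workaround mentioned above might look roughly like this. All classes below are illustrative stand-ins (the real types are Hadoop's FileOutputCommitter/PathOutputCommitter hierarchy, and the real getWorkPath() returns a Path, not a String):

```java
import java.lang.reflect.Method;

public class Main {
    // Stand-ins for the committer hierarchy; hypothetical, not Hadoop's classes.
    public static class PathOutputCommitter {
        public String getWorkPath() { return "/magic/work/path"; }
    }
    public static class MagicCommitter extends PathOutputCommitter {}

    // Branch-2 style workaround: resolve getWorkPath() via reflection so the
    // code compiles without the hadoop3-only PathOutputCommitter type on the
    // classpath, yet still works when a hadoop3 committer arrives at runtime.
    static String workPathViaReflection(Object committer) throws Exception {
        Method m = committer.getClass().getMethod("getWorkPath");
        return (String) m.invoke(committer);
    }

    public static void main(String[] args) throws Exception {
        Object committer = new MagicCommitter();
        // No compile-time cast to the committer type is needed, so there is
        // no ClassCastException regardless of the concrete committer class.
        String workPath = workPathViaReflection(committer);
        if (!"/magic/work/path".equals(workPath)) throw new AssertionError(workPath);
        System.out.println("ok");
    }
}
```

On hadoop3-only branches the cleaner fix is a plain cast to PathOutputCommitter, the common supertype of both committers.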
[jira] [Commented] (HBASE-28459) HFileOutputFormat2 ClassCastException with s3 magic committer
[ https://issues.apache.org/jira/browse/HBASE-28459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844288#comment-17844288 ] Duo Zhang commented on HBASE-28459: --- Good. Thanks. Let me resolve. > HFileOutputFormat2 ClassCastException with s3 magic committer > - > > Key: HBASE-28459 > URL: https://issues.apache.org/jira/browse/HBASE-28459 > Project: HBase > Issue Type: Bug >Reporter: Bryan Beaudreault >Assignee: Sravi Kommineni >Priority: Major > Labels: pull-request-available > Fix For: 2.4.18, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > In hadoop3 there's the s3 magic committer which can speed up s3 writes > dramatically. In HFileOutputFormat2.createRecordWriter we cast the passed in > committer as a FileOutputCommitter. This causes a class cast exception when > the s3 magic committer is enabled: > > {code:java} > Error: java.lang.ClassCastException: class > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter cannot be cast to > class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter {code} > > We can cast to PathOutputCommitter instead, but its only available in > hadoop3+. So we will need to use reflection to work around this in branch-2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28459 HFileOutputFormat2 ClassCastException with s3 magic committer [hbase]
ksravista commented on PR #5851: URL: https://github.com/apache/hbase/pull/5851#issuecomment-2098366151 Yes, my username is `ksravista` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28459) HFileOutputFormat2 ClassCastException with s3 magic committer
[ https://issues.apache.org/jira/browse/HBASE-28459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844284#comment-17844284 ] Bryan Beaudreault commented on HBASE-28459: --- [~zhangduo], the user name is [~ksravista]. Their user request had been missed. I just approved it, added them as a contributor, and assigned this jira to them.
[jira] [Assigned] (HBASE-28459) HFileOutputFormat2 ClassCastException with s3 magic committer
[ https://issues.apache.org/jira/browse/HBASE-28459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Beaudreault reassigned HBASE-28459: - Assignee: Sravi Kommineni
[jira] [Assigned] (HBASE-28293) Add metric for GetClusterStatus request count.
[ https://issues.apache.org/jira/browse/HBASE-28293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ranganath Govardhanagiri reassigned HBASE-28293: Assignee: (was: Ranganath Govardhanagiri) > Add metric for GetClusterStatus request count. > -- > > Key: HBASE-28293 > URL: https://issues.apache.org/jira/browse/HBASE-28293 > Project: HBase > Issue Type: Bug > Reporter: Rushabh Shah > Priority: Major > > We have been bitten multiple times by GetClusterStatus requests overwhelming > HMaster's memory usage. It would be good to add a metric for the total > GetClusterStatus request count. > In almost all of our production incidents involving GetClusterStatus requests, > HMasters were running out of memory because many clients call this RPC in > parallel and the response size is very big. > In hbase2 we have > [ClusterMetrics.Option|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetrics.java#L164-L224], > which can reduce the size of the response. > It would also be nice to add a metric to indicate whether the response size of > GetClusterStatus is greater than some threshold (like 5MB).
Re: [PR] HBASE-28570 Remove deprecated fields in HBTU [hbase]
Apache-HBase commented on PR #5877: URL: https://github.com/apache/hbase/pull/5877#issuecomment-2098281655

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 25s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 0s | master passed |
| +1 :green_heart: | compile | 3m 8s | master passed |
| +1 :green_heart: | checkstyle | 0m 45s | master passed |
| +1 :green_heart: | spotless | 0m 44s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 53s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 46s | the patch passed |
| +1 :green_heart: | compile | 3m 0s | the patch passed |
| +1 :green_heart: | javac | 3m 0s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 43s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 5m 15s | Patch does not cause any errors with Hadoop 3.3.6. |
| +1 :green_heart: | spotless | 0m 42s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 2m 11s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | | 31m 59s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5877 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 868ce3f661d9 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 156e430dc5 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 82 (vs. ulimit of 3) |
| modules | C: hbase-server hbase-testing-util U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5877/2/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Assigned] (HBASE-28564) Refactor direct interactions of Reference file creations to SFT interface
[ https://issues.apache.org/jira/browse/HBASE-28564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prathyusha reassigned HBASE-28564: -- Assignee: Prathyusha > Refactor direct interactions of Reference file creations to SFT interface > - > > Key: HBASE-28564 > URL: https://issues.apache.org/jira/browse/HBASE-28564 > Project: HBase > Issue Type: Improvement > Reporter: Prathyusha > Assignee: Prathyusha > Priority: Major
[jira] [Commented] (HBASE-28573) Update compatibility report generator to ignore o.a.h.hbase.shaded packages
[ https://issues.apache.org/jira/browse/HBASE-28573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844249#comment-17844249 ] Duo Zhang commented on HBASE-28573: --- Big +1 > Update compatibility report generator to ignore o.a.h.hbase.shaded packages > --- > > Key: HBASE-28573 > URL: https://issues.apache.org/jira/browse/HBASE-28573 > Project: HBase > Issue Type: Task > Components: community > Reporter: Nick Dimiduk > Priority: Major > > This is a small change that will make reviewing release candidates a little > easier. Right now the compatibility report includes classes that we shade, so > when we upgrade shaded 3rd party dependencies, they show up in this report > as an incompatible change. Changes to these classes do not affect users, so > there's no reason to consider them wrt compatibility. We should update the > reporting tool to exclude this package. > For example, > https://dist.apache.org/repos/dist/dev/hbase/2.6.0RC4/api_compare_2.5.0_to_2.6.0RC4.html#Binary_Removed
[jira] [Created] (HBASE-28573) Update compatibility report generator to ignore o.a.h.hbase.shaded packages
Nick Dimiduk created HBASE-28573: Summary: Update compatibility report generator to ignore o.a.h.hbase.shaded packages Key: HBASE-28573 URL: https://issues.apache.org/jira/browse/HBASE-28573 Project: HBase Issue Type: Task Components: community Reporter: Nick Dimiduk
[jira] [Updated] (HBASE-28561) Add separate fields for column family and qualifier in REST message formats
[ https://issues.apache.org/jira/browse/HBASE-28561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28561: Summary: Add separate fields for column family and qualifier in REST message formats (was: Add separate fields for column family and qualifier in REST message format) > Add separate fields for column family and qualifier in REST message formats > --- > > Key: HBASE-28561 > URL: https://issues.apache.org/jira/browse/HBASE-28561 > Project: HBase > Issue Type: Improvement > Components: REST > Reporter: Istvan Toth > Assignee: Istvan Toth > Priority: Major > > The current format uses the archaic column field, which requires extra > processing and copying to encode/decode the CF and CQ at both the server and > client side. > We need to: > - Add a version field to the requests, to be enabled by clients that support > the new format > - Add the new fields to the JSON, XML and protobuf formats, and logic to use > them. > This should be doable in a backwards-compatible manner, with the server > falling back to the old format if it receives an unversioned request.
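The backwards-compatible fallback described above could take roughly the following shape. This is a hypothetical sketch only: `decode`, the version numbering, and the parameter layout are illustrative, not the actual HBase REST model API.

```java
public class ColumnField {

  // Hypothetical decode sketch: a versioned request carries family and
  // qualifier separately; an unversioned (old-format) request carries the
  // combined "family:qualifier" column and the server falls back to splitting
  // it, preserving compatibility with old clients.
  public static String[] decode(Integer version, String column, String family, String qualifier) {
    if (version != null && version >= 2) {
      return new String[] { family, qualifier }; // new format: no split, no extra copy
    }
    int sep = column.indexOf(':'); // old format: split "cf:cq"
    return new String[] { column.substring(0, sep), column.substring(sep + 1) };
  }

  public static void main(String[] args) {
    String[] oldFormat = decode(null, "cf:cq", null, null);
    String[] newFormat = decode(2, null, "cf", "cq");
    System.out.println(oldFormat[0] + "/" + oldFormat[1] + " " + newFormat[0] + "/" + newFormat[1]);
  }
}
```

Keying the fallback on an explicit version field (rather than guessing from which fields are present) keeps the server's behavior unambiguous for old clients.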
Re: [PR] HBASE-28469: Integration of time-based priority caching into compaction paths [hbase]
jhungund commented on code in PR #5866: URL: https://github.com/apache/hbase/pull/5866#discussion_r1591912584

## hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java:
@@ -555,16 +563,33 @@ private void writeInlineBlocks(boolean closing) throws IOException {
   private void doCacheOnWrite(long offset) {
     cacheConf.getBlockCache().ifPresent(cache -> {
       HFileBlock cacheFormatBlock = blockWriter.getBlockForCaching(cacheConf);
+      BlockCacheKey key = buildBlockCacheKey(offset, cacheFormatBlock);
+      if (!shouldCacheBlock(cache, key)) {

Review Comment: I see that shouldCacheBlock internally makes a call to isHotData(key), which would go all the way to fetch the file metadata and decide if the file is hot or not. Would this be called per block? If so, would it impact the performance adversely? Can we somehow restrict these traversals to once per file instead of once per block?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
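One way to get the "once per file instead of once per block" behavior the reviewer asks about is to memoize the hot/cold verdict in the writer, since (per the reply in the thread) the decision depends only on per-file state such as the maxTimestamp. A minimal sketch, with hypothetical names rather than the actual HFileWriterImpl members:

```java
import java.util.function.Supplier;

public class PerFileHotness {

  // Hypothetical sketch, not actual HFileWriterImpl code: cache the hot/cold
  // verdict once per writer (i.e. once per file) instead of re-resolving the
  // store configuration for every block written.
  public static class WriterSketch {
    private final Supplier<Boolean> hotDataCheck; // e.g. () -> isHotData(maxTimestamp)
    private Boolean hotData; // null until the first block forces the lookup

    public WriterSketch(Supplier<Boolean> hotDataCheck) {
      this.hotDataCheck = hotDataCheck;
    }

    public boolean shouldCacheBlock() {
      if (hotData == null) {
        hotData = hotDataCheck.get(); // only the first block pays the lookup
      }
      return hotData; // later blocks reuse the per-file verdict
    }
  }

  public static void main(String[] args) {
    int[] lookups = {0};
    WriterSketch writer = new WriterSketch(() -> {
      lookups[0]++;
      return true;
    });
    for (int block = 0; block < 5; block++) {
      writer.shouldCacheBlock(); // five blocks...
    }
    System.out.println(lookups[0]); // ...one configuration lookup
  }
}
```

This memoization is only valid because the verdict is per-file; if hotness depended on per-block data (e.g. the individual key), caching the first answer would be wrong.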
[jira] [Resolved] (HBASE-28556) Reduce memory copying in Rest server when serializing CellModel to Protobuf
[ https://issues.apache.org/jira/browse/HBASE-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved HBASE-28556. - Fix Version/s: 2.4.18 3.0.0 2.7.0 2.6.1 2.5.9 Resolution: Fixed Committed to all active branches. Thanks for the review [~zhangduo]. > Reduce memory copying in Rest server when serializing CellModel to Protobuf > --- > > Key: HBASE-28556 > URL: https://issues.apache.org/jira/browse/HBASE-28556 > Project: HBase > Issue Type: Improvement > Components: REST > Reporter: Istvan Toth > Assignee: Istvan Toth > Priority: Minor > Labels: pull-request-available > Fix For: 2.4.18, 3.0.0, 2.7.0, 2.6.1, 2.5.9 > > > The REST server does a lot of unnecessary copying, which could be avoided at > least for protobuf encoding. > - -It uses ByteStringer to handle ByteBuffer-backed Cells. However, it uses > the client API, so it should never encounter ByteBuffer-backed cells.- > - It clones everything from the cells (sometimes multiple times) before > serializing to protobuf. > We could mimic the structure in Cell, with array, offset and length for each > field, in CellModel and use the appropriate protobuf setters to avoid the > extra copies. > There may or may not be a way to do the same for JSON and XML via jax-rs. I > don't know the frameworks well enough to tell, but if not, we could just do > the copying in the getters for them, which would not make things worse.
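The (array, offset, length) idea from the description can be illustrated with a small sketch. `ValueSlice` and its members are hypothetical stand-ins, not the real CellModel fields:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ValueSlice {

  // Hypothetical stand-in for the CellModel change: address a field as
  // (array, offset, length), the way Cell does, instead of cloning it.
  public final byte[] array;
  public final int offset;
  public final int length;

  public ValueSlice(byte[] array, int offset, int length) {
    this.array = array;
    this.offset = offset;
    this.length = length;
  }

  // A protobuf path could hand (array, offset, length) to a ByteString-style
  // setter directly; the copy below is only for consumers (e.g. JSON/XML
  // getters) that genuinely need a standalone byte[].
  public byte[] copy() {
    return Arrays.copyOfRange(array, offset, offset + length);
  }

  public static void main(String[] args) {
    byte[] backing = "family:qualifier=value".getBytes(StandardCharsets.UTF_8);
    ValueSlice value = new ValueSlice(backing, 17, 5); // points at "value"
    System.out.println(new String(value.copy(), StandardCharsets.UTF_8)); // value
  }
}
```

The point of the triplet is that no intermediate byte[] is materialized on the protobuf path; the copy is deferred to the consumers that actually need one.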
[jira] [Commented] (HBASE-28459) HFileOutputFormat2 ClassCastException with s3 magic committer
[ https://issues.apache.org/jira/browse/HBASE-28459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844154#comment-17844154 ] Hudson commented on HBASE-28459: Results for branch branch-2 [build #1049 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1049/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1049/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1049/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1049/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1049/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color}
[jira] [Commented] (HBASE-28567) Race condition causes MetaRegionLocationCache to never set watcher to populate meta location
[ https://issues.apache.org/jira/browse/HBASE-28567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844153#comment-17844153 ] Hudson commented on HBASE-28567: Results for branch branch-2 [build #1049 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1049/]: (/) *{color:green}+1 overall{color}* (same check details as the build #1049 report above). > Race condition causes MetaRegionLocationCache to never set watcher to populate meta location > > > Key: HBASE-28567 > URL: https://issues.apache.org/jira/browse/HBASE-28567 > Project: HBase > Issue Type: Bug > Affects Versions: 3.0.0, 2.5.8 > Reporter: Vincent Poon > Assignee: Vincent Poon > Priority: Major > Labels: pull-request-available > Fix For: 2.4.18, 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > {{ZKWatcher#getMetaReplicaNodesAndWatchChildren()}} attempts to set a watch > on the base /hbase znode children using > {{ZKUtil.listChildrenAndWatchForNewChildren()}}, but if the node does not > exist, no watch gets set.
> We've seen this in the test container Trino uses over at > [trino/21569|https://github.com/trinodb/trino/pull/21569], where ZK, master, > and RS are all run in the same container. > The fix is to throw if the node does not exist so that > {{MetaRegionLocationCache}} can retry until the node gets created.
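The fix described above (throw when the znode is missing so the caller retries) follows a generic retry-until-present shape, sketched below with hypothetical names in place of the actual ZKWatcher/MetaRegionLocationCache API:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class RetryUntilPresent {

  // Generic shape of the fix: the lookup reports "not there yet" (here via
  // Optional.empty) instead of silently succeeding without a watch, and the
  // caller retries until the node appears. Names are illustrative, not the
  // actual ZKWatcher/MetaRegionLocationCache API.
  public static <T> T retryUntilPresent(Supplier<Optional<T>> lookup, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      Optional<T> result = lookup.get();
      if (result.isPresent()) {
        return result.get(); // node exists now, watch is set
      }
      // Real code would back off / sleep between attempts here.
    }
    throw new IllegalStateException("node never appeared");
  }

  public static void main(String[] args) {
    int[] calls = {0};
    // Simulate a znode that only exists from the third check onward.
    String node = retryUntilPresent(
        () -> ++calls[0] >= 3
            ? Optional.of("/hbase/meta-region-server")
            : Optional.<String>empty(),
        10);
    System.out.println(node + " after " + calls[0] + " attempts");
  }
}
```

The key property is that the "node missing" case is surfaced as a failure the retry loop can see, rather than being swallowed as an empty success, which is exactly the race the issue describes.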