[jira] [Commented] (HBASE-28588) Remove deprecated methods in WAL
[ https://issues.apache.org/jira/browse/HBASE-28588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850499#comment-17850499 ]

Hudson commented on HBASE-28588:
--------------------------------

Results for branch master [build #1083 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/]: (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Remove deprecated methods in WAL
>
> Key: HBASE-28588
> URL: https://issues.apache.org/jira/browse/HBASE-28588
> Project: HBase
> Issue Type: Sub-task
> Components: wal
> Reporter: Duo Zhang
> Assignee: Liangjun He
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.0.0-beta-2

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850500#comment-17850500 ]

Hudson commented on HBASE-28616:
--------------------------------

Results for branch master [build #1083 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/]: (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1083/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Remove/Deprecated the rs.* related configuration in TableOutputFormat
>
> Key: HBASE-28616
> URL: https://issues.apache.org/jira/browse/HBASE-28616
> Project: HBase
> Issue Type: Task
> Components: mapreduce
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9
[jira] [Commented] (HBASE-28588) Remove deprecated methods in WAL
[ https://issues.apache.org/jira/browse/HBASE-28588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850496#comment-17850496 ]

Hudson commented on HBASE-28588:
--------------------------------

Results for branch branch-3 [build #215 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/]: (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Remove deprecated methods in WAL
>
> Key: HBASE-28588
> URL: https://issues.apache.org/jira/browse/HBASE-28588
> Project: HBase
> Issue Type: Sub-task
> Components: wal
> Reporter: Duo Zhang
> Assignee: Liangjun He
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.0.0-beta-2
[jira] [Commented] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850497#comment-17850497 ]

Hudson commented on HBASE-28616:
--------------------------------

Results for branch branch-3 [build #215 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/]: (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/215/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Remove/Deprecated the rs.* related configuration in TableOutputFormat
>
> Key: HBASE-28616
> URL: https://issues.apache.org/jira/browse/HBASE-28616
> Project: HBase
> Issue Type: Task
> Components: mapreduce
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9
[jira] [Commented] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850485#comment-17850485 ]

Hudson commented on HBASE-28616:
--------------------------------

Results for branch branch-2.5 [build #536 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/]: (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Remove/Deprecated the rs.* related configuration in TableOutputFormat
>
> Key: HBASE-28616
> URL: https://issues.apache.org/jira/browse/HBASE-28616
> Project: HBase
> Issue Type: Task
> Components: mapreduce
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9
[jira] [Commented] (HBASE-28613) Use streaming when marshalling protobuf REST output
[ https://issues.apache.org/jira/browse/HBASE-28613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850484#comment-17850484 ]

Hudson commented on HBASE-28613:
--------------------------------

Results for branch branch-2.5 [build #536 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/]: (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/536/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Use streaming when marshalling protobuf REST output
>
> Key: HBASE-28613
> URL: https://issues.apache.org/jira/browse/HBASE-28613
> Project: HBase
> Issue Type: Improvement
> Components: REST
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
> Labels: pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9
>
> We are currently marshalling protobuf into a byte array, and then send that to the client.
> This is both slow and memory intensive.
> I see a ~25% reduction in the REST server CPU usage for my benchmark with this patch.
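The buffered-versus-streamed tradeoff described in HBASE-28613 can be sketched with plain java.io. MarshalDemo and its methods are hypothetical illustrations, not the actual REST server code from the patch; with protobuf the streaming call would be the message's real writeTo(OutputStream) method instead of building toByteArray() first.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class MarshalDemo {
    // Buffered: the whole payload is materialized in memory before any byte
    // reaches the client, so peak memory grows with the response size.
    public static byte[] marshalToArray(List<String> cells) {
        StringBuilder sb = new StringBuilder();
        for (String c : cells) {
            sb.append(c).append('\n');
        }
        return sb.toString().getBytes(StandardCharsets.UTF_8);
    }

    // Streamed: each piece is written straight to the (servlet) output
    // stream; only one cell's worth of bytes is ever held at a time.
    public static void marshalToStream(List<String> cells, OutputStream out) {
        try {
            for (String c : cells) {
                out.write(c.getBytes(StandardCharsets.UTF_8));
                out.write('\n');
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Both paths produce identical bytes; the streamed one simply avoids the intermediate array and the extra copy, which is where the reported CPU and memory savings would come from.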
[jira] [Resolved] (HBASE-28582) ModifyTableProcedure should not reset TRSP on region node when closing unused region replicas
[ https://issues.apache.org/jira/browse/HBASE-28582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang resolved HBASE-28582.
-------------------------------
Fix Version/s: 2.7.0
               3.0.0-beta-2
               2.6.1
               2.5.9
Hadoop Flags: Reviewed
Resolution: Fixed

Pushed to branch-2.5+. Thanks [~vjasani] for reviewing!

> ModifyTableProcedure should not reset TRSP on region node when closing unused region replicas
>
> Key: HBASE-28582
> URL: https://issues.apache.org/jira/browse/HBASE-28582
> Project: HBase
> Issue Type: Bug
> Components: proc-v2
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Critical
> Labels: pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9
>
> Found this when digging into HBASE-28522.
> First, this is not safe, as MTP is not like DTP, where we hold the exclusive lock all the time.
> Second, even if we hold the exclusive lock all the time, as shown in HBASE-28522, we may still hang there forever because SCP will not interrupt the TRSP.
[jira] [Updated] (HBASE-28626) MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel
[ https://issues.apache.org/jira/browse/HBASE-28626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth updated HBASE-28626:
--------------------------------
Status: Patch Available (was: Open)

> MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel
>
> Key: HBASE-28626
> URL: https://issues.apache.org/jira/browse/HBASE-28626
> Project: HBase
> Issue Type: Bug
> Components: REST
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
> Labels: pull-request-available
>
> org.apache.hadoop.hbase.filter.MultiRowRangeFilter.BasicRowRange has several getters that have no corresponding setters.
> Jackson serializes the pseudo-getters' values, but when it tries to deserialize, there are no corresponding setters and it errors out.
> {noformat}
> com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "ascendingOrder" (class org.apache.hadoop.hbase.filter.MultiRowRangeFilter$RowRange), not marked as ignorable (4 known properties: "startRow", "startRowInclusive", "stopRow", "stopRowInclusive"])
> at [Source: (String)"{"type":"FilterList","op":"MUST_PASS_ALL","comparator":null,"value":null,"filters":[{"type":"MultiRowRangeFilter","op":null,"comparator":null,"value":null,"filters":null,"limit":null,"offset":null,"family":null,"qualifier":null,"ifMissing":null,"latestVersion":null,"minColumn":null,"minColumnInclusive":null,"maxColumn":null,"maxColumnInclusive":null,"dropDependentColumn":null,"chance":null,"prefixes":null,"ranges":[{"startRow":"MQ==","startRowInclusive":true,"stopRow":"MQ==","stopRowInclusive":t"[truncated 553 chars]; line: 1, column: 526] (through reference chain: org.apache.hadoop.hbase.rest.model.ScannerModel$FilterModel["filters"]->java.util.ArrayList[0]->org.apache.hadoop.hbase.rest.model.ScannerModel$FilterModel["ranges"]->java.util.ArrayList[0]->org.apache.hadoop.hbase.filter.MultiRowRangeFilter$RowRange["ascendingOrder"])
> at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61)
> at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:1127)
> at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:2036)
> at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1700)
> at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1678)
> at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:320)
> at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
> at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
> at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
> at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
> at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138)
> at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314)
> at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
> at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
> at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
> at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
> at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138)
> at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:3
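The root cause described above, readable properties with no matching setter, can be reproduced with the JDK's own bean introspector. RangeLike below is a hypothetical stand-in for MultiRowRangeFilter.BasicRowRange, not HBase code; the usual Jackson-side remedies are annotating the target class with @JsonIgnoreProperties(ignoreUnknown = true) or disabling DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES on the mapper.

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class GetterOnlyDemo {
    // Hypothetical stand-in for BasicRowRange: one real read/write property
    // plus a derived getter with no matching setter.
    public static class RangeLike {
        private byte[] startRow;
        public byte[] getStartRow() { return startRow; }
        public void setStartRow(byte[] startRow) { this.startRow = startRow; }
        // Derived value: serializers happily emit it, but round-tripping
        // fails because there is nothing to call on the way back in.
        public boolean isAscendingOrder() { return true; }
    }

    // Names of bean properties that are readable but not writable -- the
    // ones a naive serialize-then-deserialize cycle will trip over.
    public static List<String> getterOnlyProperties(Class<?> beanClass) {
        List<String> names = new ArrayList<>();
        try {
            BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (pd.getReadMethod() != null && pd.getWriteMethod() == null) {
                    names.add(pd.getName());
                }
            }
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
        return names;
    }
}
```

Running the check on RangeLike flags exactly the derived property, mirroring how "ascendingOrder" shows up in the exception above.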
[jira] [Updated] (HBASE-28626) MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel
[ https://issues.apache.org/jira/browse/HBASE-28626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HBASE-28626:
-----------------------------------
Labels: pull-request-available (was: )

> MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel
>
> Key: HBASE-28626
> URL: https://issues.apache.org/jira/browse/HBASE-28626
> Project: HBase
> Issue Type: Bug
> Components: REST
> Reporter: Istvan Toth
> Assignee: Istvan Toth
> Priority: Major
> Labels: pull-request-available
[jira] [Updated] (HBASE-28627) REST ScannerModel doesn't support includeStartRow/includeStopRow
[ https://issues.apache.org/jira/browse/HBASE-28627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth updated HBASE-28627:
--------------------------------
Description:
includeStartRow/includeStopRow should be transparently supported.
The current behaviour is limited and confusing, as the user would rightly expect this to work via the REST interface.
The only problem is that adding them may break backwards compatibility.
Need to test if the XML unmarshaller can handle nonexistent fields.

was:
tincludeStartRow/includeStopRow should be transparently supported.
The current behaviour is limited and confiusing.
The only problem is that adding them may break backwards compatibility.
Need to test if the XML unmarshaller can handle nonexistent fields.

> REST ScannerModel doesn't support includeStartRow/includeStopRow
>
> Key: HBASE-28627
> URL: https://issues.apache.org/jira/browse/HBASE-28627
> Project: HBase
> Issue Type: Bug
> Components: REST
> Reporter: Istvan Toth
> Priority: Major
>
> includeStartRow/includeStopRow should be transparently supported.
> The current behaviour is limited and confusing, as the user would rightly expect this to work via the REST interface.
> The only problem is that adding them may break backwards compatibility.
> Need to test if the XML unmarshaller can handle nonexistent fields.
[jira] [Updated] (HBASE-28627) REST ScannerModel doesn't support includeStartRow/includeStopRow
[ https://issues.apache.org/jira/browse/HBASE-28627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth updated HBASE-28627:
--------------------------------
Description:
tincludeStartRow/includeStopRow should be transparently supported.
The current behaviour is limited and confiusing.
The only problem is that adding them may break backwards compatibility.
Need to test if the XML unmarshaller can handle nonexistent fields.

> REST ScannerModel doesn't support includeStartRow/includeStopRow
>
> Key: HBASE-28627
> URL: https://issues.apache.org/jira/browse/HBASE-28627
> Project: HBase
> Issue Type: Bug
> Components: REST
> Reporter: Istvan Toth
> Priority: Major
>
> tincludeStartRow/includeStopRow should be transparently supported.
> The current behaviour is limited and confiusing.
> The only problem is that adding them may break backwards compatibility.
> Need to test if the XML unmarshaller can handle nonexistent fields.
[jira] [Updated] (HBASE-28627) REST ScannerModel doesn't support includeStartRow/includeStopRow
[ https://issues.apache.org/jira/browse/HBASE-28627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth updated HBASE-28627:
--------------------------------
Environment: (was: includeStartRow/includeStopRow should be transparently supported. The current behaviour is limited and confiusing. The only problem is that adding them may break backwards compatibility. Need to test if the XML unmarshaller can handle nonexistent fields.)

> REST ScannerModel doesn't support includeStartRow/includeStopRow
>
> Key: HBASE-28627
> URL: https://issues.apache.org/jira/browse/HBASE-28627
> Project: HBase
> Issue Type: Bug
> Components: REST
> Reporter: Istvan Toth
> Priority: Major
[jira] [Created] (HBASE-28627) REST ScannerModel doesn't support includeStartRow/includeStopRow
Istvan Toth created HBASE-28627:
-----------------------------------

Summary: REST ScannerModel doesn't support includeStartRow/includeStopRow
Key: HBASE-28627
URL: https://issues.apache.org/jira/browse/HBASE-28627
Project: HBase
Issue Type: Bug
Components: REST
Environment: includeStartRow/includeStopRow should be transparently supported. The current behaviour is limited and confiusing. The only problem is that adding them may break backwards compatibility. Need to test if the XML unmarshaller can handle nonexistent fields.
Reporter: Istvan Toth
[jira] [Resolved] (HBASE-28623) Scan with MultiRowRangeFilter very slow
[ https://issues.apache.org/jira/browse/HBASE-28623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth resolved HBASE-28623.
---------------------------------
Resolution: Won't Fix

> Scan with MultiRowRangeFilter very slow
>
> Key: HBASE-28623
> URL: https://issues.apache.org/jira/browse/HBASE-28623
> Project: HBase
> Issue Type: Bug
> Components: Client
> Affects Versions: 2.4.14
> Reporter: chaijunjie
> Priority: Major
>
> when scan a big table ({*}more than 500 regions{*}) with {*}MultiRowRangeFilter{*}, it is very slow...
> it seems to {*}scan all regions{*}...
> for example:
> we scan 3 ranges..
> startRow: 097_28220_ stopRow: 097_28220_~
> startRow: 098_28221_ stopRow: 098_28221_~
> startRow: 099_28222_ stopRow: 099_28222_~
> and enable TRACE log in hbase client
> we find there are too many scans
> {code:java}
> 1713987938886.93886cc52eea6200518feb7ebce7e1a4.', STARTKEY => '', ENDKEY => '000_2147757104_4641'}
> 行 139: 1716188377677.a2e0d724dd73196d81ecbfb58c77b611.', STARTKEY => '000_2147757104_4641', ENDKEY => '000_21
> 行 162: 1716188377677.b377942c957c300286afcb763f0dd338.', STARTKEY => '000_2148042968_3081', ENDKEY => '000_21
> 行 185: 1714319482833.4e5bfdfb6f2bcf381681726429bf2adb.', STARTKEY => '000_2148518165_26648', ENDKEY => '000_3
> 行 197: 1715031138715.36bac123de7eec3c4c08a775d592f387.', STARTKEY => '000_389786_4001', ENDKEY => '000_434112
> 行 211: 1715031138715.2dc9f1a78f532454ce8381ff9738e93e.', STARTKEY => '000_434112_88683', ENDKEY => '000~'}
> 行 225: 1713890960521.94e341a71b5b3e98569809d7a0f4354e.', STARTKEY => '000~', ENDKEY => '001_2147735632_4395'}
> 行 250: 1716239834572.3061c9f457b91ed40c938d801f8cac5f.', STARTKEY => '001_2147735632_4395', ENDKEY => '001_21
> 行 264: 1716239834572.e56a4d6aae43b5d42561e4ee6f0e3132.', STARTKEY => '001_2148043057_5975', ENDKEY => '001_23
> 行 278: 1714252181329.5de683912a8120bae9f37833fb286a30.', STARTKEY => '001_238065_2439', ENDKEY => '001_400433
> 行 292: 1714858026179.941a4921968267374876b52fdb33a1d7.', STARTKEY => '001_400433_45599', ENDKEY => '001_43429
> 行 306: 1714858026179.16e7de83bd7944e9d23b3568b14eaf9c.', STARTKEY => '001_434296_34588', ENDKEY => '001~'}
> 行 331: 1714082282269.6853c99dc6d17b2340e04307e5492d58.', STARTKEY => '001~', ENDKEY => '002_2147741550_785'}
> 行 345: 1714463331546.80f60ef11f1d337bcc09d7f24d390b28.', STARTKEY => '002_2147741550_785', ENDKEY => '002_214
> 行 359: 1714463331546.9281d964d08863aab2745f8331c148ad.', STARTKEY => '002_2148386148_27094', ENDKEY => '002_4
> 行 373: 1714685085875.2affd725c347399ad8c77eabd0a5d4f2.', STARTKEY => '002_400185_74884', ENDKEY => '002_45861
> 行 387: 1714685085875.910cbc03d1d8571f1eda21e3441f9359.', STARTKEY => '002_458618_25467', ENDKEY => '002~'}
> 行 401: 1714065682984.2358541c9c8d3f2f8c4496a1fd350c6c.', STARTKEY => '002~', ENDKEY => '003_2147739809_4985'}
> 行 415: 1716251410111.c60662b46cabd2cd0638d39796f11827.', STARTKEY => '003_2147739809_4985', ENDKEY => '003_21
> 行 429: 1716251410111.016507ab001379f86acdf0c40a5b93be.', STARTKEY => '003_2148024128_3054', ENDKEY => '003_21
> 行 443: 1714348539371.e7a41938549f7384192edd059d7e4a3e.', STARTKEY => '003_2148386097_25973', ENDKEY => '003_3
> 行 457: 1714925889818.a6c3c09cddd2c3e359c0f1497a302d6d.', STARTKEY => '003_396959_86147', ENDKEY => '003_45861
> 行 471: 1714925889818.eb98caf696d333714fc917c95839ea8e.', STARTKEY => '003_458619_61964', ENDKEY => '003~'}
> 行 485: 1713919439849.22b315f87ea850b2f1b052ccacf40a5c.', STARTKEY => '003~', ENDKEY => '004_2147804164_6378'}
> 行 499: 1714553829364.ee60c3e63e43e18487afa3ebd9db7890.', STARTKEY => '004_2147804164_6378', ENDKEY => '004_21
> 行 516: 1714553829364.30e09f836793166fb64f1799b63c56fc.', STARTKEY => '004_2148363241_1674', ENDKEY => '004_40
> 行 530: 1714831210652.05d86d46eb1717408f7b6d189c711b6d.', STARTKEY => '004_400633_98138', ENDKEY => '004_45953
> 行 544: 1714831210652.7ebc65054e3819ff8f3848108f07a1da.', STARTKEY => '004_459534_8710', ENDKEY => '004~'}
> 行 558: 1714049632767.4eb7c320ce17d5e6c79d37ad1235cd56.', STARTKEY => '004~', ENDKEY => '005_2147868266_5368'}
> 行 572: 1714364810854.f65ec5a2f28317951dab5e241d2e100f.', STARTKEY => '005_2147868266_5368', ENDKEY => '005_21
> 行 586:
[jira] [Commented] (HBASE-28623) Scan with MultiRowRangeFilter very slow
[ https://issues.apache.org/jira/browse/HBASE-28623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850373#comment-17850373 ]

Istvan Toth commented on HBASE-28623:
-------------------------------------

Unfortunately, the HBase API does not make this possible. There is no way to influence which regions a Filter will be sent to from within the filter.

If the ranges are close, you can set the start and stop rows of the scan manually to the minimum start and maximum end rows of your ranges.

> Scan with MultiRowRangeFilter very slow
>
> Key: HBASE-28623
> URL: https://issues.apache.org/jira/browse/HBASE-28623
> Project: HBase
> Issue Type: Bug
> Components: Client
> Affects Versions: 2.4.14
> Reporter: chaijunjie
> Priority: Major
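The workaround suggested in the comment above, manually bounding the scan by the overall envelope of the ranges, can be sketched without HBase on the classpath. ScanEnvelope and Range are hypothetical helpers, not HBase classes; in a real client the computed pair would be passed to Scan.withStartRow(...) and Scan.withStopRow(...) alongside the MultiRowRangeFilter, so that only regions overlapping the envelope are contacted.

```java
import java.util.List;

public class ScanEnvelope {
    // A [start, stop) row-key pair; stand-in for MultiRowRangeFilter.RowRange.
    public static final class Range {
        public final byte[] start;
        public final byte[] stop;
        public Range(byte[] start, byte[] stop) {
            this.start = start;
            this.stop = stop;
        }
    }

    // Lexicographic comparison of row keys as unsigned bytes (HBase's order).
    static int compareRows(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }

    // Overall envelope of a non-empty list of ranges: the minimum start row
    // and the maximum stop row across all of them.
    public static Range envelope(List<Range> ranges) {
        byte[] min = ranges.get(0).start;
        byte[] max = ranges.get(0).stop;
        for (Range r : ranges) {
            if (compareRows(r.start, min) < 0) {
                min = r.start;
            }
            if (compareRows(r.stop, max) > 0) {
                max = r.stop;
            }
        }
        return new Range(min, max);
    }
}
```

For the three ranges in the report, the envelope would be [097_28220_, 099_28222_~), so a scan bounded this way touches only the regions inside that interval instead of the whole 500-region table.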
[jira] [Created] (HBASE-28626) MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel
Istvan Toth created HBASE-28626: --- Summary: MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel Key: HBASE-28626 URL: https://issues.apache.org/jira/browse/HBASE-28626 Project: HBase Issue Type: Bug Components: REST Reporter: Istvan Toth Assignee: Istvan Toth org.apache.hadoop.hbase.filter.MultiRowRangeFilter.BasicRowRange has several getters that have no corresponding setters. Jackson serializes the pseudo-getters' values, but when it tries to deserialize, there are no corresponding setters and it errors out. {noformat} com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "ascendingOrder" (class org.apache.hadoop.hbase.filter.MultiRowRangeFilter$RowRange), not marked as ignorable (4 known properties: "startRow", "startRowInclusive", "stopRow", "stopRowInclusive"]) at [Source: (String)"{"type":"FilterList","op":"MUST_PASS_ALL","comparator":null,"value":null,"filters":[{"type":"MultiRowRangeFilter","op":null,"comparator":null,"value":null,"filters":null,"limit":null,"offset":null,"family":null,"qualifier":null,"ifMissing":null,"latestVersion":null,"minColumn":null,"minColumnInclusive":null,"maxColumn":null,"maxColumnInclusive":null,"dropDependentColumn":null,"chance":null,"prefixes":null,"ranges":[{"startRow":"MQ==","startRowInclusive":true,"stopRow":"MQ==","stopRowInclusive":t"[truncated 553 chars]; line: 1, column: 526] (through reference chain: org.apache.hadoop.hbase.rest.model.ScannerModel$FilterModel["filters"]->java.util.ArrayList[0]->org.apache.hadoop.hbase.rest.model.ScannerModel$FilterModel["ranges"]->java.util.ArrayList[0]->org.apache.hadoop.hbase.filter.MultiRowRangeFilter$RowRange["ascendingOrder"]) at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61) at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:1127) at 
com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:2036) at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1700) at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1678) at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:320) at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28) at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138) at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314) at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28) at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138) at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314) at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177) at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:323) at 
com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674) at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3629) at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3597) at org.apache.hadoop.hbase.rest.model.ScannerModel.
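The getter/setter mismatch behind this error can be demonstrated without Jackson, using core bean introspection: any property with a getter but no setter is readable but not writable, which is exactly what trips the deserializer. A minimal sketch (the Range class and its property names are illustrative stand-ins, not the actual HBase classes):

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.Map;
import java.util.TreeMap;

// Reproduces the read-only-property problem behind the Jackson error.
public class BeanCheck {
    public static class Range {
        private byte[] startRow;
        public byte[] getStartRow() { return startRow; }
        public void setStartRow(byte[] b) { this.startRow = b; }
        // Pseudo-getter: serializers will happily emit it, but there is
        // no setter to accept the value back on deserialization.
        public boolean isAscendingOrder() { return true; }
    }

    // Maps each bean property name to whether it has a write method.
    public static Map<String, Boolean> writableProperties(Class<?> c)
            throws IntrospectionException {
        Map<String, Boolean> out = new TreeMap<>();
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(c, Object.class).getPropertyDescriptors()) {
            out.put(pd.getName(), pd.getWriteMethod() != null);
        }
        return out;
    }
}
```

Typical Jackson-side remedies for such read-only properties are marking the target type with @JsonIgnoreProperties(ignoreUnknown = true) or disabling FAIL_ON_UNKNOWN_PROPERTIES on the mapper.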
[jira] [Updated] (HBASE-28625) ExportSnapshot should verify checksums for the source file and the target file
[ https://issues.apache.org/jira/browse/HBASE-28625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28625: --- Labels: pull-request-available (was: ) > ExportSnapshot should verify checksums for the source file and the target file > -- > > Key: HBASE-28625 > URL: https://issues.apache.org/jira/browse/HBASE-28625 > Project: HBase > Issue Type: Improvement >Reporter: Liangjun He >Assignee: Liangjun He >Priority: Major > Labels: pull-request-available > > In our cluster, we encountered cases where the target hfile was corrupted > after executing ExportSnapshot. [HBASE-13588 > |https://issues.apache.org/jira/browse/HBASE-13588] can only checksum data > transferred, but cannot solve our problem. Therefore, we believe it is > necessary to verify checksums on the files exported by ExportSnapshot. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28616: -- Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to all active branches. Thanks [~apurtell] and [~PankajKumar] for reviewing! > Remove/Deprecated the rs.* related configuration in TableOutputFormat > - > > Key: HBASE-28616 > URL: https://issues.apache.org/jira/browse/HBASE-28616 > Project: HBase > Issue Type: Task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28616: -- Release Note: Mark these two fields in TableOutputFormat as deprecated as they do not take effect any more. REGION_SERVER_CLASS REGION_SERVER_IMPL Mark these two methods in TableMapReduceUtil as deprecated as the serverClass and serverImpl parameters do not take effect any more. void initTableReducerJob(String table, Class reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl) throws IOException void initTableReducerJob(String table, Class reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl, boolean addDependencyJars) throws IOException > Remove/Deprecated the rs.* related configuration in TableOutputFormat > - > > Key: HBASE-28616 > URL: https://issues.apache.org/jira/browse/HBASE-28616 > Project: HBase > Issue Type: Task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28616: -- Release Note: Mark these two fields in TableOutputFormat as deprecated as they do not take effect any more. REGION_SERVER_CLASS REGION_SERVER_IMPL Mark these two methods in TableMapReduceUtil as deprecated as the serverClass and serverImpl parameters do not take effect any more. void initTableReducerJob(String table, Class reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl) throws IOException void initTableReducerJob(String table, Class reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl, boolean addDependencyJars) throws IOException was: Mark these two fields in TableOutputFormat as deprecated as they do not take effect any more. REGION_SERVER_CLASS REGION_SERVER_IMPL Mark these two methods in TableMapReduceUtil as deprecated as the serverClass and serverImpl parameters do not take effect any more. void initTableReducerJob(String table, Class reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl) throws IOException void initTableReducerJob(String table, Class reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl, boolean addDependencyJars) throws IOException > Remove/Deprecated the rs.* related configuration in TableOutputFormat > - > > Key: HBASE-28616 > URL: https://issues.apache.org/jira/browse/HBASE-28616 > Project: HBase > Issue Type: Task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28613) Use streaming when marshalling protobuf REST output
[ https://issues.apache.org/jira/browse/HBASE-28613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850267#comment-17850267 ] Hudson commented on HBASE-28613: Results for branch branch-2 [build #1064 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1064/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1064/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1064/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1064/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1064/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Use streaming when marshalling protobuf REST output > --- > > Key: HBASE-28613 > URL: https://issues.apache.org/jira/browse/HBASE-28613 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > We are currently marshalling protobuf into a byte array, and then send that > to the client. > This is both slow and memory intensive. > I see a ~25% reduction in the REST server CPU usage for my benchmark with > this patch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
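The buffered-versus-streaming distinction the issue describes can be sketched in plain Java. Msg here is a hypothetical stand-in for a protobuf message; real generated messages expose an analogous writeTo(OutputStream) alongside toByteArray():

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the allocation the HBASE-28613 patch avoids.
public class StreamingDemo {
    public static class Msg {
        private final byte[] payload;
        public Msg(byte[] p) { payload = p.clone(); }
        // Streaming path: bytes go straight to the wire, no intermediate copy.
        public void writeTo(OutputStream out) throws IOException { out.write(payload); }
        // Buffered path: materializes the whole message in memory first.
        public byte[] toByteArray() { return payload.clone(); }
    }

    public static void sendBuffered(Msg m, OutputStream wire) throws IOException {
        byte[] copy = m.toByteArray(); // full-size heap copy before sending
        wire.write(copy);
    }

    public static void sendStreaming(Msg m, OutputStream wire) throws IOException {
        m.writeTo(wire); // identical bytes on the wire, one less buffer
    }
}
```

Both paths produce the same bytes; the streaming one just skips the intermediate array, which is where the reported CPU and memory savings come from.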
[jira] [Commented] (HBASE-28613) Use streaming when marshalling protobuf REST output
[ https://issues.apache.org/jira/browse/HBASE-28613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850245#comment-17850245 ] Hudson commented on HBASE-28613: Results for branch branch-2.6 [build #126 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/126/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/126/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/126/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/126/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/126/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Use streaming when marshalling protobuf REST output > --- > > Key: HBASE-28613 > URL: https://issues.apache.org/jira/browse/HBASE-28613 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > We are currently marshalling protobuf into a byte array, and then send that > to the client. > This is both slow and memory intensive. > I see a ~25% reduction in the REST server CPU usage for my benchmark with > this patch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28625) ExportSnapshot should verify checksums for the source file and the target file
[ https://issues.apache.org/jira/browse/HBASE-28625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liangjun He updated HBASE-28625: Description: In our cluster, we encountered cases where the target hfile was corrupted after executing ExportSnapshot. [HBASE-13588 |https://issues.apache.org/jira/browse/HBASE-13588] can only checksum data transferred, but cannot solve our problem. Therefore, we believe it is necessary to verify checksums on the files exported by ExportSnapshot. (was: In our cluster, we encountered cases where the target hfile was corrupted after executing ExportSnapshot. [HBASE-13588 |https://issues.apache.org/jira/browse/HBASE-13588] can only checksum data transferred, but cannot solve our problem. Therefore, we believe it is necessary to perform checksum verification on the files exported by ExportSnapshot.) > ExportSnapshot should verify checksums for the source file and the target file > -- > > Key: HBASE-28625 > URL: https://issues.apache.org/jira/browse/HBASE-28625 > Project: HBase > Issue Type: Improvement >Reporter: Liangjun He >Assignee: Liangjun He >Priority: Major > > In our cluster, we encountered cases where the target hfile was corrupted > after executing ExportSnapshot. [HBASE-13588 > |https://issues.apache.org/jira/browse/HBASE-13588] can only checksum data > transferred, but cannot solve our problem. Therefore, we believe it is > necessary to verify checksums on the files exported by ExportSnapshot. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28625) ExportSnapshot should verify checksums for the source file and the target file
[ https://issues.apache.org/jira/browse/HBASE-28625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liangjun He updated HBASE-28625: Description: In our cluster, we encountered cases where the target hfile was corrupted after executing ExportSnapshot. [HBASE-13588 |https://issues.apache.org/jira/browse/HBASE-13588] can only checksum data transferred, but cannot solve our problem. Therefore, we believe it is necessary to perform checksum verification on the files exported by ExportSnapshot. > ExportSnapshot should verify checksums for the source file and the target file > -- > > Key: HBASE-28625 > URL: https://issues.apache.org/jira/browse/HBASE-28625 > Project: HBase > Issue Type: Improvement >Reporter: Liangjun He >Assignee: Liangjun He >Priority: Major > > In our cluster, we encountered cases where the target hfile was corrupted > after executing ExportSnapshot. [HBASE-13588 > |https://issues.apache.org/jira/browse/HBASE-13588] can only checksum data > transferred, but cannot solve our problem. Therefore, we believe it is > necessary to perform checksum verification on the files exported by > ExportSnapshot. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28625) ExportSnapshot should verify checksums for the source file and the target file
Liangjun He created HBASE-28625: --- Summary: ExportSnapshot should verify checksums for the source file and the target file Key: HBASE-28625 URL: https://issues.apache.org/jira/browse/HBASE-28625 Project: HBase Issue Type: Improvement Reporter: Liangjun He Assignee: Liangjun He -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28613) Use streaming when marshalling protobuf REST output
[ https://issues.apache.org/jira/browse/HBASE-28613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850167#comment-17850167 ] Hudson commented on HBASE-28613: Results for branch branch-3 [build #214 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/214/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/214/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/214/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/214/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Use streaming when marshalling protobuf REST output > --- > > Key: HBASE-28613 > URL: https://issues.apache.org/jira/browse/HBASE-28613 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > We are currently marshalling protobuf into a byte array, and then send that > to the client. > This is both slow and memory intensive. > I see a ~25% reduction in the REST server CPU usage for my benchmark with > this patch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28613) Use streaming when marshalling protobuf REST output
[ https://issues.apache.org/jira/browse/HBASE-28613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850168#comment-17850168 ] Hudson commented on HBASE-28613: Results for branch master [build #1082 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1082/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1082/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1082/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1082/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Use streaming when marshalling protobuf REST output > --- > > Key: HBASE-28613 > URL: https://issues.apache.org/jira/browse/HBASE-28613 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > We are currently marshalling protobuf into a byte array, and then send that > to the client. > This is both slow and memory intensive. > I see a ~25% reduction in the REST server CPU usage for my benchmark with > this patch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28624) Docs around configuring backups can lead to unexpectedly disabling other features
Bryan Beaudreault created HBASE-28624: - Summary: Docs around configuring backups can lead to unexpectedly disabling other features Key: HBASE-28624 URL: https://issues.apache.org/jira/browse/HBASE-28624 Project: HBase Issue Type: Bug Reporter: Bryan Beaudreault In our documentation for enabling backups, we suggest that the user set the following:
{code:java}
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>org.apache.hadoop.hbase.backup.master.BackupLogCleaner,...</value>
</property>
<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <value>org.apache.hadoop.hbase.backup.BackupHFileCleaner,...</value>
</property>
{code}
A naive user will set these and not know what to do about the ",..." part. In doing so, they will unexpectedly disable all of the default cleaners we have. For example, here are the defaults:
{code:java}
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner</value>
</property>
<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner</value>
</property>
{code}
So this basically disables support for hbase.master.logcleaner.ttl and hbase.master.hfilecleaner.ttl. There exist methods BackupManager.decorateMasterConfiguration and BackupManager.decorateRegionServerConfiguration. They are currently javadoc'd as being for tests only, but I think we should call these in HMaster and HRegionServer. Then we need only require the user to set "hbase.backup.enable", which would greatly simplify our docs here. -- This message was sent by Atlassian Jira (v8.20.10#820010)
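Until such a change lands, a workaround consistent with the issue's point is to list the default cleaners explicitly and append the backup cleaners, rather than leaving the ",..." in place. A sketch for hbase-site.xml (the default plugin lists may differ between HBase versions, so verify them against your release):

```xml
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner,org.apache.hadoop.hbase.backup.master.BackupLogCleaner</value>
</property>
<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner,org.apache.hadoop.hbase.backup.BackupHFileCleaner</value>
</property>
```

This keeps the TTL-based cleaners active while adding the backup-aware ones.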
[jira] [Updated] (HBASE-28624) Docs around configuring backups can lead to unexpectedly disabling other features
[ https://issues.apache.org/jira/browse/HBASE-28624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Beaudreault updated HBASE-28624: -- Issue Type: Improvement (was: Bug) > Docs around configuring backups can lead to unexpectedly disabling other > features > - > > Key: HBASE-28624 > URL: https://issues.apache.org/jira/browse/HBASE-28624 > Project: HBase > Issue Type: Improvement >Reporter: Bryan Beaudreault >Priority: Major > > In our documentation for enabling backups, we suggest that the user set the > following: > {code:java} > > hbase.master.logcleaner.plugins > org.apache.hadoop.hbase.backup.master.BackupLogCleaner,... > > > hbase.master.hfilecleaner.plugins > org.apache.hadoop.hbase.backup.BackupHFileCleaner,... > {code} > A naive user will set these and not know what to do about the ",..." part. In > doing so, they will unexpectedly be disabling all of the default cleaners we > have. For example here are the defaults: > {code:java} > > hbase.master.logcleaner.plugins > > org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner > > > hbase.master.hfilecleaner.plugins > > org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner > {code} > So basically disabling support for hbase.master.logcleaner.ttl and > hbase.master.hfilecleaner.ttl. > There exists a method BackupManager.decorateMasterConfiguration and > BackupManager.decorateRegionServerConfiguration. They are currently javadoc'd > as being for tests only, but I think we should call these in HMaster and > HRegionServer. Then we can only require the user to set "hbase.backup.enable" > and very much simplify our docs here. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28624) Docs around configuring backups can lead to unexpectedly disabling other features
[ https://issues.apache.org/jira/browse/HBASE-28624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Beaudreault updated HBASE-28624: -- Component/s: backup > Docs around configuring backups can lead to unexpectedly disabling other > features > - > > Key: HBASE-28624 > URL: https://issues.apache.org/jira/browse/HBASE-28624 > Project: HBase > Issue Type: Improvement > Components: backuprestore >Reporter: Bryan Beaudreault >Priority: Major > > In our documentation for enabling backups, we suggest that the user set the > following: > {code:java} > > hbase.master.logcleaner.plugins > org.apache.hadoop.hbase.backup.master.BackupLogCleaner,... > > > hbase.master.hfilecleaner.plugins > org.apache.hadoop.hbase.backup.BackupHFileCleaner,... > {code} > A naive user will set these and not know what to do about the ",..." part. In > doing so, they will unexpectedly be disabling all of the default cleaners we > have. For example here are the defaults: > {code:java} > > hbase.master.logcleaner.plugins > > org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner > > > hbase.master.hfilecleaner.plugins > > org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner > {code} > So basically disabling support for hbase.master.logcleaner.ttl and > hbase.master.hfilecleaner.ttl. > There exists a method BackupManager.decorateMasterConfiguration and > BackupManager.decorateRegionServerConfiguration. They are currently javadoc'd > as being for tests only, but I think we should call these in HMaster and > HRegionServer. Then we can only require the user to set "hbase.backup.enable" > and very much simplify our docs here. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28588) Remove deprecated methods in WAL
[ https://issues.apache.org/jira/browse/HBASE-28588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28588. --- Fix Version/s: 3.0.0-beta-2 Hadoop Flags: Reviewed Resolution: Fixed Pushed to master and branch-3. Thanks [~heliangjun] for contributing! > Remove deprecated methods in WAL > > > Key: HBASE-28588 > URL: https://issues.apache.org/jira/browse/HBASE-28588 > Project: HBase > Issue Type: Sub-task > Components: wal >Reporter: Duo Zhang >Assignee: Liangjun He >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28623) Scan with MultiRowRangeFilter very slow
[ https://issues.apache.org/jira/browse/HBASE-28623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chaijunjie updated HBASE-28623: --- Description: When scanning a big table ({*}more than 500 regions{*}) with a {*}MultiRowRangeFilter{*}, it is very slow... it seems to {*}scan all regions{*}... For example, we scan 3 ranges:
startRow: 097_28220_ stopRow: 097_28220_~
startRow: 098_28221_ stopRow: 098_28221_~
startRow: 099_28222_ stopRow: 099_28222_~
After enabling the TRACE log in the hbase client, we find too many scans {code:java} 1713987938886.93886cc52eea6200518feb7ebce7e1a4.', STARTKEY => '', ENDKEY => '000_2147757104_4641'} 行 139: 1716188377677.a2e0d724dd73196d81ecbfb58c77b611.', STARTKEY => '000_2147757104_4641', ENDKEY => '000_21 行 162: 1716188377677.b377942c957c300286afcb763f0dd338.', STARTKEY => '000_2148042968_3081', ENDKEY => '000_21 行 185: 1714319482833.4e5bfdfb6f2bcf381681726429bf2adb.', STARTKEY => '000_2148518165_26648', ENDKEY => '000_3 行 197: 1715031138715.36bac123de7eec3c4c08a775d592f387.', STARTKEY => '000_389786_4001', ENDKEY => '000_434112 行 211: 1715031138715.2dc9f1a78f532454ce8381ff9738e93e.', STARTKEY => '000_434112_88683', ENDKEY => '000~'} 行 225: 1713890960521.94e341a71b5b3e98569809d7a0f4354e.', STARTKEY => '000~', ENDKEY => '001_2147735632_4395'} 行 250: 1716239834572.3061c9f457b91ed40c938d801f8cac5f.', STARTKEY => '001_2147735632_4395', ENDKEY => '001_21 行 264: 1716239834572.e56a4d6aae43b5d42561e4ee6f0e3132.', STARTKEY => '001_2148043057_5975', ENDKEY => '001_23 行 278: 1714252181329.5de683912a8120bae9f37833fb286a30.', STARTKEY => '001_238065_2439', ENDKEY => '001_400433 行 292: 1714858026179.941a4921968267374876b52fdb33a1d7.', STARTKEY => '001_400433_45599', ENDKEY => '001_43429 行 306: 1714858026179.16e7de83bd7944e9d23b3568b14eaf9c.', STARTKEY => '001_434296_34588', ENDKEY => '001~'} 行 331: 1714082282269.6853c99dc6d17b2340e04307e5492d58.', STARTKEY => '001~', ENDKEY => '002_2147741550_785'} 行 345: 
1714463331546.80f60ef11f1d337bcc09d7f24d390b28.', STARTKEY => '002_2147741550_785', ENDKEY => '002_214 行 359: 1714463331546.9281d964d08863aab2745f8331c148ad.', STARTKEY => '002_2148386148_27094', ENDKEY => '002_4 行 373: 1714685085875.2affd725c347399ad8c77eabd0a5d4f2.', STARTKEY => '002_400185_74884', ENDKEY => '002_45861 行 387: 1714685085875.910cbc03d1d8571f1eda21e3441f9359.', STARTKEY => '002_458618_25467', ENDKEY => '002~'} 行 401: 1714065682984.2358541c9c8d3f2f8c4496a1fd350c6c.', STARTKEY => '002~', ENDKEY => '003_2147739809_4985'} 行 415: 1716251410111.c60662b46cabd2cd0638d39796f11827.', STARTKEY => '003_2147739809_4985', ENDKEY => '003_21 行 429: 1716251410111.016507ab001379f86acdf0c40a5b93be.', STARTKEY => '003_2148024128_3054', ENDKEY => '003_21 行 443: 1714348539371.e7a41938549f7384192edd059d7e4a3e.', STARTKEY => '003_2148386097_25973', ENDKEY => '003_3 行 457: 1714925889818.a6c3c09cddd2c3e359c0f1497a302d6d.', STARTKEY => '003_396959_86147', ENDKEY => '003_45861 行 471: 1714925889818.eb98caf696d333714fc917c95839ea8e.', STARTKEY => '003_458619_61964', ENDKEY => '003~'} 行 485: 1713919439849.22b315f87ea850b2f1b052ccacf40a5c.', STARTKEY => '003~', ENDKEY => '004_2147804164_6378'} 行 499: 1714553829364.ee60c3e63e43e18487afa3ebd9db7890.', STARTKEY => '004_2147804164_6378', ENDKEY => '004_21 行 516: 1714553829364.30e09f836793166fb64f1799b63c56fc.', STARTKEY => '004_2148363241_1674', ENDKEY => '004_40 行 530: 1714831210652.05d86d46eb1717408f7b6d189c711b6d.', STARTKEY => '004_400633_98138', ENDKEY => '004_45953 行 544: 1714831210652.7ebc65054e3819ff8f3848108f07a1da.', STARTKEY => '004_459534_8710', ENDKEY => '004~'} 行 558: 1714049632767.4eb7c320ce17d5e6c79d37ad1235cd56.', STARTKEY => '004~', ENDKEY => '005_2147868266_5368'} 行 572: 1714364810854.f65ec5a2f28317951dab5e241d2e100f.', STARTKEY => '005_2147868266_5368', ENDKEY => '005_21 行 586: 1714364810854.f11c9bba0679a71b0b2893a44a8e188b.', STARTKEY => '005_2148453550_2383', ENDKEY => '005_40 行 600: 
1715226125040.c01b757b47a6b845d2db236b31077995.', STARTKEY => '005_400582_60878', ENDKEY => '005_45853 行 614: 1715226125040.9e721738a1a570bc5c0b6517af0e8dec.', STARTKEY => '005_458531_5320', ENDKEY => '005~'} 行 628: 1713940302372.b87acbb397753e23c469e70abe4fa9f9.', STARTKEY => '005~', ENDKEY => '006_2147745442_5063'} 行 642: 1714587432405.3f8e44ed581d262a1b259db2ff63318a.', STARTKEY => '006_2147745442_5063', ENDKEY => '006_21 行 656: 1714587432405.1f97f75c0fed08f834fb93b13e7aa811.', STARTKEY => '006_2148337090_6508', ENDKEY => '006_40 行 681: 1716332054802.c3b7bfa948f16dff94b3dedc1ec7f50d.', STARTKE
[jira] [Updated] (HBASE-28623) Scan with MultiRowRangeFilter very slow
[ https://issues.apache.org/jira/browse/HBASE-28623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chaijunjie updated HBASE-28623: --- Description: When scanning a big table ({*}more than 500 regions{*}) with a {*}MultiRowRangeFilter{*}, it is very slow... it seems to {*}scan all regions{*}... For example, we scan 3 ranges:
startRow: 097_28220_ stopRow: 097_28220_~
startRow: 098_28221_ stopRow: 098_28221_~
startRow: 099_28222_ stopRow: 099_28222_~
After enabling the TRACE log in the hbase client, we find there are too many scans {code:java} 1713987938886.93886cc52eea6200518feb7ebce7e1a4.', STARTKEY => '', ENDKEY => '000_2147757104_4641'} 行 139: 1716188377677.a2e0d724dd73196d81ecbfb58c77b611.', STARTKEY => '000_2147757104_4641', ENDKEY => '000_21 行 162: 1716188377677.b377942c957c300286afcb763f0dd338.', STARTKEY => '000_2148042968_3081', ENDKEY => '000_21 行 185: 1714319482833.4e5bfdfb6f2bcf381681726429bf2adb.', STARTKEY => '000_2148518165_26648', ENDKEY => '000_3 行 197: 1715031138715.36bac123de7eec3c4c08a775d592f387.', STARTKEY => '000_389786_4001', ENDKEY => '000_434112 行 211: 1715031138715.2dc9f1a78f532454ce8381ff9738e93e.', STARTKEY => '000_434112_88683', ENDKEY => '000~'} 行 225: 1713890960521.94e341a71b5b3e98569809d7a0f4354e.', STARTKEY => '000~', ENDKEY => '001_2147735632_4395'} 行 250: 1716239834572.3061c9f457b91ed40c938d801f8cac5f.', STARTKEY => '001_2147735632_4395', ENDKEY => '001_21 行 264: 1716239834572.e56a4d6aae43b5d42561e4ee6f0e3132.', STARTKEY => '001_2148043057_5975', ENDKEY => '001_23 行 278: 1714252181329.5de683912a8120bae9f37833fb286a30.', STARTKEY => '001_238065_2439', ENDKEY => '001_400433 行 292: 1714858026179.941a4921968267374876b52fdb33a1d7.', STARTKEY => '001_400433_45599', ENDKEY => '001_43429 行 306: 1714858026179.16e7de83bd7944e9d23b3568b14eaf9c.', STARTKEY => '001_434296_34588', ENDKEY => '001~'} 行 331: 1714082282269.6853c99dc6d17b2340e04307e5492d58.', STARTKEY => '001~', ENDKEY => '002_2147741550_785'} 行 345: 
1714463331546.80f60ef11f1d337bcc09d7f24d390b28.', STARTKEY => '002_2147741550_785', ENDKEY => '002_214 行 359: 1714463331546.9281d964d08863aab2745f8331c148ad.', STARTKEY => '002_2148386148_27094', ENDKEY => '002_4 行 373: 1714685085875.2affd725c347399ad8c77eabd0a5d4f2.', STARTKEY => '002_400185_74884', ENDKEY => '002_45861 行 387: 1714685085875.910cbc03d1d8571f1eda21e3441f9359.', STARTKEY => '002_458618_25467', ENDKEY => '002~'} 行 401: 1714065682984.2358541c9c8d3f2f8c4496a1fd350c6c.', STARTKEY => '002~', ENDKEY => '003_2147739809_4985'} 行 415: 1716251410111.c60662b46cabd2cd0638d39796f11827.', STARTKEY => '003_2147739809_4985', ENDKEY => '003_21 行 429: 1716251410111.016507ab001379f86acdf0c40a5b93be.', STARTKEY => '003_2148024128_3054', ENDKEY => '003_21 行 443: 1714348539371.e7a41938549f7384192edd059d7e4a3e.', STARTKEY => '003_2148386097_25973', ENDKEY => '003_3 行 457: 1714925889818.a6c3c09cddd2c3e359c0f1497a302d6d.', STARTKEY => '003_396959_86147', ENDKEY => '003_45861 行 471: 1714925889818.eb98caf696d333714fc917c95839ea8e.', STARTKEY => '003_458619_61964', ENDKEY => '003~'} 行 485: 1713919439849.22b315f87ea850b2f1b052ccacf40a5c.', STARTKEY => '003~', ENDKEY => '004_2147804164_6378'} 行 499: 1714553829364.ee60c3e63e43e18487afa3ebd9db7890.', STARTKEY => '004_2147804164_6378', ENDKEY => '004_21 行 516: 1714553829364.30e09f836793166fb64f1799b63c56fc.', STARTKEY => '004_2148363241_1674', ENDKEY => '004_40 行 530: 1714831210652.05d86d46eb1717408f7b6d189c711b6d.', STARTKEY => '004_400633_98138', ENDKEY => '004_45953 行 544: 1714831210652.7ebc65054e3819ff8f3848108f07a1da.', STARTKEY => '004_459534_8710', ENDKEY => '004~'} 行 558: 1714049632767.4eb7c320ce17d5e6c79d37ad1235cd56.', STARTKEY => '004~', ENDKEY => '005_2147868266_5368'} 行 572: 1714364810854.f65ec5a2f28317951dab5e241d2e100f.', STARTKEY => '005_2147868266_5368', ENDKEY => '005_21 行 586: 1714364810854.f11c9bba0679a71b0b2893a44a8e188b.', STARTKEY => '005_2148453550_2383', ENDKEY => '005_40 行 600: 
1715226125040.c01b757b47a6b845d2db236b31077995.', STARTKEY => '005_400582_60878', ENDKEY => '005_45853 行 614: 1715226125040.9e721738a1a570bc5c0b6517af0e8dec.', STARTKEY => '005_458531_5320', ENDKEY => '005~'} 行 628: 1713940302372.b87acbb397753e23c469e70abe4fa9f9.', STARTKEY => '005~', ENDKEY => '006_2147745442_5063'} 行 642: 1714587432405.3f8e44ed581d262a1b259db2ff63318a.', STARTKEY => '006_2147745442_5063', ENDKEY => '006_21 行 656: 1714587432405.1f97f75c0fed08f834fb93b13e7aa811.', STARTKEY => '006_2148337090_6508', ENDKEY => '006_40 行 681: 1716332054802.c3b7bfa948f16dff94b3dedc1ec7f50d.', STARTKE
[jira] [Created] (HBASE-28623) Scan with MultiRowRangeFilter very slow
chaijunjie created HBASE-28623: -- Summary: Scan with MultiRowRangeFilter very slow Key: HBASE-28623 URL: https://issues.apache.org/jira/browse/HBASE-28623 Project: HBase Issue Type: Bug Components: Client Affects Versions: 2.4.14 Reporter: chaijunjie when we *scan* a big table ({*}more than 500 regions{*}) with {*}MultiRowRangeFilter{*}, it is very slow... it seems to {*}scan all regions{*}. For example, we scan 3 ranges: startRow: 097_28220_ stopRow: 097_28220_~ startRow: 098_28221_ stopRow: 098_28221_~ startRow: 099_28222_ stopRow: 099_28222_~ After enabling TRACE logging in the HBase client, we find too many scans. -- This message was sent by Atlassian Jira (v8.20.10#820010)
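The usual client-side mitigation for this report can be sketched as follows. This is an illustrative, self-contained helper, not HBase code: given the requested row ranges (sorted, non-overlapping), derive the tightest overall start/stop rows and apply them to the scan (Scan.withStartRow/withStopRow in the real client API) so scanners are never opened on regions outside the requested key space. The class and method names here are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper, not part of HBase: given lexicographically sorted,
// non-overlapping {startRow, stopRow} pairs, derive the tightest overall
// bounds for the enclosing Scan, so the client skips regions entirely
// outside the requested ranges instead of visiting all of them.
public class ScanBounds {
  public static String overallStart(List<String[]> ranges) {
    return ranges.get(0)[0];                 // smallest startRow
  }

  public static String overallStop(List<String[]> ranges) {
    return ranges.get(ranges.size() - 1)[1]; // largest stopRow
  }

  public static void main(String[] args) {
    List<String[]> ranges = Arrays.asList(
        new String[] { "097_28220_", "097_28220_~" },
        new String[] { "098_28221_", "098_28221_~" },
        new String[] { "099_28222_", "099_28222_~" });
    // Only regions intersecting [097_28220_, 099_28222_~) need scanners.
    System.out.println(overallStart(ranges) + " .. " + overallStop(ranges));
  }
}
```

With the bounds set, only regions whose key range intersects the overall interval are contacted; MultiRowRangeFilter then prunes rows within that interval.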
[jira] [Assigned] (HBASE-28587) Remove deprecated methods in Cell
[ https://issues.apache.org/jira/browse/HBASE-28587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liangjun He reassigned HBASE-28587: --- Assignee: (was: Liangjun He) > Remove deprecated methods in Cell > - > > Key: HBASE-28587 > URL: https://issues.apache.org/jira/browse/HBASE-28587 > Project: HBase > Issue Type: Sub-task > Components: API, Client >Reporter: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28613) Use streaming when marshalling protobuf REST output
[ https://issues.apache.org/jira/browse/HBASE-28613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28613: Fix Version/s: 2.7.0 3.0.0-beta-2 2.6.1 2.5.9 Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to all active branches. > Use streaming when marshalling protobuf REST output > --- > > Key: HBASE-28613 > URL: https://issues.apache.org/jira/browse/HBASE-28613 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > We are currently marshalling protobuf into a byte array, and then sending that > to the client. > This is both slow and memory intensive. > I see a ~25% reduction in the REST server CPU usage for my benchmark with > this patch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
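The buffered-versus-streamed difference this change exploits can be sketched like this. The Marshallable interface and the toy payload below are stand-ins (assumptions), not the REST server code; real protobuf messages expose the same two shapes, toByteArray() and writeTo(OutputStream).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Sketch of buffered vs streamed marshalling; Marshallable is a stand-in
// for a protobuf Message, which offers the same two operations.
public class StreamingMarshal {
  interface Marshallable {
    byte[] toByteArray();
    void writeTo(OutputStream out) throws IOException;
  }

  // Before: the whole payload is materialized on the heap, then copied out.
  static void buffered(Marshallable m, OutputStream out) throws IOException {
    byte[] all = m.toByteArray(); // extra full-size copy of the response
    out.write(all);
  }

  // After: the message serializes itself straight into the response stream,
  // so no intermediate byte[] of the full payload is ever allocated.
  static void streaming(Marshallable m, OutputStream out) throws IOException {
    m.writeTo(out);
  }

  public static byte[] run(boolean stream) {
    Marshallable msg = new Marshallable() {
      final byte[] body = "cell data".getBytes(StandardCharsets.UTF_8);
      public byte[] toByteArray() { return body.clone(); }
      public void writeTo(OutputStream out) throws IOException { out.write(body); }
    };
    try {
      ByteArrayOutputStream sink = new ByteArrayOutputStream();
      if (stream) streaming(msg, sink); else buffered(msg, sink);
      return sink.toByteArray();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    // Both paths produce identical bytes; only the allocation profile differs.
    System.out.println(java.util.Arrays.equals(run(true), run(false)));
  }
}
```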
[jira] [Commented] (HBASE-28622) FilterListWithAND can swallow SEEK_NEXT_USING_HINT
[ https://issues.apache.org/jira/browse/HBASE-28622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849925#comment-17849925 ] Istvan Toth commented on HBASE-28622: - Opened a [DISCUSS] thread on the topic. > FilterListWithAND can swallow SEEK_NEXT_USING_HINT > -- > > Key: HBASE-28622 > URL: https://issues.apache.org/jira/browse/HBASE-28622 > Project: HBase > Issue Type: Bug > Components: Filters >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > > org.apache.hadoop.hbase.filter.FilterListWithAND.filterRowKey(Cell) will > return true if ANY of the filters returns true for Filter#filterRowKey(). > However, the SEEK_NEXT_USING_HINT mechanism relies on filterRowKey() > returning false, so that filterCell() can return SEEK_NEXT_USING_HINT. > If none of the filters match, but one of them returns true for > filterRowKey(), then the filter(s) that returned false (so that they could > return SEEK_NEXT_USING_HINT from filterCell()) never get a chance to do so, > and instead of seeking, FilterListWithAND will do a very slow full scan. -- This message was sent by Atlassian Jira (v8.20.10#820010)
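A toy model of the semantics described above. The RowFilter interface here is a stand-in, not org.apache.hadoop.hbase.filter.Filter; it only demonstrates how the AND list short-circuits on filterRowKey() before the hinting member can surface its seek key.

```java
// Toy model of FilterListWithAND.filterRowKey() semantics; everything here
// is a simplified stand-in for the real HBase filter classes.
public class AndListModel {
  interface RowFilter {
    boolean filterRowKey(String row); // true = exclude the whole row
    String seekHint();                // non-null = wants SEEK_NEXT_USING_HINT
  }

  // Mirrors the described behavior: the row is excluded if ANY member
  // excludes it, and in that case filterCell() -- where a seek hint would
  // be emitted -- is never consulted.
  static boolean listFilterRowKey(String row, RowFilter... members) {
    for (RowFilter f : members) {
      if (f.filterRowKey(row)) {
        return true;
      }
    }
    return false;
  }

  public static boolean demo() {
    RowFilter hinting = new RowFilter() { // wants to seek far ahead
      public boolean filterRowKey(String row) { return false; }
      public String seekHint() { return "row-9000"; }
    };
    RowFilter excluding = new RowFilter() { // merely drops this row
      public boolean filterRowKey(String row) { return true; }
      public String seekHint() { return null; }
    };
    // The row is dropped before the "row-9000" hint can be surfaced, so the
    // scan advances one row at a time instead of seeking.
    return listFilterRowKey("row-0001", hinting, excluding);
  }

  public static void main(String[] args) {
    System.out.println("row dropped, hint swallowed: " + demo());
  }
}
```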
[jira] [Updated] (HBASE-28621) PrefixFilter should use SEEK_NEXT_USING_HINT
[ https://issues.apache.org/jira/browse/HBASE-28621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28621: Description: Looking at PrefixFilter, I have noticed that it doesn't use the SEEK_NEXT_USING_HINT mechanism. AFAICT, we could safely set the prefix as a next row hint, which could be a huge performance win. Of course, ideally the user would set the scan startRow to the prefix, which avoids the problem, but the user may forget to do that, or may use the filter in a filterList that doesn't allow for setting the start/stop rows close to the prefix. was: Looking at PrefixFilter, I have noticed that it doesn't use the SEEK_NEXT_USING_HINT mechanism. AFAICT, we could safely set the prefix as a next row hint, which could be a huge performance win. Of course, ideally the user would set the scan startRow to the prefix, which avoids the problem; if the user doesn't, then we effectively do a full scan until the prefix is reached. > PrefixFilter should use SEEK_NEXT_USING_HINT > - > > Key: HBASE-28621 > URL: https://issues.apache.org/jira/browse/HBASE-28621 > Project: HBase > Issue Type: Improvement > Components: Filters >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: beginner, beginner-friendly > > Looking at PrefixFilter, I have noticed that it doesn't use the > SEEK_NEXT_USING_HINT mechanism. > AFAICT, we could safely set the prefix as a next row hint, which could be > a huge performance win. > Of course, ideally the user would set the scan startRow to the prefix, which > avoids the problem, but the user may forget to do that, or may use the filter > in a filterList that doesn't allow for setting the start/stop rows close to > the prefix. -- This message was sent by Atlassian Jira (v8.20.10#820010)
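A minimal sketch of the proposed hint (hypothetical helper, not the actual PrefixFilter implementation): while the current row sorts before the prefix, the prefix itself is a safe next-row hint, since it is the first row that can possibly match.

```java
// Hypothetical sketch of the hint PrefixFilter could return; not the real
// org.apache.hadoop.hbase.filter.PrefixFilter code.
public class PrefixHint {
  // Returns the row to seek to, or null when no seek is needed because the
  // scanner is already at or past the prefix.
  public static String nextRowHint(String currentRow, String prefix) {
    return currentRow.compareTo(prefix) < 0 ? prefix : null;
  }

  public static void main(String[] args) {
    // Far before the prefix: seek straight to it instead of crawling.
    System.out.println(nextRowHint("aaa", "row_0097"));
    // Already inside the prefix range: no hint needed.
    System.out.println(nextRowHint("row_0097_x", "row_0097"));
  }
}
```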
[jira] [Updated] (HBASE-28621) PrefixFilter should use SEEK_NEXT_USING_HINT
[ https://issues.apache.org/jira/browse/HBASE-28621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28621: Labels: beginner beginner-friendly (was: ) > PrefixFilter should use SEEK_NEXT_USING_HINT > - > > Key: HBASE-28621 > URL: https://issues.apache.org/jira/browse/HBASE-28621 > Project: HBase > Issue Type: Improvement > Components: Filters >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: beginner, beginner-friendly > > Looking at PrefixFilter, I have noticed that it doesn't use the > SEEK_NEXT_USING_HINT mechanism. > AFAICT, we could safely set the prefix as a next row hint, which could be > a huge performance win. > Of course, ideally the user would set the scan startRow to the prefix, which > avoids the problem; if the user doesn't, then we effectively do a full scan > until the prefix is reached. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-23578) [UI] Master UI shows long stack traces when table is broken
[ https://issues.apache.org/jira/browse/HBASE-23578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Lambertus updated HBASE-23578: Reporter: Shuhei Yamasaki (was: Shuhei Yamasaki) > [UI] Master UI shows long stack traces when table is broken > --- > > Key: HBASE-23578 > URL: https://issues.apache.org/jira/browse/HBASE-23578 > Project: HBase > Issue Type: Improvement > Components: master, UI >Reporter: Shuhei Yamasaki >Assignee: Shuhei Yamasaki >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.2 > > Attachments: stackCompact1_short.png, table_jsp.png > > > The table.jsp in Master UI shows long stack traces when a table is broken. > (shown as table_jsp.png) > These messages are hard to read, and the web page becomes very wide because stack traces > are displayed on a single line. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27256) region.jsp shows incorrect size for split reference hfiles
[ https://issues.apache.org/jira/browse/HBASE-27256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Kumar Maheshwari reassigned HBASE-27256: --- Assignee: Vineet Kumar Maheshwari > region.jsp shows incorrect size for split reference hfiles > -- > > Key: HBASE-27256 > URL: https://issues.apache.org/jira/browse/HBASE-27256 > Project: HBase > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Vineet Kumar Maheshwari >Priority: Major > > When a region is split, the resulting daughter regions each refer back to > original store files for the original region. When viewing the region.jsp > page in the RegionServer UI, these show a size of 0. This is because the > region.jsp is directly checking the FileSystem length for the given storefile > path, which isn't aware of references/links. We should be able to update > region.jsp to call a variant of HStore.getStorefilesSize, which handles this > complexity. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27367) Create admin/shell api for reloading just HMaster configs
[ https://issues.apache.org/jira/browse/HBASE-27367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Kumar Maheshwari reassigned HBASE-27367: --- Assignee: Vineet Kumar Maheshwari > Create admin/shell api for reloading just HMaster configs > - > > Key: HBASE-27367 > URL: https://issues.apache.org/jira/browse/HBASE-27367 > Project: HBase > Issue Type: New Feature >Reporter: Bryan Beaudreault >Assignee: Vineet Kumar Maheshwari >Priority: Major > > We have {{update_config}} and {{{}update_all_config{}}}. The former can do an > individual host (RS or HMaster), the latter does all hosts in the cluster. > If you just want to reload HMaster(s) you need to go into JMX metrics to find > the tag.ServerName for each HMaster, and then paste that into individual > update_config calls. > We could either add a new {{update_hmaster_config}} or add an argument to the > existing {{{}update_all_config 'hmaster'{}}}. Whatever way we go, we should > add a corresponding method in Admin as well. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27515) Deprecate getStats and setStatistics in Result
[ https://issues.apache.org/jira/browse/HBASE-27515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Kumar Maheshwari reassigned HBASE-27515: --- Assignee: Vineet Kumar Maheshwari > Deprecate getStats and setStatistics in Result > -- > > Key: HBASE-27515 > URL: https://issues.apache.org/jira/browse/HBASE-27515 > Project: HBase > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Vineet Kumar Maheshwari >Priority: Minor > > setStatistics is already IA.Private, but it has never been called. It > replaced a deprecated addResults method, which was removed in HBASE-14703. > The getter getStats() now always returns a null value, since nothing in the > codebase ever calls setStatistics. > We should deprecate these methods to keep our API clear and concise. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-27699) User metrics for filtered and read rows are too expensive
[ https://issues.apache.org/jira/browse/HBASE-27699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Kumar Maheshwari reassigned HBASE-27699: --- Assignee: Vineet Kumar Maheshwari > User metrics for filtered and read rows are too expensive > - > > Key: HBASE-27699 > URL: https://issues.apache.org/jira/browse/HBASE-27699 > Project: HBase > Issue Type: Improvement >Reporter: Bryan Beaudreault >Assignee: Vineet Kumar Maheshwari >Priority: Major > > MetricsUserAggregateImpl has a pattern like this: > {code:java} > String user = getActiveUser(); > if (user != null) { > MetricsUserSource userSource = getOrCreateMetricsUser(user); > incrementFilteredReadRequests(userSource); > } {code} > So every update involves a getOrCreate call, which does a ConcurrentHashMap > lookup. This overhead is not too bad for most requests, because it's just > executed once per request (i.e. updatePut gets called once at the end, though > for multi's it happens for every action). > For updateFilteredReadRequests and updateReadRequestCount, these are > currently called in RegionScannerImpl for every row scanned or filtered. > Doing the map lookup over and over adds up. Profiling the regionserver under > load, I see over 5% of the time spent updating these metrics. > We should try to collect these metrics maybe in the RpcCallContext, and then > translate into user metrics once at the end of the request. Or otherwise find > a way to minimize querying the ConcurrentHashMap multiple times in the > context of a request. Maybe we should actually stash the MetricsUserSource in > the RpcCallContext so that all user metrics only need to do the lookup once, > even for multi's. -- This message was sent by Atlassian Jira (v8.20.10#820010)
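The proposed direction can be sketched as follows. Names here are hypothetical stand-ins, not the actual MetricsUserAggregateImpl/RpcCallContext code: resolve the per-user source once per request (the analogue of stashing the MetricsUserSource in the RpcCallContext) and hand the hot per-row loop a direct reference.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the fix: one ConcurrentHashMap lookup per request
// instead of one lookup per row scanned or filtered.
public class PerRequestMetrics {
  private static final ConcurrentHashMap<String, LongAdder> USERS =
      new ConcurrentHashMap<>();

  // Done once at request start -- the analogue of getOrCreateMetricsUser().
  public static LongAdder resolveOnce(String user) {
    return USERS.computeIfAbsent(user, u -> new LongAdder());
  }

  public static long scanRequest(String user, int rows) {
    LongAdder filteredRows = resolveOnce(user); // single map lookup
    for (int i = 0; i < rows; i++) {
      filteredRows.increment(); // hot loop never touches the map
    }
    return filteredRows.sum();
  }

  public static void main(String[] args) {
    System.out.println(scanRequest("alice", 1000));
  }
}
```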
[jira] [Commented] (HBASE-28538) BackupHFileCleaner.loadHFileRefs is very expensive
[ https://issues.apache.org/jira/browse/HBASE-28538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849813#comment-17849813 ] Vineet Kumar Maheshwari commented on HBASE-28538: - [~bbeaudreault] Can you please attach the profile data for this issue? > BackupHFileCleaner.loadHFileRefs is very expensive > -- > > Key: HBASE-28538 > URL: https://issues.apache.org/jira/browse/HBASE-28538 > Project: HBase > Issue Type: Bug > Components: backuprestore >Reporter: Bryan Beaudreault >Priority: Major > > I noticed some odd CPU spikes on the hmasters of one of our clusters. Turns > out it had been getting lots of bulkloads (30k) and processing them was > expensive. The method scans hbase and then parses the paths. Surprisingly the > parsing is more expensive than reading hbase, with the vast majority of > time spent in org/apache/hadoop/fs/Path.. > We should see if this can be optimized. Attaching profile. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28622) FilterListWithAND can swallow SEEK_NEXT_USING_HINT
Istvan Toth created HBASE-28622: --- Summary: FilterListWithAND can swallow SEEK_NEXT_USING_HINT Key: HBASE-28622 URL: https://issues.apache.org/jira/browse/HBASE-28622 Project: HBase Issue Type: Bug Components: Filters Reporter: Istvan Toth Assignee: Istvan Toth org.apache.hadoop.hbase.filter.FilterListWithAND.filterRowKey(Cell) will return true if ANY of the filters returns true for Filter#filterRowKey(). However, the SEEK_NEXT_USING_HINT mechanism relies on filterRowKey() returning false, so that filterCell() can return SEEK_NEXT_USING_HINT. If none of the filters match, but one of them returns true for filterRowKey(), then the filter(s) that returned false (so that they could return SEEK_NEXT_USING_HINT from filterCell()) never get a chance to do so, and instead of seeking, FilterListWithAND will do a very slow full scan. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28621) PrefixFilter should use SEEK_NEXT_USING_HINT
Istvan Toth created HBASE-28621: --- Summary: PrefixFilter should use SEEK_NEXT_USING_HINT Key: HBASE-28621 URL: https://issues.apache.org/jira/browse/HBASE-28621 Project: HBase Issue Type: Improvement Components: Filters Reporter: Istvan Toth Assignee: Istvan Toth Looking at PrefixFilter, I have noticed that it doesn't use the SEEK_NEXT_USING_HINT mechanism. AFAICT, we could safely set the prefix as a next row hint, which could be a huge performance win. Of course, ideally the user would set the scan startRow to the prefix, which avoids the problem; if the user doesn't, then we effectively do a full scan until the prefix is reached. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-28522) UNASSIGN proc indefinitely stuck on dead rs
[ https://issues.apache.org/jira/browse/HBASE-28522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849757#comment-17849757 ] Prathyusha edited comment on HBASE-28522 at 5/27/24 3:25 PM: - [~zhangduo] Even if we introduce a procedure like CloseTableRegionsProcedure from HBASE-28582 here, even though we add logic to wait for the current RIT (instead of creating a new child UNASSIGN directly), every TRSP (which has just tried to start executing) will be blocked trying to get the shared lock on the Table (DTP holding the exclusive lock), so they won't finish, right? Or do you mean that if we go via this approach >If not, let's change to use the same solution in HBASE-28582, i.e, introduce a >special >CloseTableRegionsProcedure, to close all regions for a table. we have holdLock as false for the Table? An orthogonal thought - can we somehow add them (the current RIT TRSPs) also as child procs to this, so that they can get the shared lock on the table? Because CloseTableRegionsProcedure is anyway waiting on them to finish. Or, if not child procs, another field like dependent procedures, where those also have access to the shared locks of the resources it holds was (Author: prathyu6): [~zhangduo] Even if we introduce a procedure like CloseTableRegionsProcedure from HBASE-28582 here, even though we add logic to wait for the current RIT (instead of creating a new child UNASSIGN directly), every TRSP (which has just tried to start executing) will be blocked trying to get the shared lock on the Table (DTP holding the exclusive lock), so they won't finish, right? Or do you mean that if we go via this approach >If not, let's change to use the same solution in HBASE-28582, i.e, introduce a >special >CloseTableRegionsProcedure, to close all regions for a table. we have holdLock as false for the Table? An orthogonal thought - can we somehow add them (the current RIT TRSPs) also as child procs to this, so that they can get the shared lock on the table? Because CloseTableRegionsProcedure is anyway waiting on them to finish. 
> UNASSIGN proc indefinitely stuck on dead rs > --- > > Key: HBASE-28522 > URL: https://issues.apache.org/jira/browse/HBASE-28522 > Project: HBase > Issue Type: Improvement > Components: proc-v2, Region Assignment >Reporter: Prathyusha >Assignee: Prathyusha >Priority: Critical > Attachments: timeline.jpg > > > One scenario we noticed in production - > a DisableTableProc and an SCP were triggered at almost the same time > 2024-03-16 17:59:23,014 INFO [PEWorker-11] procedure.DisableTableProcedure - > Set to state=DISABLING > 2024-03-16 17:59:15,243 INFO [PEWorker-26] procedure.ServerCrashProcedure - > Start pid=21592440, state=RUNNABLE:SERVER_CRASH_START, locked=true; > ServerCrashProcedure > , splitWal=true, meta=false > DisableTableProc creates unassign procs, and at this point the ASSIGNs of the SCP are > not yet completed > {{2024-03-16 17:59:23,003 DEBUG [PEWorker-40] procedure2.ProcedureExecutor - > LOCK_EVENT_WAIT pid=21594220, ppid=21592440, > state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; > TransitRegionStateProcedure table=, region=, ASSIGN}} > The UNASSIGN created by DisableTableProc is stuck on the dead regionserver, and we > had to manually bypass the unassign of DisableTableProc and then do the ASSIGN. > If we can break the loop so the UNASSIGN procedure does not retry when there is an SCP > for that server, we would not need manual intervention; at the least, the > DisableTableProc could go to a rollback state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28522) UNASSIGN proc indefinitely stuck on dead rs
[ https://issues.apache.org/jira/browse/HBASE-28522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849757#comment-17849757 ] Prathyusha commented on HBASE-28522: [~zhangduo] Even if we introduce a procedure like CloseTableRegionsProcedure from HBASE-28582 here, and add logic to wait for the current RITs (instead of creating new child UNASSIGNs directly), every TRSP that has just started executing will still block trying to acquire the shared lock on the table (DTP holds the exclusive lock), so they won't finish, right? Or do you mean that if we go via this approach >If not, let's change to use the same solution in HBASE-28582, i.e, introduce a >special >CloseTableRegionsProcedure, to close all regions for a table. we set holdLock to false for the table? An orthogonal thought - can we somehow also add the current RIT TRSPs as child procs of this procedure, so that they can get the shared lock on the table? CloseTableRegionsProcedure is waiting on them to finish anyway. > UNASSIGN proc indefinitely stuck on dead rs > --- > > Key: HBASE-28522 > URL: https://issues.apache.org/jira/browse/HBASE-28522 > Project: HBase > Issue Type: Improvement > Components: proc-v2, Region Assignment >Reporter: Prathyusha >Assignee: Prathyusha >Priority: Critical > Attachments: timeline.jpg > > > One scenario we noticed in production - > a DisableTableProc and an SCP were triggered at almost the same time > 2024-03-16 17:59:23,014 INFO [PEWorker-11] procedure.DisableTableProcedure - > Set to state=DISABLING > 2024-03-16 17:59:15,243 INFO [PEWorker-26] procedure.ServerCrashProcedure - > Start pid=21592440, state=RUNNABLE:SERVER_CRASH_START, locked=true; > ServerCrashProcedure > , splitWal=true, meta=false > DisableTableProc creates unassign procs, and at this point the ASSIGNs of the SCP are > not yet completed > {{2024-03-16 17:59:23,003 DEBUG [PEWorker-40] procedure2.ProcedureExecutor - > LOCK_EVENT_WAIT pid=21594220, ppid=21592440, > 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; > TransitRegionStateProcedure table=, region=, ASSIGN}} > The UNASSIGN created by DisableTableProc is stuck on the dead regionserver, and we > had to manually bypass the unassign of DisableTableProc and then do the ASSIGN. > If we can break the loop so the UNASSIGN procedure does not retry when there is an SCP > for that server, we would not need manual intervention; at the least, the > DisableTableProc could go to a rollback state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
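To make the holdLock question in the comment above concrete, here is a tiny toy model in plain Java. This is not the HBase procedure framework; every class and method name below is invented for illustration. It shows the two behaviors being discussed: an exclusive holder (like DTP) that keeps the table lock while suspended blocks every shared requester (the in-flight TRSPs), whereas releasing the lock on suspension (holdLock=false) lets them acquire the shared lock and finish.

```java
/**
 * Toy model of the table lock discussion (illustrative only; not HBase code).
 * Exclusive holder = DisableTableProcedure; shared requesters = TRSPs.
 */
class TableLockModel {
    private boolean exclusiveHeld = false;
    private int sharedHolders = 0;

    boolean tryShared() {
        if (exclusiveHeld) {
            return false;            // blocked by the exclusive holder
        }
        sharedHolders++;
        return true;
    }

    boolean tryExclusive() {
        if (exclusiveHeld || sharedHolders > 0) {
            return false;
        }
        exclusiveHeld = true;
        return true;
    }

    void releaseExclusive() {
        exclusiveHeld = false;
    }

    /** Simulate suspending the exclusive procedure: with holdLock=false the
     *  lock is dropped, so other procedures can make progress meanwhile. */
    void suspendExclusive(boolean holdLock) {
        if (!holdLock) {
            releaseExclusive();
        }
    }
}

public class HoldLockDemo {
    public static void main(String[] args) {
        TableLockModel tableLock = new TableLockModel();
        tableLock.tryExclusive();              // DTP takes the exclusive lock

        tableLock.suspendExclusive(true);      // holdLock=true: lock kept
        System.out.println("TRSP can run: " + tableLock.tryShared());  // false

        tableLock.suspendExclusive(false);     // holdLock=false: lock dropped
        System.out.println("TRSP can run: " + tableLock.tryShared());  // true
    }
}
```

In this simplified model, holdLock=false on the exclusive procedure is exactly what unblocks the shared requesters it is waiting for, which is the deadlock-shaped dependency the comment points out.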
[jira] [Updated] (HBASE-28620) replication quota leak when peer changes
[ https://issues.apache.org/jira/browse/HBASE-28620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MisterWang updated HBASE-28620: --- Description: When the peer changes, replication closes the reader and shipper created earlier. However, after the specified timeout, the shipper still does not close automatically (it was interrupted, but it did not close properly). The existing code simply returns without releasing the quota, so the buffer usage is never cleaned up. In one production case at my company, the quota filled up because it was not released in time, so the WAL reader could not read new data and replication built up a backlog. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B was: When the peer changes, replication closes the reader and shipper created earlier. However, after the specified timeout, the shipper still does not close automatically. The existing code simply returns without releasing the quota, so the buffer usage is never cleaned up. In one production case at my company, the quota filled up because it was not released in time, so the WAL reader could not read new data and replication built up a backlog. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. 
Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B > replication quota leak when peer changes > > > Key: HBASE-28620 > URL: https://issues.apache.org/jira/browse/HBASE-28620 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: MisterWang >Priority: Critical > Labels: pull-request-available > > When the peer changes, replication closes the reader and shipper created > earlier. However, after the specified timeout, the shipper still does not > close automatically (it was interrupted, but it did not close properly). The > existing code simply returns without releasing the quota, so the buffer > usage is never cleaned up. > In one production case at my company, the quota filled up because it was not > released in time, so the WAL reader could not read new data and replication > built up a backlog. > > The log is as follows: > 2024-05-20 20:00:00,796 WARN > [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] > regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method > timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer > usage. Shipper alive: peer1; Reader alive: false > 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as > buffer usage 268435456B exceeds limit 268435456B -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28620) replication quota leak when peer changes
[ https://issues.apache.org/jira/browse/HBASE-28620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28620: --- Labels: pull-request-available (was: ) > replication quota leak when peer changes > > > Key: HBASE-28620 > URL: https://issues.apache.org/jira/browse/HBASE-28620 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: MisterWang >Priority: Critical > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28620) replication quota leak when peer changes
[ https://issues.apache.org/jira/browse/HBASE-28620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] MisterWang updated HBASE-28620: --- Description: When the peer changes, replication closes the reader and shipper created earlier. However, after the specified timeout, the shipper still does not close automatically. The existing code simply returns without releasing the quota, so the buffer usage is never cleaned up. In one production case at my company, the quota filled up because it was not released in time, so the WAL reader could not read new data and replication built up a backlog. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B was: The Shipper clearWALEntryBatch method timed out whilst waiting for the reader/shipper thread to stop when the peer changed, so buffer usage was not cleaned. With a large amount of data being written to the table in the peer, the quota was already full and never released, leaving the WAL reader unable to read new data. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. 
Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B > replication quota leak when peer changes > > > Key: HBASE-28620 > URL: https://issues.apache.org/jira/browse/HBASE-28620 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: MisterWang >Priority: Critical > > When the peer changes, replication closes the reader and shipper created > earlier. However, after the specified timeout, the shipper still does not > close automatically. The existing code simply returns without releasing the > quota, so the buffer usage is never cleaned up. > In one production case at my company, the quota filled up because it was not > released in time, so the WAL reader could not read new data and replication > built up a backlog. > > The log is as follows: > 2024-05-20 20:00:00,796 WARN > [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] > regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method > timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer > usage. Shipper alive: peer1; Reader alive: false > 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as > buffer usage 268435456B exceeds limit 268435456B -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28620) replication quota leak when peer changes
MisterWang created HBASE-28620: -- Summary: replication quota leak when peer changes Key: HBASE-28620 URL: https://issues.apache.org/jira/browse/HBASE-28620 Project: HBase Issue Type: Bug Components: Replication Reporter: MisterWang The Shipper clearWALEntryBatch method timed out whilst waiting for the reader/shipper thread to stop when the peer changed, so buffer usage was not cleaned. With a large amount of data being written to the table in the peer, the quota was already full and never released, leaving the WAL reader unable to read new data. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B -- This message was sent by Atlassian Jira (v8.20.10#820010)
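The leak pattern this report describes - returning from the shutdown path without releasing the quota charged to the in-flight batch - can be sketched with a toy model. This is illustrative Java only, not the actual ReplicationSourceShipper code; every name below is invented. The point is that the pending batch's bytes must be released in a finally block even when the shipper thread fails to stop within the timeout, otherwise the quota stays full forever and the WAL reader stalls exactly as in the quoted log.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Toy model of the replication buffer quota (illustrative; not HBase code).
 */
class BufferQuota {
    private final AtomicLong used = new AtomicLong();
    private final long limit;

    BufferQuota(long limit) {
        this.limit = limit;
    }

    /** Charge bytes against the quota; reject (and roll back) past the limit. */
    boolean tryAcquire(long bytes) {
        if (used.addAndGet(bytes) > limit) {
            used.addAndGet(-bytes);
            return false;
        }
        return true;
    }

    void release(long bytes) {
        used.addAndGet(-bytes);
    }

    long used() {
        return used.get();
    }
}

public class ShipperShutdownDemo {
    public static void main(String[] args) {
        BufferQuota quota = new BufferQuota(1024);
        long batchBytes = 800;
        quota.tryAcquire(batchBytes);        // a batch is in flight

        boolean stoppedInTime = false;       // simulate the join timeout
        try {
            if (!stoppedInTime) {
                System.out.println("shipper did not stop in time");
            }
            // Returning here WITHOUT the finally below is the leak: 800B
            // would stay charged forever and new WAL reads would be refused.
        } finally {
            quota.release(batchBytes);       // always give the bytes back
        }
        System.out.println("quota used after shutdown: " + quota.used());  // 0
    }
}
```

With the release moved into finally, the timeout path still logs the warning but no longer strands the quota, so the reader can keep consuming WAL edits.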
[jira] [Updated] (HBASE-28612) test
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28612: --- Labels: pull-request-available (was: ) > test > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug >Reporter: wangxin >Priority: Minor > Labels: pull-request-available > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28619) Fix the inaccurate message when snapshot doesn't exist
[ https://issues.apache.org/jira/browse/HBASE-28619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28619: --- Labels: pull-request-available (was: ) > Fix the inaccurate message when snapshot doesn't exist > -- > > Key: HBASE-28619 > URL: https://issues.apache.org/jira/browse/HBASE-28619 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 2.4.13 >Reporter: guluo >Priority: Minor > Labels: pull-request-available > > We get the following message when restoring a non-existent snapshot. > {code:java} > hbase:021:0> restore_snapshot 'non_existing_snap' > ERROR: Unable to find the table name for snapshot=non_existing_snap > For usage try 'help "restore_snapshot"' > Took 0.0170 seconds {code} > > ERROR: {color:#FF}Unable to find the table{color} name for > snapshot=non_existing_snap > This error message is inaccurate. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28619) Fix the inaccurate message when snapshot doesn't exist
[ https://issues.apache.org/jira/browse/HBASE-28619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] guluo updated HBASE-28619: -- Description: We get the following message when restoring a non-existent snapshot. {code:java} hbase:021:0> restore_snapshot 'non_existing_snap' ERROR: Unable to find the table name for snapshot=non_existing_snap For usage try 'help "restore_snapshot"' Took 0.0170 seconds {code} ERROR: {color:#FF}Unable to find the table{color} name for snapshot=non_existing_snap This error message is inaccurate. was: We get the following message when restoring a non-existent snapshot. {code:java} hbase:021:0> restore_snapshot 'non_existing_snap' ERROR: Unable to find the table name for snapshot=non_existing_snap For usage try 'help "restore_snapshot"' Took 0.0170 seconds {code} This error message is inaccurate. > Fix the inaccurate message when snapshot doesn't exist > -- > > Key: HBASE-28619 > URL: https://issues.apache.org/jira/browse/HBASE-28619 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 2.4.13 >Reporter: guluo >Priority: Minor > > We get the following message when restoring a non-existent snapshot. > {code:java} > hbase:021:0> restore_snapshot 'non_existing_snap' > ERROR: Unable to find the table name for snapshot=non_existing_snap > For usage try 'help "restore_snapshot"' > Took 0.0170 seconds {code} > > ERROR: {color:#FF}Unable to find the table{color} name for > snapshot=non_existing_snap > This error message is inaccurate. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28619) Fix the inaccurate message when snapshot doesn't exist
guluo created HBASE-28619: - Summary: Fix the inaccurate message when snapshot doesn't exist Key: HBASE-28619 URL: https://issues.apache.org/jira/browse/HBASE-28619 Project: HBase Issue Type: Bug Components: snapshots Affects Versions: 2.4.13 Reporter: guluo We get the following message when restoring a non-existent snapshot. {code:java} hbase:021:0> restore_snapshot 'non_existing_snap' ERROR: Unable to find the table name for snapshot=non_existing_snap For usage try 'help "restore_snapshot"' Took 0.0170 seconds {code} This error message is inaccurate. -- This message was sent by Atlassian Jira (v8.20.10#820010)
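The kind of fix this issue implies can be sketched in a few lines. This is hypothetical Java, not the actual HBase shell or admin code; the map and method names are invented for illustration. The idea: check that the snapshot exists before trying to resolve its table name, so the error names the missing snapshot rather than complaining about a table name.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of the message fix (hypothetical; not HBase code).
 */
public class SnapshotMessageDemo {
    // Stand-in for the snapshot manifests known to the cluster.
    private static final Map<String, String> SNAPSHOT_TO_TABLE = new HashMap<>();

    static String restoreSnapshot(String snapshotName) {
        if (!SNAPSHOT_TO_TABLE.containsKey(snapshotName)) {
            // Accurate message: the snapshot itself is missing.
            return "ERROR: Snapshot '" + snapshotName + "' does not exist";
        }
        return "Restoring table " + SNAPSHOT_TO_TABLE.get(snapshotName);
    }

    public static void main(String[] args) {
        SNAPSHOT_TO_TABLE.put("existing_snap", "t1");
        System.out.println(restoreSnapshot("non_existing_snap"));
        // → ERROR: Snapshot 'non_existing_snap' does not exist
        System.out.println(restoreSnapshot("existing_snap"));
        // → Restoring table t1
    }
}
```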
[jira] [Assigned] (HBASE-28174) DELETE endpoint in REST API does not support deleting binary row keys/columns
[ https://issues.apache.org/jira/browse/HBASE-28174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chaijunjie reassigned HBASE-28174: -- Assignee: James Udiljak > DELETE endpoint in REST API does not support deleting binary row keys/columns > - > > Key: HBASE-28174 > URL: https://issues.apache.org/jira/browse/HBASE-28174 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6, 4.0.0-alpha-1 >Reporter: James Udiljak >Assignee: James Udiljak >Priority: Blocker > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.9 > > Attachments: delete_base64_1.png > > > h2. Notes > This is the first time I have raised an issue in the ASF Jira. Please let me > know if there's anything I need to adjust on the issue to fit in with your > development flow. > I have marked the priority as "blocker" because this issue blocks me as a > user of the HBase REST API from deploying an effective solution for our > setup. Please feel free to change this if the Priority field has another > meaning to you. > I have also chosen 2.4.17 as the affected version because this is the version > I am running, however looking at the source code on GitHub in the default > branch, I think many other versions would be affected. > h2. Description of Issue > The DELETE operation in the [HBase REST > API|https://hbase.apache.org/1.2/apidocs/org/apache/hadoop/hbase/rest/package-summary.html#operation_delete] > requires specifying row keys and column families/offsets in the URI (i.e. as > UTF-8 text). This makes it impossible to specify a delete operation via the > REST API for a binary row key or column family/offset, as single bytes with a > decimal value greater than 127 are not valid in UTF-8. 
> Percent-encoding these "high" values does not work around the issue, as the > HBase REST API uses Java's {{URLDecoder.decode(percentEncodedString, > "UTF-8")}} method, which replaces any percent-encoded byte in the range > {{%80}} to {{%FF}} with the [replacement > character|https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character]. > Even if this were not the case, the row-key is ultimately [converted to a > byte > array|https://github.com/apache/hbase/blob/rel/2.4.17/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java#L60-L100] > using UTF-8 encoding, wherein code points >127 are encoded across multiple > bytes, corrupting the user-supplied row key. > h2. Proposed Solution > I do not believe it is possible to allow encoding of arbitrary bytes in the > URL for the DELETE endpoint without breaking compatibility for any users who > may have been unknowingly UTF-8 encoding their binary row keys. Even if it > were possible, the syntax would likely be terse. > Instead, I propose a new version of the DELETE endpoint that would accept row > keys and column families/offsets in the request _body_ (using Base64 encoding > for the JSON and XML formats, and bare binary for protobuf). This new > endpoint would follow the same conventions as the PUT operations, except that > cell values would not need to be specified (unless the user is performing a > check-and-delete operation). > As an additional benefit, using the request body could potentially allow for > deleting multiple rows in a single request, which would drastically improve > the efficiency of my use case. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-28174) DELETE endpoint in REST API does not support deleting binary row keys/columns
[ https://issues.apache.org/jira/browse/HBASE-28174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chaijunjie reassigned HBASE-28174: -- Assignee: (was: chaijunjie) > DELETE endpoint in REST API does not support deleting binary row keys/columns > - > > Key: HBASE-28174 > URL: https://issues.apache.org/jira/browse/HBASE-28174 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6, 4.0.0-alpha-1 >Reporter: James Udiljak >Priority: Blocker > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.9 > > Attachments: delete_base64_1.png > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-28174) DELETE endpoint in REST API does not support deleting binary row keys/columns
[ https://issues.apache.org/jira/browse/HBASE-28174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chaijunjie reassigned HBASE-28174: -- Assignee: chaijunjie (was: James Udiljak) > DELETE endpoint in REST API does not support deleting binary row keys/columns > - > > Key: HBASE-28174 > URL: https://issues.apache.org/jira/browse/HBASE-28174 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6, 4.0.0-alpha-1 >Reporter: James Udiljak >Assignee: chaijunjie >Priority: Blocker > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.9 > > Attachments: delete_base64_1.png > -- This message was sent by Atlassian Jira (v8.20.10#820010)
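The decoding failure described in HBASE-28174 is easy to reproduce with plain JDK calls; the snippet below is illustrative only and is not HBase code. It shows a percent-encoded byte above 0x7F collapsing to U+FFFD under `URLDecoder.decode(..., "UTF-8")`, and shows how Base64 in a request body (as the reporter proposes) would round-trip the raw row-key byte intact.

```java
import java.net.URLDecoder;
import java.util.Base64;

/**
 * Reproduces the URI-decoding problem with plain JDK calls (illustrative;
 * not HBase code). Byte 0xFF is invalid UTF-8, so URLDecoder substitutes
 * the U+FFFD replacement character and the row-key byte is lost; Base64
 * carries the byte through unchanged.
 */
public class BinaryRowKeyDemo {
    public static void main(String[] args) throws Exception {
        // The "%FF" a client might put in the DELETE URI for row-key byte 0xFF.
        String decoded = URLDecoder.decode("%FF", "UTF-8");
        System.out.println((int) decoded.charAt(0));   // 65533 = U+FFFD, not 0xFF

        // The proposed alternative: raw bytes as Base64 in the request body.
        byte[] rowKey = { (byte) 0xFF };
        String encoded = Base64.getEncoder().encodeToString(rowKey);
        byte[] roundTripped = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);                         // /w==
        System.out.println(roundTripped[0] == (byte) 0xFF);  // true
    }
}
```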
[jira] [Commented] (HBASE-28473) Add 2.4.18 to download page
[ https://issues.apache.org/jira/browse/HBASE-28473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849576#comment-17849576 ] Hudson commented on HBASE-28473: Results for branch master [build #1081 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1081/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1081/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1081/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1081/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add 2.4.18 to download page > --- > > Key: HBASE-28473 > URL: https://issues.apache.org/jira/browse/HBASE-28473 > Project: HBase > Issue Type: Sub-task > Components: website >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28612) test
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin updated HBASE-28612: Priority: Minor (was: Major) > test > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug >Reporter: wangxin >Priority: Minor > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28612) test
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin updated HBASE-28612: Component/s: (was: Replication) > test > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug >Reporter: wangxin >Priority: Major > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28612) test
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin resolved HBASE-28612. - Resolution: Abandoned > test > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: wangxin >Priority: Major > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28612) test
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin updated HBASE-28612: Labels: (was: pull-request-available) > test > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: wangxin >Priority: Major > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28612) test
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin updated HBASE-28612: Summary: test (was: replication quota leak when peer changes) > test > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: wangxin >Priority: Major > Labels: pull-request-available > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28612) replication quota leak when peer changes
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin updated HBASE-28612: Description: 111 (was: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop when peer changes. Not cleaning buffer usage. When the amount of data written to the table in the peer is relatively large, the quota is already full and has not been released, resulting in the WAL reader being unable to read new data. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B ) > replication quota leak when peer changes > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: wangxin >Priority: Major > Labels: pull-request-available > > 111 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28401) Introduce a close method for memstore for release active segment
[ https://issues.apache.org/jira/browse/HBASE-28401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849511#comment-17849511 ] Duo Zhang commented on HBASE-28401: --- [~vjasani] [~bbeaudreault] Any updates here? Do we still see the memory leak in the logs after this patch? Thanks. > Introduce a close method for memstore for release active segment > > > Key: HBASE-28401 > URL: https://issues.apache.org/jira/browse/HBASE-28401 > Project: HBase > Issue Type: Sub-task > Components: in-memory-compaction, regionserver >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9 > > > Per the analysis in the parent issue, we will always have an active segment in > the memstore even if it is empty, so if we do not call close on it, it will lead > to a netty leak warning message. > Although there is no real memory leak in this case, we'd better still fix it > as it may hide other memory leak problems. -- This message was sent by Atlassian Jira (v8.20.10#820010)
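A toy model of why the issue above needs an explicit close(): even an empty active segment holds a ref-counted (e.g. netty) buffer, and skipping its release trips the leak detector. All class and method names here are illustrative sketches, not the actual HBase AbstractMemStore API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicBoolean;

public final class MemstoreCloseSketch {

    /** Stand-in for a memstore segment backed by ref-counted buffers. */
    static final class Segment {
        private final AtomicBoolean released = new AtomicBoolean(false);

        /** Releasing twice or never are both bugs; leak detectors flag the latter. */
        void close() {
            if (!released.compareAndSet(false, true)) {
                throw new IllegalStateException("segment already released");
            }
        }

        boolean isReleased() {
            return released.get();
        }
    }

    /**
     * A memstore always keeps one (possibly empty) active segment, so
     * close() must release it in addition to any snapshot segments.
     */
    static final class Memstore {
        private Segment active = new Segment();
        private final Deque<Segment> snapshots = new ArrayDeque<>();

        void snapshot() {
            snapshots.add(active);
            active = new Segment(); // an empty active segment always exists
        }

        void close() {
            snapshots.forEach(Segment::close);
            snapshots.clear();
            active.close(); // without this line the last segment leaks
        }

        Segment activeSegment() {
            return active;
        }
    }

    public static void main(String[] args) {
        Memstore m = new Memstore();
        m.snapshot();
        Segment last = m.activeSegment();
        m.close();
        System.out.println(last.isReleased()); // true: no leak warning
    }
}
```

The point of the patch, as described, is only the last release in close(): there is no real data leak, but an unreleased empty segment is indistinguishable from a genuine leak in the warnings.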
[jira] [Commented] (HBASE-28618) The hadolint check in nightly build is broken
[ https://issues.apache.org/jira/browse/HBASE-28618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849510#comment-17849510 ] Duo Zhang commented on HBASE-28618: --- [~wchevreuil] [~taklwu] I'm not very familiar with the Dockerfile rules but at least we should disable the hadolint check for this line? Thanks. > The hadolint check in nightly build is broken > - > > Key: HBASE-28618 > URL: https://issues.apache.org/jira/browse/HBASE-28618 > Project: HBase > Issue Type: Bug > Components: scripts >Reporter: Duo Zhang >Priority: Major > > https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/General_20Nightly_20Build_20Report/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27915) Update hbase_docker with an extra Dockerfile compatible with mac m1 platform
[ https://issues.apache.org/jira/browse/HBASE-27915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-27915: -- Fix Version/s: 2.7.0 3.0.0-beta-2 2.6.1 2.5.9 > Update hbase_docker with an extra Dockerfile compatible with mac m1 platform > > > Key: HBASE-27915 > URL: https://issues.apache.org/jira/browse/HBASE-27915 > Project: HBase > Issue Type: Bug >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > > When trying to use the current Dockerfile under "./dev-support/hbase_docker" > on m1 macs, the docker build fails at the git clone & mvn build stage with > the below error: > {noformat} > #0 8.214 qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such > file or directory > {noformat} > It turns out that for mac m1, we have to explicitly define the platform flag for > the ubuntu image. I thought we could add a note in this readme, together with > an "m1" subfolder containing a modified copy of this Dockerfile that works on > mac m1s. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28618) The hadolint check in nightly build is broken
Duo Zhang created HBASE-28618: - Summary: The hadolint check in nightly build is broken Key: HBASE-28618 URL: https://issues.apache.org/jira/browse/HBASE-28618 Project: HBase Issue Type: Bug Components: scripts Reporter: Duo Zhang https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/General_20Nightly_20Build_20Report/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-28614) Introduce a field to display whether the snapshot is expired
[ https://issues.apache.org/jira/browse/HBASE-28614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] guluo reassigned HBASE-28614: - Assignee: guluo > Introduce a field to display whether the snapshot is expired > > > Key: HBASE-28614 > URL: https://issues.apache.org/jira/browse/HBASE-28614 > Project: HBase > Issue Type: Improvement > Components: shell, snapshots, UI > Environment: hbase master >Reporter: guluo >Assignee: guluo >Priority: Minor > Labels: pull-request-available > > HBase supports creating snapshots with a TTL, and expired snapshots are > periodically deleted. > This period is 30 min by default, as follows. > {code:java} > private static final String SNAPSHOT_CLEANER_INTERVAL = > "hbase.master.cleaner.snapshot.interval"; > private static final int SNAPSHOT_CLEANER_DEFAULT_INTERVAL = 1800 * 1000; // > Default 30 min {code} > > Therefore, the following situation may occur: > an expired snapshot may still exist on the cluster for a period of time, > and is not deleted until the next run of the periodic cleaner thread. > So sometimes we may use an expired snapshot because we do not know whether > the snapshot is expired. > > So, I think we can introduce an "expired" field for this situation in the HBase UI. > And in the hbase shell, add the snapshot TTL info and display "expired" if the > snapshot has already expired. > Or any better suggestions? Thanks a lot! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28614) Introduce a field to display whether the snapshot is expired
[ https://issues.apache.org/jira/browse/HBASE-28614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28614: --- Labels: pull-request-available (was: ) > Introduce a field to display whether the snapshot is expired > > > Key: HBASE-28614 > URL: https://issues.apache.org/jira/browse/HBASE-28614 > Project: HBase > Issue Type: Improvement > Components: shell, snapshots, UI > Environment: hbase master >Reporter: guluo >Priority: Minor > Labels: pull-request-available > > HBase supports creating snapshots with a TTL, and expired snapshots are > periodically deleted. > This period is 30 min by default, as follows. > {code:java} > private static final String SNAPSHOT_CLEANER_INTERVAL = > "hbase.master.cleaner.snapshot.interval"; > private static final int SNAPSHOT_CLEANER_DEFAULT_INTERVAL = 1800 * 1000; // > Default 30 min {code} > > Therefore, the following situation may occur: > an expired snapshot may still exist on the cluster for a period of time, > and is not deleted until the next run of the periodic cleaner thread. > So sometimes we may use an expired snapshot because we do not know whether > the snapshot is expired. > > So, I think we can introduce an "expired" field for this situation in the HBase UI. > And in the hbase shell, add the snapshot TTL info and display "expired" if the > snapshot has already expired. > Or any better suggestions? Thanks a lot! -- This message was sent by Atlassian Jira (v8.20.10#820010)
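The expiry check the proposed UI/shell column would need can be sketched as follows. This is a minimal illustration, assuming a snapshot exposes its creation time in milliseconds and its TTL in seconds; the SnapshotInfo holder and isExpired method are hypothetical names, not HBase's actual snapshot API:

```java
public final class SnapshotExpiryCheck {

    // Hypothetical holder for the two fields the check needs; the real
    // snapshot description carries equivalent values.
    record SnapshotInfo(long creationTimeMs, long ttlSeconds) {}

    /**
     * A snapshot is expired when it has a positive TTL and
     * creationTime + TTL has passed. TTL <= 0 means "never expires".
     */
    static boolean isExpired(SnapshotInfo s, long nowMs) {
        if (s.ttlSeconds() <= 0) {
            return false;
        }
        return s.creationTimeMs() + s.ttlSeconds() * 1000L < nowMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        SnapshotInfo fresh = new SnapshotInfo(now, 3600);              // 1h TTL, just taken
        SnapshotInfo stale = new SnapshotInfo(now - 7_200_000L, 3600); // taken 2h ago
        SnapshotInfo forever = new SnapshotInfo(now - 7_200_000L, 0);  // no TTL
        System.out.println(isExpired(fresh, now));   // false
        System.out.println(isExpired(stale, now));   // true
        System.out.println(isExpired(forever, now)); // false
    }
}
```

Because the cleaner only runs every 30 minutes by default, a snapshot can sit in the "expired but not yet deleted" state for up to that interval, which is exactly the window the proposed field would make visible.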
[jira] [Commented] (HBASE-28425) Allow specify cluster key without zookeeper in replication
[ https://issues.apache.org/jira/browse/HBASE-28425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849492#comment-17849492 ] Hudson commented on HBASE-28425: Results for branch master [build #1080 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Allow specify cluster key without zookeeper in replication > -- > > Key: HBASE-28425 > URL: https://issues.apache.org/jira/browse/HBASE-28425 > Project: HBase > Issue Type: Improvement > Components: Replication, Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > When reviewing the usage of zookeeper in HBase, I found out that, we still > rely on zookeeper when specifying the cluster key when setting up > replication. If we want to completely hide zookeeper from outside a cluster, > we should also remove this cluster key. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28615) Bump requests from 2.31.0 to 2.32.2 in /dev-support/git-jira-release-audit
[ https://issues.apache.org/jira/browse/HBASE-28615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849491#comment-17849491 ] Hudson commented on HBASE-28615: Results for branch master [build #1080 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1080/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bump requests from 2.31.0 to 2.32.2 in /dev-support/git-jira-release-audit > -- > > Key: HBASE-28615 > URL: https://issues.apache.org/jira/browse/HBASE-28615 > Project: HBase > Issue Type: Task > Components: dependabot, scripts, security >Reporter: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28425) Allow specify cluster key without zookeeper in replication
[ https://issues.apache.org/jira/browse/HBASE-28425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849490#comment-17849490 ] Hudson commented on HBASE-28425: Results for branch branch-3 [build #213 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/213/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/213/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/213/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/213/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Allow specify cluster key without zookeeper in replication > -- > > Key: HBASE-28425 > URL: https://issues.apache.org/jira/browse/HBASE-28425 > Project: HBase > Issue Type: Improvement > Components: Replication, Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > When reviewing the usage of zookeeper in HBase, I found out that, we still > rely on zookeeper when specifying the cluster key when setting up > replication. If we want to completely hide zookeeper from outside a cluster, > we should also remove this cluster key. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28616: -- Status: Patch Available (was: Open) > Remove/Deprecated the rs.* related configuration in TableOutputFormat > - > > Key: HBASE-28616 > URL: https://issues.apache.org/jira/browse/HBASE-28616 > Project: HBase > Issue Type: Task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
[ https://issues.apache.org/jira/browse/HBASE-28616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28616: --- Labels: pull-request-available (was: ) > Remove/Deprecated the rs.* related configuration in TableOutputFormat > - > > Key: HBASE-28616 > URL: https://issues.apache.org/jira/browse/HBASE-28616 > Project: HBase > Issue Type: Task > Components: mapreduce >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28474. --- Resolution: Fixed Done. > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 (/) > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28471) Release 2.4.18
[ https://issues.apache.org/jira/browse/HBASE-28471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28471. --- Resolution: Fixed Done. > Release 2.4.18 > -- > > Key: HBASE-28471 > URL: https://issues.apache.org/jira/browse/HBASE-28471 > Project: HBase > Issue Type: Umbrella > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > The 2.6.0 release vote is ongoing, let's release 2.4.18 and mark 2.4.x as EOL. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849451#comment-17849451 ] Duo Zhang edited comment on HBASE-28474 at 5/25/24 2:54 PM: # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) (/) # Push tag 2.4.18RCx as tag rel/2.4.18 (/) # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email (/) was (Author: apache9): # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) (/) # Push tag 2.4.18RCx as tag rel/2.4.18 (/) # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 (/) > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28617) Add trademark statement in footer on our website
Duo Zhang created HBASE-28617: - Summary: Add trademark statement in footer on our website Key: HBASE-28617 URL: https://issues.apache.org/jira/browse/HBASE-28617 Project: HBase Issue Type: Task Components: website Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28616) Remove/Deprecated the rs.* related configuration in TableOutputFormat
Duo Zhang created HBASE-28616: - Summary: Remove/Deprecated the rs.* related configuration in TableOutputFormat Key: HBASE-28616 URL: https://issues.apache.org/jira/browse/HBASE-28616 Project: HBase Issue Type: Task Components: mapreduce Reporter: Duo Zhang Assignee: Duo Zhang Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28473) Add 2.4.18 to download page
[ https://issues.apache.org/jira/browse/HBASE-28473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28473. --- Fix Version/s: 4.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Merged to master. Thanks [~sunxin] for reviewing! > Add 2.4.18 to download page > --- > > Key: HBASE-28473 > URL: https://issues.apache.org/jira/browse/HBASE-28473 > Project: HBase > Issue Type: Sub-task > Components: website >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849451#comment-17849451 ] Duo Zhang edited comment on HBASE-28474 at 5/25/24 12:55 PM: - # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) (/) # Push tag 2.4.18RCx as tag rel/2.4.18 (/) # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email was (Author: apache9): # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 (/) # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 (/) > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28591) Backport HBASE-26123 Restore fields dropped by HBASE-25986 to public interfaces
[ https://issues.apache.org/jira/browse/HBASE-28591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849454#comment-17849454 ] Duo Zhang commented on HBASE-28591: --- Why the parent issue was only committed to branch-2.4? And for deprecation, we should mention in which version we plan to remove it. Thanks. > Backport HBASE-26123 Restore fields dropped by HBASE-25986 to public > interfaces > --- > > Key: HBASE-28591 > URL: https://issues.apache.org/jira/browse/HBASE-28591 > Project: HBase > Issue Type: Sub-task >Reporter: Szucs Villo >Assignee: Szucs Villo >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849451#comment-17849451 ] Duo Zhang edited comment on HBASE-28474 at 5/25/24 10:39 AM: - # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 (/) # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email was (Author: apache9): # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 (/) > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-28474: - Assignee: Duo Zhang > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 (/) > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28474: -- Description: # Release the artifacts on repository.apache.org # Move the binaries from dist-dev to dist-release # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 (/) # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 # Add release data on https://reporter.apache.org/addrelease.html?hbase # Send announcement email was: # Release the artifacts on repository.apache.org # Move the binaries from dist-dev to dist-release # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 # Add release data on https://reporter.apache.org/addrelease.html?hbase # Send announcement email > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 (/) > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28473) Add 2.4.18 to download page
[ https://issues.apache.org/jira/browse/HBASE-28473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28473: --- Labels: pull-request-available (was: ) > Add 2.4.18 to download page > --- > > Key: HBASE-28473 > URL: https://issues.apache.org/jira/browse/HBASE-28473 > Project: HBase > Issue Type: Sub-task > Components: website >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849451#comment-17849451 ] Duo Zhang commented on HBASE-28474: --- # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HBASE-28474) Finish 2.4.18 release
[ https://issues.apache.org/jira/browse/HBASE-28474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849451#comment-17849451 ] Duo Zhang edited comment on HBASE-28474 at 5/25/24 10:29 AM: - # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 (/) # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email was (Author: apache9): # Release the artifacts on repository.apache.org (/) # Move the binaries from dist-dev to dist-release (/) # Add xml to download page(via HBASE-28473) # Push tag 2.4.18RCx as tag rel/2.4.18 # Release 2.4.18 on JIRA https://issues.apache.org/jira/projects/HBASE/versions/12353080 # Add release data on https://reporter.apache.org/addrelease.html?hbase (/) # Send announcement email > Finish 2.4.18 release > - > > Key: HBASE-28474 > URL: https://issues.apache.org/jira/browse/HBASE-28474 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Priority: Major > > # Release the artifacts on repository.apache.org > # Move the binaries from dist-dev to dist-release > # Add xml to download page(via HBASE-28473) > # Push tag 2.4.18RCx as tag rel/2.4.18 > # Release 2.4.18 on JIRA > https://issues.apache.org/jira/projects/HBASE/versions/12353080 > # Add release data on https://reporter.apache.org/addrelease.html?hbase > # Send announcement email -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HBASE-28473) Add 2.4.18 to download page
[ https://issues.apache.org/jira/browse/HBASE-28473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-28473: - Assignee: Duo Zhang > Add 2.4.18 to download page > --- > > Key: HBASE-28473 > URL: https://issues.apache.org/jira/browse/HBASE-28473 > Project: HBase > Issue Type: Sub-task > Components: website >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Work started] (HBASE-28473) Add 2.4.18 to download page
[ https://issues.apache.org/jira/browse/HBASE-28473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-28473 started by Duo Zhang. - > Add 2.4.18 to download page > --- > > Key: HBASE-28473 > URL: https://issues.apache.org/jira/browse/HBASE-28473 > Project: HBase > Issue Type: Sub-task > Components: website >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28472) Put up 2.4.18RC0
[ https://issues.apache.org/jira/browse/HBASE-28472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-28472. --- Resolution: Fixed Done. > Put up 2.4.18RC0 > > > Key: HBASE-28472 > URL: https://issues.apache.org/jira/browse/HBASE-28472 > Project: HBase > Issue Type: Sub-task > Components: community >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28473) Add 2.4.18 to download page
[ https://issues.apache.org/jira/browse/HBASE-28473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-28473: -- Component/s: website > Add 2.4.18 to download page > --- > > Key: HBASE-28473 > URL: https://issues.apache.org/jira/browse/HBASE-28473 > Project: HBase > Issue Type: Sub-task > Components: website >Reporter: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28612) replication quota leak when peer changes
[ https://issues.apache.org/jira/browse/HBASE-28612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wangxin updated HBASE-28612: Description: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop when peer changes. Not cleaning buffer usage. When the amount of data written to the table in the peer is relatively large, the quota is already full and has not been released, resulting in the WAL reader being unable to read new data. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B was: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop when peer changes. Not cleaning buffer usage. When the amount of data written to the table in the peer is relatively large, the quota is already full and has not been released, resulting in the WAL reader being unable to read new data. The log is as follows: 2024-05-20 20:00:00,796 WARN [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer usage. Shipper alive: peer1; Reader alive: false 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as buffer usage 268435456B exceeds limit 268435456B > replication quota leak when peer changes > > > Key: HBASE-28612 > URL: https://issues.apache.org/jira/browse/HBASE-28612 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: wangxin >Priority: Major > Labels: pull-request-available > > Shipper clearWALEntryBatch method timed out whilst waiting reader/shipper > thread to stop when peer changes. Not cleaning buffer usage. When the amount > of data written to the table in the peer is relatively large, the quota is > already full and has not been released, resulting in the WAL reader being > unable to read new data. > The log is as follows: > 2024-05-20 20:00:00,796 WARN > [RpcServer.default.FPRWQ.Fifo.read.handler=70,queue=1,port=16020] > regionserver.ReplicationSourceShipper: Shipper clearWALEntryBatch method > timed out whilst waiting reader/shipper thread to stop. Not cleaning buffer > usage. Shipper alive: peer1; Reader alive: false > 2024-05-20 20:00:01,351 WARN peer=peer1, can't read more edits from WAL as > buffer usage 268435456B exceeds limit 268435456B > -- This message was sent by Atlassian Jira (v8.20.10#820010)
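The leak pattern described above can be modeled as shared buffer-usage accounting whose release step is skipped on a timed-out shutdown. This is an illustrative sketch, not HBase's actual replication source code; the names are assumptions, and the 256 MB limit simply mirrors the 268435456B figure in the quoted log:

```java
import java.util.concurrent.atomic.AtomicLong;

public final class BufferQuotaSketch {

    // Cluster-wide usage counter shared by all replication sources
    // (stands in for the real total-buffer-used accounting).
    static final AtomicLong totalBufferUsed = new AtomicLong();
    static final long QUOTA = 256L * 1024 * 1024; // 256 MB, as in the log

    /** Reader side: only read more WAL edits if the batch fits the quota. */
    static boolean tryAcquire(long batchSize) {
        long usage = totalBufferUsed.addAndGet(batchSize);
        if (usage > QUOTA) {
            totalBufferUsed.addAndGet(-batchSize); // roll back; reader stalls
            return false;
        }
        return true;
    }

    /** Cleanup side: must always run, or the usage leaks forever. */
    static void release(long batchSize) {
        totalBufferUsed.addAndGet(-batchSize);
    }

    public static void main(String[] args) {
        long batch = 200L * 1024 * 1024;
        tryAcquire(batch);
        // Peer removed here. If clearWALEntryBatch times out waiting for the
        // reader/shipper threads and skips release(batch), the 200 MB stays
        // counted and later large batches are refused forever:
        System.out.println(tryAcquire(100L * 1024 * 1024)); // false: quota "leaked"
        release(batch); // guaranteeing release even on timeout removes the leak
        System.out.println(tryAcquire(100L * 1024 * 1024)); // true
    }
}
```

The symptom in the report matches this model: once the counter reaches the limit without a matching release, every subsequent read logs "buffer usage ... exceeds limit" and replication stalls.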