[jira] [Reopened] (HBASE-27904) A random data generator tool leveraging bulk load.

2023-07-26 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reopened HBASE-27904:
--

Re-opening for the branch-2 backport.

> A random data generator tool leveraging bulk load.
> --
>
> Key: HBASE-27904
> URL: https://issues.apache.org/jira/browse/HBASE-27904
> Project: HBase
>  Issue Type: New Feature
>  Components: util
>Reporter: Himanshu Gwalani
>Assignee: Himanshu Gwalani
>Priority: Major
> Fix For: 3.0.0-beta-1
>
>
> As of now, there is no data generator tool in HBase that leverages bulk load. 
> Since bulk load skips the client write path, it is much faster to generate data 
> this way for load/performance tests where client writes are not required.
> {*}Example{*}: any tooling over HBase that needs x TBs of an HBase table for load 
> testing.
> {*}Requirements{*}:
> 1. The tooling should generate RANDOM data on the fly and should not require any 
> pre-generated data (e.g., CSV/XML files) as input.
> 2. The tooling should support pre-split tables (the number of splits is taken as 
> input).
> 3. Data should be UNIFORMLY distributed across all regions of the table.
> *High-level Steps*
> 1. A table will be created (pre-split, with the number of splits taken as input).
> 2. The mapper of a custom MapReduce job will generate random key-value pairs 
> and ensure that they are uniformly distributed across all regions of the table 
> (see the sketch after this description).
> 3. 
> [HFileOutputFormat2|https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java]
>  will be used to add a reducer to the MR job and create HFiles from the 
> key-value pairs generated by the mappers. 
> 4. Bulk load those HFiles into the respective regions of the table using 
> [LoadIncrementalHFiles|https://hbase.apache.org/2.2/devapidocs/org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.html].
> *Results*
> We built a POC of this tool in our organization and tested it on an 11-node 
> HBase cluster (running HBase + Hadoop services). The tool 
> generated:
> 1. *100* *GB* of data in *6 minutes*
> 2. *340 GB* of data in *13 minutes*
> 3. *3.5 TB* of data in *3 hours and 10 minutes*
> *Usage*
> hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool 
> -mapper-count 100 -table TEST_TABLE_1 -rows-per-mapper 100 -split-count 
> 100 -delete-if-exist -table-options "NORMALIZATION_ENABLED=false"
>  
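For illustration, here is a minimal sketch (not the tool's actual code) of how a
mapper can achieve the uniform distribution described in step 2, assuming the
table is pre-split on fixed-width numeric row-key prefixes; the variable names
mirror the CLI options above but are assumptions:

    import java.util.Random;
    import org.apache.hadoop.hbase.util.Bytes;

    public class UniformKeySketch {
      public static void main(String[] args) {
        int splitCount = 100;      // assumed to match -split-count
        long rowsPerMapper = 100;  // assumed to match -rows-per-mapper
        Random random = new Random();
        for (long i = 0; i < rowsPerMapper; i++) {
          // Picking the region prefix uniformly at random spreads rows evenly
          // across all pre-split regions of the table.
          int regionPrefix = random.nextInt(splitCount);
          String rowKey = String.format("%05d-%016x", regionPrefix, random.nextLong());
          byte[] key = Bytes.toBytes(rowKey);
          // A real mapper would emit this key-value pair to HFileOutputFormat2.
          System.out.println(Bytes.toString(key));
        }
      }
    }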



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27904) A random data generator tool leveraging bulk load.

2023-07-26 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-27904.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> A random data generator tool leveraging bulk load.
> --
>
> Key: HBASE-27904
> URL: https://issues.apache.org/jira/browse/HBASE-27904
> Project: HBase
>  Issue Type: New Feature
>  Components: util
>Reporter: Himanshu Gwalani
>Assignee: Himanshu Gwalani
>Priority: Major
> Fix For: 2.6.0, 3.0.0-beta-1
>
>
> As of now, there is no data generator tool in HBase that leverages bulk load. 
> Since bulk load skips the client write path, it is much faster to generate data 
> this way for load/performance tests where client writes are not required.
> {*}Example{*}: any tooling over HBase that needs x TBs of an HBase table for load 
> testing.
> {*}Requirements{*}:
> 1. The tooling should generate RANDOM data on the fly and should not require any 
> pre-generated data (e.g., CSV/XML files) as input.
> 2. The tooling should support pre-split tables (the number of splits is taken as 
> input).
> 3. Data should be UNIFORMLY distributed across all regions of the table.
> *High-level Steps*
> 1. A table will be created (pre-split, with the number of splits taken as input).
> 2. The mapper of a custom MapReduce job will generate random key-value pairs 
> and ensure that they are uniformly distributed across all regions of the table.
> 3. 
> [HFileOutputFormat2|https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java]
>  will be used to add a reducer to the MR job and create HFiles from the 
> key-value pairs generated by the mappers. 
> 4. Bulk load those HFiles into the respective regions of the table using 
> [LoadIncrementalHFiles|https://hbase.apache.org/2.2/devapidocs/org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.html].
> *Results*
> We built a POC of this tool in our organization and tested it on an 11-node 
> HBase cluster (running HBase + Hadoop services). The tool 
> generated:
> 1. *100* *GB* of data in *6 minutes*
> 2. *340 GB* of data in *13 minutes*
> 3. *3.5 TB* of data in *3 hours and 10 minutes*
> *Usage*
> hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool 
> -mapper-count 100 -table TEST_TABLE_1 -rows-per-mapper 100 -split-count 
> 100 -delete-if-exist -table-options "NORMALIZATION_ENABLED=false"
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] virajjasani merged pull request #5294: HBASE-27904: A random data generator tool leveraging hbase bulk load

2023-07-26 Thread via GitHub


virajjasani merged PR #5294:
URL: https://github.com/apache/hbase/pull/5294


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (HBASE-27805) The chunk created by mslab may cause memory fragmentation and lead to full GC

2023-07-26 Thread Zheng Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747710#comment-17747710
 ] 

Zheng Wang edited comment on HBASE-27805 at 7/27/23 2:07 AM:
-

In this issue, we just updated the doc and provided a workaround.


was (Author: filtertip):
In this issue, we just updated the documentation and provided a workaround.

> The chunk created by mslab may cause memory fragmentation and lead to full GC
> 
>
> Key: HBASE-27805
> URL: https://issues.apache.org/jira/browse/HBASE-27805
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Attachments: chunksize-2047k.png, chunksize-2048k-fullgc.png
>
>
> The default chunk size is 2 MB. When we use G1 and heapRegionSize equals 4 MB, 
> these chunks are allocated as humongous objects, each exclusively occupying one 
> region, so the remaining 2 MB of each such region becomes a memory fragment.
> Lots of memory fragments may lead to a full GC even when the percentage of used 
> heap is not high.
> I tested reducing the chunk size to 2047 KB (2 MB - 1 KB, a bit less than half 
> of heapRegionSize), and the problem above did not recur.
> BTW, in G1, humongous objects are objects larger than or equal to half the size 
> of a region, and heapRegionSize is calculated automatically from the heap size 
> parameter if not explicitly specified.
>  
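To make the workaround concrete, a minimal sketch of lowering the chunk size
below half of a 4 MB G1 region; `hbase.hregion.memstore.mslab.chunksize` is the
MSLAB chunk-size property in recent HBase releases, but verify the key against
your version (in practice it would be set in hbase-site.xml rather than in
code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MslabChunkSizeSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // The default chunk size is 2 MB (2097152 bytes). With -XX:G1HeapRegionSize=4m,
        // a 2 MB allocation is humongous (>= half a region); 2047 KB stays just under.
        conf.setInt("hbase.hregion.memstore.mslab.chunksize", 2047 * 1024);
        System.out.println(conf.getInt("hbase.hregion.memstore.mslab.chunksize", -1));
      }
    }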



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27805) The chunk created by mslab may cause memory fragmentation and lead to full GC

2023-07-26 Thread Zheng Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747710#comment-17747710
 ] 

Zheng Wang commented on HBASE-27805:


In this issue, we just updated the documentation and provided a workaround.

> The chunk created by mslab may cause memory fragmentation and lead to full GC
> 
>
> Key: HBASE-27805
> URL: https://issues.apache.org/jira/browse/HBASE-27805
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Attachments: chunksize-2047k.png, chunksize-2048k-fullgc.png
>
>
> The default chunk size is 2 MB. When we use G1 and heapRegionSize equals 4 MB, 
> these chunks are allocated as humongous objects, each exclusively occupying one 
> region, so the remaining 2 MB of each such region becomes a memory fragment.
> Lots of memory fragments may lead to a full GC even when the percentage of used 
> heap is not high.
> I tested reducing the chunk size to 2047 KB (2 MB - 1 KB, a bit less than half 
> of heapRegionSize), and the problem above did not recur.
> BTW, in G1, humongous objects are objects larger than or equal to half the size 
> of a region, and heapRegionSize is calculated automatically from the heap size 
> parameter if not explicitly specified.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27805) The chunk created by mslab may cause memory fragmentation and lead to full GC

2023-07-26 Thread Zheng Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Wang resolved HBASE-27805.

Resolution: Fixed

> The chunk created by mslab may cause memory fragmentation and lead to full GC
> 
>
> Key: HBASE-27805
> URL: https://issues.apache.org/jira/browse/HBASE-27805
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Attachments: chunksize-2047k.png, chunksize-2048k-fullgc.png
>
>
> The default chunk size is 2 MB. When we use G1 and heapRegionSize equals 4 MB, 
> these chunks are allocated as humongous objects, each exclusively occupying one 
> region, so the remaining 2 MB of each such region becomes a memory fragment.
> Lots of memory fragments may lead to a full GC even when the percentage of used 
> heap is not high.
> I tested reducing the chunk size to 2047 KB (2 MB - 1 KB, a bit less than half 
> of heapRegionSize), and the problem above did not recur.
> BTW, in G1, humongous objects are objects larger than or equal to half the size 
> of a region, and heapRegionSize is calculated automatically from the heap size 
> parameter if not explicitly specified.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27805) The chunk created by mslab may cause memory fragmentation and lead to full GC

2023-07-26 Thread Zheng Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Wang updated HBASE-27805:
---
Component/s: documentation
 (was: regionserver)

> The chunk created by mslab may cause memory fragmentation and lead to full GC
> 
>
> Key: HBASE-27805
> URL: https://issues.apache.org/jira/browse/HBASE-27805
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Attachments: chunksize-2047k.png, chunksize-2048k-fullgc.png
>
>
> The default chunk size is 2 MB. When we use G1 and heapRegionSize equals 4 MB, 
> these chunks are allocated as humongous objects, each exclusively occupying one 
> region, so the remaining 2 MB of each such region becomes a memory fragment.
> Lots of memory fragments may lead to a full GC even when the percentage of used 
> heap is not high.
> I tested reducing the chunk size to 2047 KB (2 MB - 1 KB, a bit less than half 
> of heapRegionSize), and the problem above did not recur.
> BTW, in G1, humongous objects are objects larger than or equal to half the size 
> of a region, and heapRegionSize is calculated automatically from the heap size 
> parameter if not explicitly specified.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


bbeaudreault commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275492233


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -179,9 +198,12 @@ public void map(ImmutableBytesWritable row, final Result value, Context context)
 if (versions >= 0) {
   scan.readVersions(versions);
 }
+int reCompareThreads = conf.getInt(NAME + ".reCompareThreads", 1);
+reCompareExecutor = buildReCompareExecutor(reCompareThreads, context);

Review Comment:
   So, in my experience, typically you'd put all of the setup code in a 
`setup(Context context)` method. That would be called just once at startup 
before any map calls are made (there's also a setup for reducers).
   
   But here, for whatever reason, they didn't do that. But they effectively 
have the same thing -- this code is in a block wrapped by `if 
(replicatedScanner == null) {`. So everything from here down to the start of 
the `while(true)` below will only be executed once.
   
   Perhaps more directly to your question, cleanup is the inverse of setup. So 
it only happens at the end of the mapper, when all inputs have been processed 
or if the job fails before that. So we shouldn't have a case where cleanup is 
called and then map is called again, meaning it's OK to keep the executor 
creation here.
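
   For reference, a minimal sketch of the Mapper lifecycle described above (the
key/value types and the executor are illustrative assumptions): setup runs once
per task before the first map call, and cleanup runs once after the last.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class LifecycleSketch extends Mapper<LongWritable, Text, Text, Text> {
      private ExecutorService executor;

      @Override
      protected void setup(Context context) {
        // Called once per mapper task, before any map() call.
        executor = Executors.newFixedThreadPool(4);
      }

      @Override
      protected void map(LongWritable key, Text value, Context context) {
        // Called once per input record; the executor already exists here.
      }

      @Override
      protected void cleanup(Context context) {
        // Called once per mapper task, after the last map() call.
        executor.shutdown();
      }
    }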



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


bbeaudreault commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275487795


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplicationRecompareRunnable.java:
##
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce.replication;
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class VerifyReplicationRecompareRunnable implements Runnable {
+
+  private static final Logger LOG =
+LoggerFactory.getLogger(VerifyReplicationRecompareRunnable.class);
+
+  private final Mapper.Context context;
+  private final VerifyReplication.Verifier.Counters originalCounter;
+  private final String delimiter;
+  private final byte[] row;
+  private final Scan tableScan;
+  private final Table sourceTable;
+  private final Table replicatedTable;
+
+  private final int reCompareTries;
+  private final int sleepMsBeforeReCompare;
+  private final int reCompareBackoffExponent;
+  private final boolean verbose;
+
+  private Result sourceResult;
+  private Result replicatedResult;
+
+  public VerifyReplicationRecompareRunnable(Mapper.Context context, Result sourceResult,
+    Result replicatedResult, VerifyReplication.Verifier.Counters originalCounter, String delimiter,
+    Scan tableScan, Table sourceTable, Table replicatedTable, int reCompareTries,
+    int sleepMsBeforeReCompare, int reCompareBackoffExponent, boolean verbose) {
+this.context = context;
+this.sourceResult = sourceResult;
+this.replicatedResult = replicatedResult;
+this.originalCounter = originalCounter;
+this.delimiter = delimiter;
+this.tableScan = tableScan;
+this.sourceTable = sourceTable;
+this.replicatedTable = replicatedTable;
+this.reCompareTries = reCompareTries;
+this.sleepMsBeforeReCompare = sleepMsBeforeReCompare;
+this.reCompareBackoffExponent = reCompareBackoffExponent;
+this.verbose = verbose;
+this.row = VerifyReplication.getRow(sourceResult, replicatedResult);
+  }
+
+  @Override
+  public void run() {
+Get get = new Get(row);
+get.setCacheBlocks(tableScan.getCacheBlocks());
+get.setFilter(tableScan.getFilter());
+
+int sleepMs = sleepMsBeforeReCompare;
+int tries = 0;
+
+while (++tries <= reCompareTries) {
+      context.getCounter(VerifyReplication.Verifier.Counters.RE_COMPARES).increment(1);
+
+  try {
+Thread.sleep(sleepMs);
+  } catch (InterruptedException e) {
+LOG.warn("Sleeping interrupted, incrementing bad rows and aborting");
+incrementOriginalAndBadCounter();
+Thread.currentThread().interrupt();

Review Comment:
   I think it should be both. My reasoning is partially driven by our own usage 
of the job -- our wrapper checks the value of BADROWS and fails if it is > 0. Of 
course, we can also check for failed recompares, but I think there's something 
nice about having BADROWS be the conclusive determiner of whether there is 
anything to dig into. The other counters are more for extra context.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


Apache-HBase commented on PR #5332:
URL: https://github.com/apache/hbase/pull/5332#issuecomment-1652452623

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 37s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 16s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 15s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  7s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 31s |  hbase-protocol-shaded in the patch 
passed.  |
   | -1 :x: |  unit  |   6m 41s |  hbase-client in the patch failed.  |
   | -1 :x: |  unit  |  10m  4s |  hbase-server in the patch failed.  |
   | -1 :x: |  unit  |  15m 50s |  hbase-mapreduce in the patch failed.  |
   | -1 :x: |  unit  |   1m 32s |  hbase-thrift in the patch failed.  |
   | -1 :x: |  unit  |   2m 21s |  hbase-endpoint in the patch failed.  |
   |  |   |  66m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5332 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 85a79f7f1b9c 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / cfa3f13b5d |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-client.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-mapreduce.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-thrift.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-endpoint.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/testReport/
 |
   | Max. process+thread count | 3122 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-mapreduce hbase-thrift hbase-endpoint U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


Apache-HBase commented on PR #5332:
URL: https://github.com/apache/hbase/pull/5332#issuecomment-1652442435

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 42s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 43s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 45s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 45s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 41s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  hbase-protocol-shaded in the patch 
passed.  |
   | -1 :x: |  unit  |   6m  6s |  hbase-client in the patch failed.  |
   | -1 :x: |  unit  |  11m 30s |  hbase-server in the patch failed.  |
   | -1 :x: |  unit  |   9m 37s |  hbase-mapreduce in the patch failed.  |
   | -1 :x: |  unit  |   1m 32s |  hbase-thrift in the patch failed.  |
   | -1 :x: |  unit  |   2m  2s |  hbase-endpoint in the patch failed.  |
   |  |   |  58m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5332 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 81937d008218 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / cfa3f13b5d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-client.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-mapreduce.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-thrift.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-endpoint.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/testReport/
 |
   | Max. process+thread count | 3883 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-mapreduce hbase-thrift hbase-endpoint U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


Apache-HBase commented on PR #5332:
URL: https://github.com/apache/hbase/pull/5332#issuecomment-1652434316

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   5m 16s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 39s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 39s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   5m 41s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  0s |  the patch passed  |
   | +1 :green_heart: |  cc  |   5m  0s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m  0s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  13m 40s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.2.4 3.3.5.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m 53s |  the patch passed  |
   | +1 :green_heart: |  spotless  |   0m 39s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   6m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 43s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5332 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux c06468757652 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 
24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / cfa3f13b5d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 82 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-mapreduce hbase-thrift hbase-endpoint U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] hgromer commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


hgromer commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275418457


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplicationRecompareRunnable.java:
##
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce.replication;
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class VerifyReplicationRecompareRunnable implements Runnable {
+
+  private static final Logger LOG =
+LoggerFactory.getLogger(VerifyReplicationRecompareRunnable.class);
+
+  private final Mapper.Context context;
+  private final VerifyReplication.Verifier.Counters originalCounter;
+  private final String delimiter;
+  private final byte[] row;
+  private final Scan tableScan;
+  private final Table sourceTable;
+  private final Table replicatedTable;
+
+  private final int reCompareTries;
+  private final int sleepMsBeforeReCompare;
+  private final int reCompareBackoffExponent;
+  private final boolean verbose;
+
+  private Result sourceResult;
+  private Result replicatedResult;
+
+  public VerifyReplicationRecompareRunnable(Mapper.Context context, Result sourceResult,
+    Result replicatedResult, VerifyReplication.Verifier.Counters originalCounter, String delimiter,
+    Scan tableScan, Table sourceTable, Table replicatedTable, int reCompareTries,
+    int sleepMsBeforeReCompare, int reCompareBackoffExponent, boolean verbose) {
+this.context = context;
+this.sourceResult = sourceResult;
+this.replicatedResult = replicatedResult;
+this.originalCounter = originalCounter;
+this.delimiter = delimiter;
+this.tableScan = tableScan;
+this.sourceTable = sourceTable;
+this.replicatedTable = replicatedTable;
+this.reCompareTries = reCompareTries;
+this.sleepMsBeforeReCompare = sleepMsBeforeReCompare;
+this.reCompareBackoffExponent = reCompareBackoffExponent;
+this.verbose = verbose;
+this.row = VerifyReplication.getRow(sourceResult, replicatedResult);
+  }
+
+  @Override
+  public void run() {
+Get get = new Get(row);
+get.setCacheBlocks(tableScan.getCacheBlocks());
+get.setFilter(tableScan.getFilter());
+
+int sleepMs = sleepMsBeforeReCompare;
+int tries = 0;
+
+while (++tries <= reCompareTries) {
+      context.getCounter(VerifyReplication.Verifier.Counters.RE_COMPARES).increment(1);
+
+  try {
+Thread.sleep(sleepMs);
+  } catch (InterruptedException e) {
+LOG.warn("Sleeping interrupted, incrementing bad rows and aborting");
+incrementOriginalAndBadCounter();
+Thread.currentThread().interrupt();

Review Comment:
   Do you think it'd make sense to increment FAILED_RECOMPARE instead of 
BADROWS, or in addition to it? I think doing both might be a bit misleading, but 
I'm not totally convinced one way or the other.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


Apache-HBase commented on PR #5332:
URL: https://github.com/apache/hbase/pull/5332#issuecomment-1652346256

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 28s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 37s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  0s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 30s |  hbase-protocol-shaded in the patch 
passed.  |
   | -1 :x: |  unit  |   6m 41s |  hbase-client in the patch failed.  |
   | -1 :x: |  unit  |  12m 36s |  hbase-server in the patch failed.  |
   | -1 :x: |  unit  |   0m 53s |  hbase-mapreduce in the patch failed.  |
   | -1 :x: |  unit  |   7m 34s |  hbase-thrift in the patch failed.  |
   | -1 :x: |  unit  |   3m 49s |  hbase-endpoint in the patch failed.  |
   |  |   |  58m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5332 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 96146c156c97 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / cfa3f13b5d |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-client.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-mapreduce.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-thrift.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-endpoint.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/testReport/
 |
   | Max. process+thread count | 1483 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-mapreduce hbase-thrift hbase-endpoint U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


Apache-HBase commented on PR #5332:
URL: https://github.com/apache/hbase/pull/5332#issuecomment-1652344493

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 44s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 42s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 43s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 40s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  hbase-protocol-shaded in the patch 
passed.  |
   | -1 :x: |  unit  |   6m  6s |  hbase-client in the patch failed.  |
   | -1 :x: |  unit  |  12m  2s |  hbase-server in the patch failed.  |
   | -1 :x: |  unit  |   0m 51s |  hbase-mapreduce in the patch failed.  |
   | -1 :x: |  unit  |   7m 29s |  hbase-thrift in the patch failed.  |
   | -1 :x: |  unit  |   2m 56s |  hbase-endpoint in the patch failed.  |
   |  |   |  57m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5332 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5bc8c90ffbea 5.4.0-152-generic #169-Ubuntu SMP Tue Jun 6 
22:23:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / cfa3f13b5d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-client.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-mapreduce.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-thrift.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-endpoint.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/testReport/
 |
   | Max. process+thread count | 1961 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-mapreduce hbase-thrift hbase-endpoint U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


Apache-HBase commented on PR #5332:
URL: https://github.com/apache/hbase/pull/5332#issuecomment-1652338570

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   5m  6s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 38s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 40s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   5m 41s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  cc  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  13m 34s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.2.4 3.3.5.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   6m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5332 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux 9657187dad59 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 
24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / cfa3f13b5d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 82 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-mapreduce hbase-thrift hbase-endpoint U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5332/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


bbeaudreault commented on code in PR #5332:
URL: https://github.com/apache/hbase/pull/5332#discussion_r1275354222


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java:
##
@@ -218,6 +221,14 @@ public <CResult> AsyncRequestFuture submit(AsyncProcessTask<CResult> task)
 }
   }
 
+  public Map<String, byte[]> getRequestAttributes() {
+    return requestAttributes;
+  }
+
+  public void setRequestAttributes(Map<String, byte[]> requestAttributes) {
+    this.requestAttributes = requestAttributes;
+  }

Review Comment:
   This is not thread safe, though. Every HTable shares the same AsyncProcess. I 
think you'd add it to the AsyncProcessTask.
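
   A rough sketch of that direction, carrying the attributes on an immutable
per-submission object instead of mutating the shared AsyncProcess; the class
and member names here are illustrative, not the actual AsyncProcessTask API:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: each submission owns an immutable attribute map, so a
    // processor shared by many HTables never holds mutable per-request state.
    final class RequestTaskSketch {
      private final Map<String, byte[]> requestAttributes;

      RequestTaskSketch(Map<String, byte[]> requestAttributes) {
        // A defensive, unmodifiable copy keeps the task safe to share across threads.
        this.requestAttributes =
          Collections.unmodifiableMap(new HashMap<>(requestAttributes));
      }

      Map<String, byte[]> getRequestAttributes() {
        return requestAttributes;
      }
    }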



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] rmdmattingly commented on a diff in pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


rmdmattingly commented on code in PR #5332:
URL: https://github.com/apache/hbase/pull/5332#discussion_r1275341073


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java:
##
@@ -303,7 +327,7 @@ public static CompletableFuture<AsyncConnection> createAsyncConnection(Configura
 try {
   future.complete(
 user.runAs((PrivilegedExceptionAction) () -> ReflectionUtils
-  .newInstance(clazz, conf, registry, clusterId, user)));
+  .newInstance(clazz, conf, registry, clusterId, null, user, connectionAttributes)));

Review Comment:
   The diff looks off here; I will take another look.



##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java:
##
@@ -218,6 +221,14 @@ public <CResult> AsyncRequestFuture submit(AsyncProcessTask<CResult> task)
 }
   }
 
+  public Map<String, byte[]> getRequestAttributes() {
+    return requestAttributes;
+  }
+
+  public void setRequestAttributes(Map<String, byte[]> requestAttributes) {
+    this.requestAttributes = requestAttributes;
+  }

Review Comment:
   This is the part of this changeset that I like least. Ideally the 
AsyncProcess wouldn't be mutable like this, but because of the abnormal way in 
which multigets reuse an already constructed AsyncProcess, I think this is 
necessary.



##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java:
##
@@ -142,7 +142,9 @@ public class BufferedMutatorImpl implements BufferedMutator {
 RpcControllerFactory rpcFactory, BufferedMutatorParams params) {
 this(conn, params,
   // puts need to track errors globally due to how the APIs currently work.
-  new AsyncProcess(conn, conn.getConfiguration(), rpcCallerFactory, rpcFactory));
+  // todo rmattingly support buffered mutator request attributes
+  new AsyncProcess(conn, conn.getConfiguration(), rpcCallerFactory, rpcFactory,
+    Collections.emptyMap()));

Review Comment:
   Still need to do this, will do before I mark as ready for review



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] rmdmattingly commented on a diff in pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


rmdmattingly commented on code in PR #5332:
URL: https://github.com/apache/hbase/pull/5332#discussion_r1275322283


##
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcControllerFactory.java:
##
@@ -91,4 +107,9 @@ public static RpcControllerFactory instantiate(Configuration configuration) {
   return new RpcControllerFactory(configuration);
 }
   }
+
+  public RpcControllerFactory setRequestAttributes(Map<String, byte[]> requestAttributes) {

Review Comment:
   Still need to self-review, but I totally revisited the approach with this 
feedback in mind. Thanks again!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] hgromer commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


hgromer commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275305844


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -179,9 +198,12 @@ public void map(ImmutableBytesWritable row, final Result value, Context context)
 if (versions >= 0) {
   scan.readVersions(versions);
 }
+int reCompareThreads = conf.getInt(NAME + ".reCompareThreads", 1);
+reCompareExecutor = buildReCompareExecutor(reCompareThreads, context);

Review Comment:
   I'm not entirely certain when `cleanup` gets called. Does it make sense to 
only build the reCompareExecutor if it is null? Or do we want to build a new 
executor on every call to `map`? 
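
   If the one-time construction needed to be explicit regardless of where the
block sits, a guarded lazy initializer is one option; this is a sketch under
that assumption, not the patch's code (as noted earlier in this digest, the
surrounding `if (replicatedScanner == null)` block already ensures single
execution):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class LazyExecutorSketch {
      private static ExecutorService reCompareExecutor;

      // Construct the shared executor at most once per task JVM; synchronized
      // guards the static field against concurrent initialization.
      static synchronized ExecutorService getOrCreate(int maxThreads) {
        if (reCompareExecutor == null) {
          reCompareExecutor = Executors.newFixedThreadPool(maxThreads);
        }
        return reCompareExecutor;
      }

      public static void main(String[] args) {
        ExecutorService first = getOrCreate(2);
        ExecutorService second = getOrCreate(2);
        System.out.println(first == second); // true: only one instance is built
        first.shutdown();
      }
    }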



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] hgromer commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


hgromer commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275305133


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -179,9 +198,12 @@ public void map(ImmutableBytesWritable row, final Result value, Context context)
 if (versions >= 0) {
   scan.readVersions(versions);
 }
+int reCompareThreads = conf.getInt(NAME + ".reCompareThreads", 1);
+reCompareExecutor = buildReCompareExecutor(reCompareThreads, context);

Review Comment:
   I'm not entirely certain when `cleanup` gets called. Does it make sense to 
only build the reCompareExecutor if it is null? Or do we want to build a new 
executor on every call to `map`? 
   
   Especially b/c the executor is a static member variable



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] hgromer commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


hgromer commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275304472


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -788,6 +852,23 @@ private static void printUsage(final String errorMsg) {
 + "2181:/cluster-b \\\n" + " TestTable");
   }
 
+  private static ExecutorService buildReCompareExecutor(int maxThreads, Mapper.Context context) {

Review Comment:
   I'm not entirely certain when `cleanup` gets called. Does it make sense to 
only build the reCompareExecutor if it is null? Or do we want to build a new 
executor on every call to `map`?
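   
   For context, a hedged sketch of what a helper with this signature could look 
like (an assumption, not the PR's actual body; a bounded pool with a caller-runs 
fallback is one common choice):
   
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.SynchronousQueue;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;
   
   import org.apache.hadoop.mapreduce.Mapper;
   
   // Hypothetical shape: a bounded pool whose rejection policy runs the
   // recompare on the mapper's own thread, so a slow peer cluster applies
   // backpressure instead of queueing unbounded work. context is unused here.
   private static ExecutorService buildReCompareExecutor(int maxThreads, Mapper.Context context) {
     if (maxThreads <= 0) {
       return null; // recompares stay synchronous on the mapper thread
     }
     return new ThreadPoolExecutor(0, maxThreads, 60L, TimeUnit.SECONDS,
       new SynchronousQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
   }
   ```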





-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] hgromer commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


hgromer commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1275301707


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -240,55 +262,47 @@ public void map(ImmutableBytesWritable row, final Result value, Context context)
 "Good row key: " + delimiter + Bytes.toStringBinary(value.getRow()) + delimiter);
 }
   } catch (Exception e) {
-logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value);
+logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value,
+  currentCompareRowInPeerTable);
   }
   currentCompareRowInPeerTable = replicatedScanner.next();
   break;
 } else if (rowCmpRet < 0) {
   // row only exists in source table
-  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value);
+  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value, null);
   break;
 } else {
   // row only exists in peer table
-  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_PEER_TABLE_ROWS,
+  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_PEER_TABLE_ROWS, null,
 currentCompareRowInPeerTable);
   currentCompareRowInPeerTable = replicatedScanner.next();
 }
   }
 }
 
-private void logFailRowAndIncreaseCounter(Context context, Counters counter, Result row) {
-  if (sleepMsBeforeReCompare > 0) {
-Threads.sleep(sleepMsBeforeReCompare);
-try {
-  Result sourceResult = sourceTable.get(new Get(row.getRow()));
-  Result replicatedResult = replicatedTable.get(new Get(row.getRow()));
-  Result.compareResults(sourceResult, replicatedResult, false);
-  if (!sourceResult.isEmpty()) {
-context.getCounter(Counters.GOODROWS).increment(1);
-if (verbose) {
-  LOG.info("Good row key (with recompare): " + delimiter
-+ Bytes.toStringBinary(row.getRow()) + delimiter);
-}
-  }
-  return;
-} catch (Exception e) {
-  LOG.error("recompare fail after sleep, rowkey=" + delimiter
-+ Bytes.toStringBinary(row.getRow()) + delimiter);
-}
+@SuppressWarnings("FutureReturnValueIgnored")
+private void logFailRowAndIncreaseCounter(Context context, Counters counter, Result row,
+  Result replicatedRow) {
+  if (reCompareTries > 0 && sleepMsBeforeReCompare > 0) {

Review Comment:
   I totally missed this -- yeah we should definitely set the default value to 
1 when parsing the config
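   
   i.e., something along these lines when reading the setting (a sketch; the key 
name follows the pattern of the other options in this diff):
   
   ```java
   // Defaulting to 1 preserves the old behavior for jobs that only set the
   // sleep option: one recompare attempt after the configured sleep.
   int reCompareTries = conf.getInt(NAME + ".recompareTries", 1);
   ```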



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


bbeaudreault commented on code in PR #5332:
URL: https://github.com/apache/hbase/pull/5332#discussion_r1275281402


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java:
##
@@ -124,12 +123,12 @@ public class AsyncConnectionImpl implements AsyncConnection {
   private final ClusterStatusListener clusterStatusListener;
 
   public AsyncConnectionImpl(Configuration conf, ConnectionRegistry registry, String clusterId,
-SocketAddress localAddress, User user) {
-this(conf, registry, clusterId, localAddress, user, Collections.emptyMap());
+User user) {
+this(conf, registry, clusterId, user, Collections.emptyMap());

Review Comment:
   whoops, you're right. i was tricked by trying to just look at the specific 
commits.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] rmdmattingly commented on a diff in pull request #5332: [branch-2] HBASE-27657: Connection and Request Attributes

2023-07-26 Thread via GitHub


rmdmattingly commented on code in PR #5332:
URL: https://github.com/apache/hbase/pull/5332#discussion_r1275210426


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java:
##
@@ -145,8 +144,8 @@ public AsyncConnectionImpl(Configuration conf, ConnectionRegistry registry, Stri
 } else {
   this.metrics = Optional.empty();
 }
-this.rpcClient = RpcClientFactory.createClient(conf, clusterId, localAddress,
-  metrics.orElse(null), connectionAttributes);
+this.rpcClient =
+  RpcClientFactory.createClient(conf, clusterId, metrics.orElse(null), connectionAttributes);

Review Comment:
   see other comment



##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java:
##
@@ -124,12 +123,12 @@ public class AsyncConnectionImpl implements AsyncConnection {
   private final ClusterStatusListener clusterStatusListener;
 
   public AsyncConnectionImpl(Configuration conf, ConnectionRegistry registry, String clusterId,
-SocketAddress localAddress, User user) {
-this(conf, registry, clusterId, localAddress, user, Collections.emptyMap());
+User user) {
+this(conf, registry, clusterId, user, Collections.emptyMap());

Review Comment:
   The diff across commits is sort of confusing because this class is different 
across branch-2 and master. [The branch-2 
version](https://github.com/apache/hbase/blob/branch-2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java#L123-L124)
 does not have a SocketAddress argument. [The master 
version](https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java#L130-L131)
 does. So I think this is the correct changeset for the branch-2 backport



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27451) Setup nightly job for s390x node

2023-07-26 Thread Jonathan Albrecht (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747545#comment-17747545
 ] 

Jonathan Albrecht commented on HBASE-27451:
---

[~zhangduo] the s390x nightly builds are no longer hanging. Thanks again for 
all your help.

The build nodes do seem to be having some performance problems that are 
unrelated to this issue. I think we can close this issue and any performance 
problems can be handled separately.

> Setup nightly job for s390x node
> 
>
> Key: HBASE-27451
> URL: https://issues.apache.org/jira/browse/HBASE-27451
> Project: HBase
>  Issue Type: Sub-task
>  Components: community, jenkins
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5051: HBASE-26874 VerifyReplication recompare async

2023-07-26 Thread via GitHub


bbeaudreault commented on code in PR #5051:
URL: https://github.com/apache/hbase/pull/5051#discussion_r1274933900


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -705,6 +762,7 @@ private static void printUsage(final String errorMsg) {
 }
 System.err.println("Usage: verifyrep [--starttime=X]"
   + " [--endtime=Y] [--families=A] [--row-prefixes=B] [--delimiter=] 
[--recomparesleep=] "

Review Comment:
   I don't love how this is recomparesleep, but your new options are 
reCompareFoo. It would be nice to be consistent, and your new ones are easier 
to read. We can't simply change the job to only accept reCompareSleep, but we 
can update the usage here and update the parsing to support both (so accept 
recomparesleep or reCompareSleep)
   
   when parsing recomparesleep, I would add a LOG.warn that it is deprecated 
and will be removed in 4.0.0, prefer reCompareSleep.
   
   finally, this is a nitpick but `recompare` is a word on its own... so imo it 
should be `recompareThreads`, `recompareTries`, etc... i.e. don't capitalize 
the C.
   
   so in summary:
   
   - update these new args to be `recompareFoo` rather than `reCompareFoo`
   - update the usage line here to be `recompareSleep` rather than 
`recomparesleep`
   - update the arg parsing above to support both, but print a LOG.warn if the 
old one is used
   - the warn should say "--recomparesleep is deprecated and will be removed in 
4.0.0. Use --recompareSleep instead."
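   
   A minimal sketch of that dual-flag parsing (the variable names `cmd` and 
`sleepMsBeforeReCompare` are assumptions based on the tool's existing option 
loop):
   
   ```java
   // Accept both the legacy lowercase flag and the new camelCase flag,
   // warning when the deprecated spelling is used.
   final String oldSleepKey = "--recomparesleep=";
   final String newSleepKey = "--recompareSleep=";
   if (cmd.startsWith(oldSleepKey)) {
     LOG.warn("--recomparesleep is deprecated and will be removed in 4.0.0. "
       + "Use --recompareSleep instead.");
     sleepMsBeforeReCompare = Integer.parseInt(cmd.substring(oldSleepKey.length()));
     continue;
   }
   if (cmd.startsWith(newSleepKey)) {
     sleepMsBeforeReCompare = Integer.parseInt(cmd.substring(newSleepKey.length()));
     continue;
   }
   ```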



##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -329,6 +343,17 @@ protected void cleanup(Context context) {
   LOG.error("fail to close replicated connection in cleanup", e);
 }
   }
+  if (reCompareExecutor != null && !reCompareExecutor.isShutdown()) {

Review Comment:
   when we get to this point, there may be outstanding recompares. They may not 
finish within the 10s below. I think we need to fail the job if this happens, 
or mark all of those outstanding recompares as BADROWS. Otherwise, the job 
could complete successfully after having skipped over some recompares which 
may have been bad. This would give the user a false sense of security.
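   
   One possible shape for that, as a hedged sketch (the field name and the 10s 
timeout come from the hunk above; failing via IOException is an assumption):
   
   ```java
   // Sketch: drain pending recompares during cleanup and fail the task if
   // they cannot finish, so skipped recompares are never reported as verified.
   if (reCompareExecutor != null && !reCompareExecutor.isShutdown()) {
     reCompareExecutor.shutdown();
     try {
       if (!reCompareExecutor.awaitTermination(10, TimeUnit.SECONDS)) {
         throw new IOException("Recompares still pending after 10s; failing the "
           + "task rather than reporting rows that were never re-verified");
       }
     } catch (InterruptedException e) {
       Thread.currentThread().interrupt();
       throw new IOException("Interrupted while draining recompares", e);
     }
   }
   ```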



##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java:
##
@@ -240,55 +262,47 @@ public void map(ImmutableBytesWritable row, final Result value, Context context)
 "Good row key: " + delimiter + Bytes.toStringBinary(value.getRow()) + delimiter);
 }
   } catch (Exception e) {
-logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value);
+logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value,
+  currentCompareRowInPeerTable);
   }
   currentCompareRowInPeerTable = replicatedScanner.next();
   break;
 } else if (rowCmpRet < 0) {
   // row only exists in source table
-  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value);
+  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value, null);
   break;
 } else {
   // row only exists in peer table
-  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_PEER_TABLE_ROWS,
+  logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_PEER_TABLE_ROWS, null,
 currentCompareRowInPeerTable);
   currentCompareRowInPeerTable = replicatedScanner.next();
 }
   }
 }
 
-private void logFailRowAndIncreaseCounter(Context context, Counters counter, Result row) {
-  if (sleepMsBeforeReCompare > 0) {
-Threads.sleep(sleepMsBeforeReCompare);
-try {
-  Result sourceResult = sourceTable.get(new Get(row.getRow()));
-  Result replicatedResult = replicatedTable.get(new Get(row.getRow()));
-  Result.compareResults(sourceResult, replicatedResult, false);
-  if (!sourceResult.isEmpty()) {
-context.getCounter(Counters.GOODROWS).increment(1);
-if (verbose) {
-  LOG.info("Good row key (with recompare): " + delimiter
-+ Bytes.toStringBinary(row.getRow()) + delimiter);
-}
-  }
-  return;
-} catch (Exception e) {
-  LOG.error("recompare fail after sleep, rowkey=" + delimiter
-+ Bytes.toStringBinary(row.getRow()) + delimiter);
-}
+@SuppressWarnings("FutureReturnValueIgnored")
+private void logFailRowAndIncreaseCounter(Context context, Counters counter, Result row,
+  Result replicatedRow) {
+  if (reCompareTries > 0 && sleepMsBeforeReCompare > 0) {

Review Comment:
   I think we need to fix this -- this change is backwards incompatible. People 
who wanted recompare before will only have set the sleep option, not the new 
tries option, so with a tries default of 0 their recompares would silently 
stop running.

[jira] [Commented] (HBASE-27991) [hbase-examples] MultiThreadedClientExample throws java.lang.ClassCastException

2023-07-26 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747491#comment-17747491
 ] 

Peter Somogyi commented on HBASE-27991:
---

I've added the needed roles for both of you.

> [hbase-examples] MultiThreadedClientExample throws 
> java.lang.ClassCastException
> ---
>
> Key: HBASE-27991
> URL: https://issues.apache.org/jira/browse/HBASE-27991
> Project: HBase
>  Issue Type: Bug
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Minor
>
> Tried using run() method call of 
> [https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java|https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java.]
>  Following is the stack trace of the error at runtime:
> {code:java}
> Exception in thread "main" java.io.IOException: 
> java.lang.reflect.UndeclaredThrowableException
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:235)
>     at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:160)
>     at 
> org.apache.hadoop.hbase.client.example.MultiThreadedClientExample.run(MultiThreadedClientExample.java:136)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at .runMultiThreadedRWOps(xx)
>     at .main(xx)
> Caused by: java.lang.reflect.UndeclaredThrowableException
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1780)
>     at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:328)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:232)
>     ... 8 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$null$0(ConnectionFactory.java:233)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>     ... 10 more
> Caused by: java.lang.ClassCastException: java.util.concurrent.ForkJoinPool 
> cannot be cast to java.util.concurrent.ThreadPoolExecutor
>     at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:283)
>     at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:270)
>     ... 17 more{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-27991) [hbase-examples] MultiThreadedClientExample throws java.lang.ClassCastException

2023-07-26 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi reassigned HBASE-27991:
-

Assignee: Nikita Pande

> [hbase-examples] MultiThreadedClientExample throws 
> java.lang.ClassCastException
> ---
>
> Key: HBASE-27991
> URL: https://issues.apache.org/jira/browse/HBASE-27991
> Project: HBase
>  Issue Type: Bug
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Minor
>
> Tried using run() method call of 
> [https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java|https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java.]
>  Following is the stack trace of the error at runtime:
> {code:java}
> Exception in thread "main" java.io.IOException: 
> java.lang.reflect.UndeclaredThrowableException
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:235)
>     at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:160)
>     at 
> org.apache.hadoop.hbase.client.example.MultiThreadedClientExample.run(MultiThreadedClientExample.java:136)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at .runMultiThreadedRWOps(xx)
>     at .main(xx)
> Caused by: java.lang.reflect.UndeclaredThrowableException
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1780)
>     at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:328)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:232)
>     ... 8 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$null$0(ConnectionFactory.java:233)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>     ... 10 more
> Caused by: java.lang.ClassCastException: java.util.concurrent.ForkJoinPool 
> cannot be cast to java.util.concurrent.ThreadPoolExecutor
>     at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:283)
>     at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:270)
>     ... 17 more{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] wchevreuil commented on pull request #5307: HBASE-27938 - PE load any custom implementation of tests at runtime

2023-07-26 Thread via GitHub


wchevreuil commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-1651574125

   > Got it, how about we update
   > 
   > `Usage: hbase pe  [-D]*  ` to 
`Usage: hbase pe  [-D]*  `
   > 
   > And add a section for Class just below Command:
   > 
   > ```
   > Class:
   > To run any custom implementation of PerformanceEvaluation.Test, provide 
the classname of the implementation class in place of the command name and it 
will be loaded at runtime from the classpath.
   > Please consider contributing this custom test implementation back as a 
builtin PE command for the benefit of the community.
   > ```
   > 
   > Seems in line with the current documentation standard to me as well, 
@wchevreuil
   
   Yeah, that would be good. Thanks!
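   
   For illustration, an invocation under the proposed usage could look like this 
(the class name is hypothetical):
   
   ```
   hbase pe com.example.MyCustomReadTest 1
   ```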


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27991) [hbase-examples] MultiThreadedClientExample throws java.lang.ClassCastException

2023-07-26 Thread Nihal Jain (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747439#comment-17747439
 ] 

Nihal Jain commented on HBASE-27991:


Thanks [~nikitapande] for reporting this and volunteering to contribute a fix. 
Please go ahead and raise a PR with the fix; meanwhile we will add you as a 
contributor and assign the JIRA.

Hi [~chrajeshbab...@gmail.com], I see I still do not have admin access. Could 
you please add her as a contributor and add me as an admin so that I can add 
others in the future?

CC: [~zhangduo] , [~ndimiduk] 

> [hbase-examples] MultiThreadedClientExample throws 
> java.lang.ClassCastException
> ---
>
> Key: HBASE-27991
> URL: https://issues.apache.org/jira/browse/HBASE-27991
> Project: HBase
>  Issue Type: Bug
>Reporter: Nikita Pande
>Priority: Minor
>
> Tried using run() method call of 
> [https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java|https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/client/example/MultiThreadedClientExample.java.]
>  Following is the stack trace of the error at runtime:
> {code:java}
> Exception in thread "main" java.io.IOException: 
> java.lang.reflect.UndeclaredThrowableException
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:235)
>     at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:160)
>     at 
> org.apache.hadoop.hbase.client.example.MultiThreadedClientExample.run(MultiThreadedClientExample.java:136)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at .runMultiThreadedRWOps(xx)
>     at .main(xx)
> Caused by: java.lang.reflect.UndeclaredThrowableException
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1780)
>     at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:328)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$createConnection$1(ConnectionFactory.java:232)
>     ... 8 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.hadoop.hbase.client.ConnectionFactory.lambda$null$0(ConnectionFactory.java:233)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>     ... 10 more
> Caused by: java.lang.ClassCastException: java.util.concurrent.ForkJoinPool 
> cannot be cast to java.util.concurrent.ThreadPoolExecutor
>     at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:283)
>     at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:270)
>     ... 17 more{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)