[GitHub] [hbase] Apache-HBase commented on pull request #5252: HBASE-27881 The sleep time in checkQuota of replication WAL reader sh…

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5252:
URL: https://github.com/apache/hbase/pull/5252#issuecomment-1560459669

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 57s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 52s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 22s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 33s |  hbase-server: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 16s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | -1 :x: |  spotless  |   0m 32s |  patch has 24 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   1m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  44m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |---------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5252/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5252 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 7516a2767cba 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 
24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22526a6339 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5252/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5252/1/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 85 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5252/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27880) Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests

2023-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725630#comment-17725630
 ] 

Hudson commented on HBASE-27880:


Results for branch branch-2
[build #818 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests
> ---
>
> Key: HBASE-27880
> URL: https://issues.apache.org/jira/browse/HBASE-27880
> Project: HBase
>  Issue Type: Task
>  Components: dependabot, scripts, security
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27277) TestRaceBetweenSCPAndTRSP fails in pre commit

2023-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725629#comment-17725629
 ] 

Hudson commented on HBASE-27277:


Results for branch branch-2
[build #818 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/818/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> TestRaceBetweenSCPAndTRSP fails in pre commit
> -
>
> Key: HBASE-27277
> URL: https://issues.apache.org/jira/browse/HBASE-27277
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
> Attachments: 
> org.apache.hadoop.hbase.master.assignment.TestRaceBetweenSCPAndTRSP-output.txt
>
>
> Seems the PE worker is stuck here. Need to dig more.
> {noformat}
> "PEWorker-5" daemon prio=5 tid=326 in Object.wait()
> java.lang.Thread.State: WAITING (on object monitor)
> at java.base@11.0.10/jdk.internal.misc.Unsafe.park(Native Method)
> at 
> java.base@11.0.10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
> at 
> java.base@11.0.10/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
> at 
> app//org.apache.hadoop.hbase.master.assignment.TestRaceBetweenSCPAndTRSP$AssignmentManagerForTest.getRegionsOnServer(TestRaceBetweenSCPAndTRSP.java:97)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.getRegionsOnCrashedServer(ServerCrashProcedure.java:288)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:195)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:66)
> at 
> app//org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188)
> at 
> app//org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:919)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1650)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1396)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1000(ProcedureExecutor.java:75)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.runProcedure(ProcedureExecutor.java:1962)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread$$Lambda$477/0x000800ac1840.call(Unknown
>  Source)
> at 
> app//org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1989)
> {noformat}
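The WAITING state in the dump above is a thread parked inside CountDownLatch.await: the test's AssignmentManagerForTest hook holds the SCP worker on a latch while the race condition is being set up. A minimal standalone sketch of that parking mechanism (illustrative only, not the HBase test code):

```java
import java.util.concurrent.CountDownLatch;

public class LatchBlockDemo {
    // Starts a worker that parks on a latch, observes its thread state while
    // parked, then releases it. Returns the observed state.
    public static Thread.State runDemo() throws InterruptedException {
        CountDownLatch gate = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                gate.await();  // parks here, like PEWorker-5 in the stack trace
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "PEWorker-demo");
        worker.start();

        // Poll until the worker has actually parked (WAITING), with a timeout.
        Thread.State state = worker.getState();
        long deadline = System.currentTimeMillis() + 5000;
        while (state != Thread.State.WAITING && System.currentTimeMillis() < deadline) {
            Thread.sleep(10);
            state = worker.getState();
        }

        gate.countDown();  // release the worker, as the test does once the race is armed
        worker.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());  // WAITING once the worker has parked
    }
}
```

If the latch is never counted down (as when the test logic deadlocks), the worker stays parked indefinitely and shows up in jstack exactly as in the trace above.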





[jira] [Commented] (HBASE-27277) TestRaceBetweenSCPAndTRSP fails in pre commit

2023-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725627#comment-17725627
 ] 

Hudson commented on HBASE-27277:


Results for branch branch-2.4
[build #564 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> TestRaceBetweenSCPAndTRSP fails in pre commit
> -
>
> Key: HBASE-27277
> URL: https://issues.apache.org/jira/browse/HBASE-27277
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
> Attachments: 
> org.apache.hadoop.hbase.master.assignment.TestRaceBetweenSCPAndTRSP-output.txt
>
>
> Seems the PE worker is stuck here. Need to dig more.
> {noformat}
> "PEWorker-5" daemon prio=5 tid=326 in Object.wait()
> java.lang.Thread.State: WAITING (on object monitor)
> at java.base@11.0.10/jdk.internal.misc.Unsafe.park(Native Method)
> at 
> java.base@11.0.10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
> at 
> java.base@11.0.10/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
> at 
> app//org.apache.hadoop.hbase.master.assignment.TestRaceBetweenSCPAndTRSP$AssignmentManagerForTest.getRegionsOnServer(TestRaceBetweenSCPAndTRSP.java:97)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.getRegionsOnCrashedServer(ServerCrashProcedure.java:288)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:195)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:66)
> at 
> app//org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188)
> at 
> app//org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:919)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1650)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1396)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1000(ProcedureExecutor.java:75)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.runProcedure(ProcedureExecutor.java:1962)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread$$Lambda$477/0x000800ac1840.call(Unknown
>  Source)
> at 
> app//org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1989)
> {noformat}





[jira] [Commented] (HBASE-27880) Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests

2023-05-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725628#comment-17725628
 ] 

Hudson commented on HBASE-27880:


Results for branch branch-2.4
[build #564 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/564/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests
> ---
>
> Key: HBASE-27880
> URL: https://issues.apache.org/jira/browse/HBASE-27880
> Project: HBase
>  Issue Type: Task
>  Components: dependabot, scripts, security
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
>






[GitHub] [hbase] sunhelly opened a new pull request, #5252: HBASE-27881 The sleep time in checkQuota of replication WAL reader sh…

2023-05-23 Thread via GitHub


sunhelly opened a new pull request, #5252:
URL: https://github.com/apache/hbase/pull/5252

   …ould be controlled independently





[jira] [Updated] (HBASE-27881) The sleep time in checkQuota of replication WAL reader should be controlled independently

2023-05-23 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-27881:
---
Description: In theory, the sleep time used when the quota check fails in the 
replication WAL reader should match how quickly the buffered data in memory can 
be consumed. But at the very least we should separate this configuration from 
the sleep time used in common replication circumstances, e.g. the sleep before 
reading from the head of the WAL, to avoid a slightly larger but still 
reasonable sleep time (e.g. 3s) keeping the consumption speed always blocked by 
the quota check, or leaving it unable to recover for a very long time (unless 
there is a long period of low WAL production at the source).  (was: In theory, 
the sleep time used when the quota check fails in the replication WAL reader 
should match how quickly the buffered data in memory can be consumed. But at 
the very least we should separate this configuration from the sleep time used 
in common replication circumstances, e.g. the sleep before reading from the 
head of the WAL, to avoid a slightly larger but still reasonable sleep time 
(e.g. 3s) keeping the consumption speed always blocked by the quota check.)

> The sleep time in checkQuota of replication WAL reader should be controlled 
> independently 
> --
>
> Key: HBASE-27881
> URL: https://issues.apache.org/jira/browse/HBASE-27881
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0-alpha-3, 2.5.4
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
>
> In theory, the sleep time used when the quota check fails in the replication 
> WAL reader should match how quickly the buffered data in memory can be 
> consumed. But at the very least we should separate this configuration from 
> the sleep time used in common replication circumstances, e.g. the sleep 
> before reading from the head of the WAL, to avoid a slightly larger but 
> still reasonable sleep time (e.g. 3s) keeping the consumption speed always 
> blocked by the quota check, or leaving it unable to recover for a very long 
> time (unless there is a long period of low WAL production at the source).





[jira] [Updated] (HBASE-27881) The sleep time in checkQuota of replication WAL reader should be controlled independently

2023-05-23 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-27881:
---
Description: In theory, the sleep time used when the quota check fails in the 
replication WAL reader should match how quickly the buffered data in memory can 
be consumed. But at the very least we should separate this configuration from 
the sleep time used in common replication circumstances, e.g. the sleep before 
reading from the head of the WAL, to avoid a slightly larger but still 
reasonable sleep time (e.g. 3s) keeping the consumption speed always blocked by 
the quota check.  (was: In theory, the sleep time used when the quota check 
fails in the replication WAL reader should match how quickly the buffered data 
in memory can be consumed. But at the very least we should separate this 
configuration from the sleep time used in common replication circumstances, 
e.g. the sleep before reading from the head of the WAL, to avoid a slightly 
larger sleep time (e.g. 3s) keeping the consumption speed always blocked by 
the quota check.)

> The sleep time in checkQuota of replication WAL reader should be controlled 
> independently 
> --
>
> Key: HBASE-27881
> URL: https://issues.apache.org/jira/browse/HBASE-27881
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0-alpha-3, 2.5.4
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
>
> In theory, the sleep time used when the quota check fails in the replication 
> WAL reader should match how quickly the buffered data in memory can be 
> consumed. But at the very least we should separate this configuration from 
> the sleep time used in common replication circumstances, e.g. the sleep 
> before reading from the head of the WAL, to avoid a slightly larger but 
> still reasonable sleep time (e.g. 3s) keeping the consumption speed always 
> blocked by the quota check.





[jira] [Created] (HBASE-27882) Avoid always reinit the decompressor in the hot read path

2023-05-23 Thread Xiaolin Ha (Jira)
Xiaolin Ha created HBASE-27882:
--

 Summary: Avoid always reinit the decompressor in the hot read path
 Key: HBASE-27882
 URL: https://issues.apache.org/jira/browse/HBASE-27882
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 2.5.4, 3.0.0-alpha-3
Reporter: Xiaolin Ha
Assignee: Xiaolin Ha
 Attachments: image-2023-05-24-11-06-48-569.png

When setting "hbase.block.data.cachecompressed=true", the cached blocks are 
decompressed when reading. But we are using pooled decompressors here, which 
means the decompressor configuration must be refreshed as a preparation step 
before each decompression; see the line here 

[https://github.com/apache/hbase/blob/22526a6339afa230679bcf08fa1c917b04cdac6d/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java#L99]

I pointed out the locking problem of Configuration.get in HBASE-27672; it 
should also be avoided when re-initializing in the hot read path. 

!image-2023-05-24-11-06-48-569.png|width=668,height=286!
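The cost described above is easy to picture in miniature: a pooled object whose configuration is re-applied on every borrow, versus one that remembers the configuration it already applied and skips the work when nothing changed. The classes below are a hypothetical sketch, not the real HBase decompressor pool or its API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PooledDecompressorSketch {
    // Counts how often the expensive reconfiguration path runs (stands in for
    // the repeated Configuration.get calls on the hot read path).
    static final AtomicInteger reinitCount = new AtomicInteger();

    static class Decompressor {
        private Object appliedConf;  // configuration last applied to this instance

        // Naive path: reconfigure unconditionally on every borrow.
        void reinit(Object conf) {
            reinitCount.incrementAndGet();
            appliedConf = conf;
        }

        // Cheaper path: skip the reconfiguration when the same configuration
        // object is already applied.
        void reinitIfChanged(Object conf) {
            if (appliedConf != conf) {
                reinit(conf);
            }
        }
    }

    public static void main(String[] args) {
        Object conf = new Object();
        Decompressor d = new Decompressor();
        for (int i = 0; i < 1000; i++) {
            d.reinitIfChanged(conf);  // only the first call pays the cost
        }
        System.out.println(reinitCount.get());  // prints 1
    }
}
```

The same idea applies to any per-use setup on a hot path: pay it once per config change, not once per read.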

 





[jira] [Updated] (HBASE-27672) Read RPC threads may BLOCKED at the Configuration.get when using java compression

2023-05-23 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-27672:
---
Description: 
As in the jstack info, we can see some RPC threads or compaction threads 
BLOCKED,

!image-2023-02-27-19-22-52-704.png|width=976,height=355!

  was:
As in the jstack info, we can see some RPC threads or compaction threads BLOCK,

!image-2023-02-27-19-22-52-704.png|width=976,height=355!


> Read RPC threads may BLOCKED at the Configuration.get when using java 
> compression
> -
>
> Key: HBASE-27672
> URL: https://issues.apache.org/jira/browse/HBASE-27672
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.5.3
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4
>
> Attachments: image-2023-02-27-19-22-52-704.png
>
>
> As shown in the jstack info, we can see some RPC threads or compaction 
> threads BLOCKED:
> !image-2023-02-27-19-22-52-704.png|width=976,height=355!





[jira] [Created] (HBASE-27881) The sleep time in checkQuota of replication WAL reader should be controlled independently

2023-05-23 Thread Xiaolin Ha (Jira)
Xiaolin Ha created HBASE-27881:
--

 Summary: The sleep time in checkQuota of replication WAL reader 
should be controlled independently 
 Key: HBASE-27881
 URL: https://issues.apache.org/jira/browse/HBASE-27881
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Affects Versions: 2.5.4, 3.0.0-alpha-3
Reporter: Xiaolin Ha
Assignee: Xiaolin Ha


In theory, the sleep time used when the quota check fails in the replication 
WAL reader should match how quickly the buffered data in memory can be 
consumed. But at the very least we should separate this configuration from the 
sleep time used in common replication circumstances, e.g. the sleep before 
reading from the head of the WAL, to avoid a slightly larger sleep time (e.g. 
3s) keeping the consumption speed always blocked by the quota check.
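
The isolation proposed here can be sketched as two sleep intervals resolved 
from independent configuration keys, so that raising the ordinary no-data sleep 
does not also inflate the quota-check back-off. The key names and defaults 
below are hypothetical placeholders, not the ones actually introduced by 
HBASE-27881:

```java
import java.util.HashMap;
import java.util.Map;

public class WalReaderSleepSketch {
    // Stand-in for an HBase Configuration: a flat key -> value map.
    private final Map<String, Long> conf = new HashMap<>();

    void set(String key, long value) {
        conf.put(key, value);
    }

    // Sleep used when the WAL has no new entries to read (placeholder key).
    long sleepForNoDataMs() {
        return conf.getOrDefault("replication.source.sleep.ms", 1000L);
    }

    // Sleep used when checkQuota rejects the read: resolved from its own key
    // with its own small default, instead of inheriting the no-data sleep.
    long sleepForQuotaFailMs() {
        return conf.getOrDefault("replication.source.quota.sleep.ms", 100L);
    }

    public static void main(String[] args) {
        WalReaderSleepSketch reader = new WalReaderSleepSketch();
        reader.set("replication.source.sleep.ms", 3000L);  // operator raises the no-data sleep
        // The quota-check back-off is unaffected, so consumption can recover
        // quickly once memory drains:
        System.out.println(reader.sleepForNoDataMs() + " " + reader.sleepForQuotaFailMs());  // prints 3000 100
    }
}
```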





[GitHub] [hbase] Apache-HBase commented on pull request #5251: HBASE-27876: Only generate SBOM when releasing

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5251:
URL: https://github.com/apache/hbase/pull/5251#issuecomment-1560133466

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 30s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 29s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 358m 33s |  root in the patch passed.  |
   |  |   | 385m 42s |   |
   
   
   | Subsystem | Report/Notes |
   |---------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5251 |
   | JIRA Issue | HBASE-27876 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 6d3c95aaec3f 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / dc30ca552b |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/testReport/
 |
   | Max. process+thread count | 8053 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5251: HBASE-27876: Only generate SBOM when releasing

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5251:
URL: https://github.com/apache/hbase/pull/5251#issuecomment-1559987405

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 45s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 44s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 228m  4s |  root in the patch failed.  |
   |  |   | 256m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |---------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5251 |
   | JIRA Issue | HBASE-27876 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 14142e129bba 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 
24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / dc30ca552b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/testReport/
 |
   | Max. process+thread count | 5159 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-23 Thread Pratyush Bhatt (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725544#comment-17725544
 ] 

Pratyush Bhatt commented on HBASE-27877:


Yes, works after that:
{noformat}
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv 
-Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload 
-Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 
-Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles
 -Dfs.defaultFS=ofs://ozone1 table_rcb9lbo3ao 
ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv{noformat}

Works fine:
{noformat}
2023-05-23 14:31:05,739|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:05 INFO 
impl.YarnClientImpl: Submitted application application_1684781789150_0051
2023-05-23 14:31:05,794|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:05 INFO 
mapreduce.Job: The url to track the job: 
https://ozn-lease16-2.ozn-lease16.root.hwx.site:8090/proxy/application_1684781789150_0051/
2023-05-23 14:31:05,795|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:05 INFO 
mapreduce.Job: Running job: job_1684781789150_0051
2023-05-23 14:31:20,018|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:20 INFO 
mapreduce.Job: Job job_1684781789150_0051 running in uber mode : false
2023-05-23 14:31:20,019|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:20 INFO 
mapreduce.Job:  map 0% reduce 0%
2023-05-23 14:31:32,179|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:32 INFO 
mapreduce.Job:  map 100% reduce 0%
2023-05-23 14:31:46,301|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:46 INFO 
mapreduce.Job:  map 100% reduce 100%
2023-05-23 14:31:47,327|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:47 INFO 
mapreduce.Job: Job job_1684781789150_0051 completed successfully
2023-05-23 14:31:47,457|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|23/05/23 14:31:47 INFO 
mapreduce.Job: Counters: 54
2023-05-23 14:31:47,457|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|File System Counters
2023-05-23 14:31:47,457|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|FILE: Number of bytes read=171
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|FILE: Number of bytes 
written=616435
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|FILE: Number of read 
operations=0
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|FILE: Number of large read 
operations=0
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|FILE: Number of write 
operations=0
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|OFS: Number of bytes read=257
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|OFS: Number of bytes 
written=5617
2023-05-23 14:31:47,458|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|OFS: Number of read 
operations=12
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|OFS: Number of large read 
operations=0
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|OFS: Number of write 
operations=2
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Job Counters
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Launched map tasks=1
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Launched reduce tasks=1
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Rack-local map tasks=1
2023-05-23 14:31:47,459|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Total time spent by all maps 
in occupied slots (ms)=10227
2023-05-23 14:31:47,460|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Total time spent by all 
reduces in occupied slots (ms)=11587
2023-05-23 14:31:47,460|INFO|MainThread|machine.py:203 - 
run()||GUID=f2b9a0c7-f1aa-4923-ab51-f7c9d8a92d09|Total time spent by all map 
tasks (ms)=10227
2023-05-23 

[GitHub] [hbase] frostruan commented on a diff in pull request #5247: HBASE-27855 Support dynamic adjustment of flusher count

2023-05-23 Thread via GitHub


frostruan commented on code in PR #5247:
URL: https://github.com/apache/hbase/pull/5247#discussion_r1202805790


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -924,4 +947,62 @@ public boolean equals(Object obj) {
   return compareTo(other) == 0;
 }
   }
+
+  private int getHandlerCount(Configuration conf) {
+int handlerCount = conf.getInt("hbase.hstore.flusher.count", 2);
+if (handlerCount < 1) {
+  LOG.warn(
+"hbase.hstore.flusher.count was configed to {} which is less than 1, " 
+ "corrected to 1",
+handlerCount);
+  handlerCount = 1;
+}
+return handlerCount;
+  }
+
+  @Override
+  public void onConfigurationChange(Configuration newConf) {
+int newHandlerCount = getHandlerCount(newConf);
+if (newHandlerCount != flushHandlers.length) {
+  LOG.info("update hbase.hstore.flusher.count from {} to {}", 
flushHandlers.length,
+newHandlerCount);
+  lock.writeLock().lock();
+  try {
+FlushHandler[] newFlushHandlers = new FlushHandler[newHandlerCount];
+if (newHandlerCount > flushHandlers.length) {
+  System.arraycopy(flushHandlers, 0, newFlushHandlers, 0, 
flushHandlers.length);

Review Comment:
   Thanks for your suggestion. Will address this in the next commit. Still 
working on fixing the UT.






[GitHub] [hbase] frostruan commented on a diff in pull request #5247: HBASE-27855 Support dynamic adjustment of flusher count

2023-05-23 Thread via GitHub


frostruan commented on code in PR #5247:
URL: https://github.com/apache/hbase/pull/5247#discussion_r1202804221


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -924,4 +947,62 @@ public boolean equals(Object obj) {
   return compareTo(other) == 0;
 }
   }
+
+  private int getHandlerCount(Configuration conf) {
+int handlerCount = conf.getInt("hbase.hstore.flusher.count", 2);
+if (handlerCount < 1) {
+  LOG.warn(
+"hbase.hstore.flusher.count was configed to {} which is less than 1, " 
+ "corrected to 1",
+handlerCount);
+  handlerCount = 1;
+}
+return handlerCount;
+  }
+
+  @Override
+  public void onConfigurationChange(Configuration newConf) {
+int newHandlerCount = getHandlerCount(newConf);
+if (newHandlerCount != flushHandlers.length) {
+  LOG.info("update hbase.hstore.flusher.count from {} to {}", 
flushHandlers.length,
+newHandlerCount);
+  lock.writeLock().lock();
+  try {
+FlushHandler[] newFlushHandlers = new FlushHandler[newHandlerCount];
+if (newHandlerCount > flushHandlers.length) {
+  System.arraycopy(flushHandlers, 0, newFlushHandlers, 0, 
flushHandlers.length);
+  startFlushHandlerThreads(newFlushHandlers, flushHandlers.length, 
newFlushHandlers.length);
+} else {
+  System.arraycopy(flushHandlers, 0, newFlushHandlers, 0, 
newFlushHandlers.length);
+  stopFlushHandlerThreads(flushHandlers, newHandlerCount, 
flushHandlers.length);
+}
+flusherIdGen.compareAndSet(flushHandlers.length, 
newFlushHandlers.length);
+this.flushHandlers = newFlushHandlers;
+  } finally {
+lock.writeLock().unlock();
+  }
+}
+  }
+
+  private void startFlushHandlerThreads(FlushHandler[] flushHandlers, int 
start, int end) {
+if (flusherThreadFactory != null) {
+  for (int i = start; i < end; i++) {
+flushHandlers[i] = new FlushHandler("MemStoreFlusher." + 
flusherIdGen.getAndIncrement());
+flusherThreadFactory.newThread(flushHandlers[i]);
+flushHandlers[i].start();
+  }
+}
+  }
+
+  private void stopFlushHandlerThreads(FlushHandler[] flushHandlers, int 
start, int end) {
+for (int i = start; i < end; i++) {
+  flushHandlers[i].shutdown();

Review Comment:
   Yes. I think a flag here is better than interrupting.
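   The flag-based shutdown mentioned above can be sketched in isolation. This is a minimal, hypothetical illustration of cooperative shutdown via a volatile flag, not HBase's actual FlushHandler code; the class and method names are invented for the example.

```java
// Minimal sketch of cooperative shutdown via a volatile flag instead of
// Thread.interrupt(). Names (Worker, shutdownGracefully) are illustrative,
// not HBase's actual FlushHandler API.
public class FlagShutdownDemo {
  static class Worker extends Thread {
    private volatile boolean active = true; // re-read on every loop pass

    void shutdownGracefully() {
      active = false; // worker exits after finishing its current unit of work
    }

    @Override
    public void run() {
      while (active) {
        // ... take one task from the queue and process it ...
      }
    }
  }

  /** Starts a worker, asks it to stop, and reports whether it exited cleanly. */
  static boolean runDemo() {
    Worker w = new Worker();
    w.start();
    w.shutdownGracefully();
    try {
      w.join(2000); // should return well before the timeout
    } catch (InterruptedException e) {
      return false;
    }
    return !w.isAlive();
  }

  public static void main(String[] args) {
    System.out.println("clean exit: " + runDemo());
  }
}
```

   Unlike interrupt(), flipping the flag never aborts an in-flight flush; the thread simply observes the flag at the top of its loop and falls out.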






[jira] [Comment Edited] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17725524#comment-17725524
 ] 

Wei-Chiu Chuang edited comment on HBASE-27877 at 5/23/23 6:07 PM:
--

Try specifying -Dfs.defaultFS=ofs://ozone1/ and see if that addresses the issue.
Maybe the solution is to update the HBase reference guide to state that the 
workaround is to add this parameter whenever the error message is seen.


was (Author: jojochuang):
Try specify -Dfs.defaultFS=ofs://ozone1/ and see if that addresses the issue.

> Hbase ImportTsv doesn't take ofs:// as a FS
> ---
>
> Key: HBASE-27877
> URL: https://issues.apache.org/jira/browse/HBASE-27877
> Project: HBase
>  Issue Type: Bug
>Reporter: Pratyush Bhatt
>Priority: Major
>  Labels: ozone
>
> While running the bulkLoad command:
> {noformat}
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv 
> -Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload 
> -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 
> -Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles
>  table_dau3f3374e 
> ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv{noformat}
> Getting:
> {noformat}
> 2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|Exception in thread "main" 
> java.lang.IllegalArgumentException: Wrong FS: 
> ofs://ozone1/vol1/bucket1/hbase/bulkload/partitions_72cbb1f1-d9b6-46a4-be39-e27a427c5842,
>  expected: hdfs://ns1{noformat}
> Complete trace:
> {noformat}
> server-resourcemanager-3.1.1.7.1.8.3-339.jar:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-registry-3.1.1.7.1.8.3-339.jar
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client 
> environment:java.library.path=/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/bin/../lib/hadoop/lib/native
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:java.compiler=
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.name=Linux
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.version=5.4.0-135-generic
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.name=hrt_qa
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.home=/home/hrt_qa
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.dir=/hwqe/hadoopqe
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.free=108MB
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.max=228MB
> 2023-05-22 17:01:19,927|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.total=145MB
> 2023-05-22 17:01:19,930|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Initiating client connection, 
> connectString=ozn-lease16-1.ozn-lease16.root.hwx.site:2181,ozn-lease16-2.ozn-lease16.root.hwx.site:2181,ozn-lease16-3.ozn-lease16.root.hwx.site:2181
>  sessionTimeout=3 
> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$16/169226355@454e763
> 2023-05-22 17:01:19,942|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 

[jira] [Commented] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17725524#comment-17725524
 ] 

Wei-Chiu Chuang commented on HBASE-27877:
-

Try specifying -Dfs.defaultFS=ofs://ozone1/ and see if that addresses the issue.

> Hbase ImportTsv doesn't take ofs:// as a FS
> ---
>
> Key: HBASE-27877
> URL: https://issues.apache.org/jira/browse/HBASE-27877
> Project: HBase
>  Issue Type: Bug
>Reporter: Pratyush Bhatt
>Priority: Major
>  Labels: ozone
>
> While running the bulkLoad command:
> {noformat}
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv 
> -Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload 
> -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 
> -Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles
>  table_dau3f3374e 
> ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv{noformat}
> Getting:
> {noformat}
> 2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|Exception in thread "main" 
> java.lang.IllegalArgumentException: Wrong FS: 
> ofs://ozone1/vol1/bucket1/hbase/bulkload/partitions_72cbb1f1-d9b6-46a4-be39-e27a427c5842,
>  expected: hdfs://ns1{noformat}
> Complete trace:
> {noformat}
> server-resourcemanager-3.1.1.7.1.8.3-339.jar:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-registry-3.1.1.7.1.8.3-339.jar
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client 
> environment:java.library.path=/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/bin/../lib/hadoop/lib/native
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:java.compiler=
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.name=Linux
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.version=5.4.0-135-generic
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.name=hrt_qa
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.home=/home/hrt_qa
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.dir=/hwqe/hadoopqe
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.free=108MB
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.max=228MB
> 2023-05-22 17:01:19,927|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.total=145MB
> 2023-05-22 17:01:19,930|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Initiating client connection, 
> connectString=ozn-lease16-1.ozn-lease16.root.hwx.site:2181,ozn-lease16-2.ozn-lease16.root.hwx.site:2181,ozn-lease16-3.ozn-lease16.root.hwx.site:2181
>  sessionTimeout=3 
> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$16/169226355@454e763
> 2023-05-22 17:01:19,942|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true 
> to disable client-initiated TLS renegotiation
> 2023-05-22 17:01:19,952|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ClientCnxnSocket: 

[GitHub] [hbase] frostruan commented on a diff in pull request #5247: HBASE-27855 Support dynamic adjustment of flusher count

2023-05-23 Thread via GitHub


frostruan commented on code in PR #5247:
URL: https://github.com/apache/hbase/pull/5247#discussion_r1202793846


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -495,41 +505,54 @@ public int getFlushQueueSize() {
* Only interrupt once it's done with a run through the work loop.
*/
   void interruptIfNecessary() {
-lock.writeLock().lock();
+lock.readLock().lock();
 try {
-  for (FlushHandler flushHander : flushHandlers) {
-if (flushHander != null) flushHander.interrupt();
+  for (FlushHandler flushHandler : flushHandlers) {
+if (flushHandler != null) {
+  flushHandler.interrupt();
+}
   }
 } finally {
-  lock.writeLock().unlock();
+  lock.readLock().unlock();
 }
   }
 
   synchronized void start(UncaughtExceptionHandler eh) {
-ThreadFactory flusherThreadFactory = new ThreadFactoryBuilder()
+this.flusherThreadFactory = new ThreadFactoryBuilder()
   .setNameFormat(server.getServerName().toShortString() + 
"-MemStoreFlusher-pool-%d")
   .setDaemon(true).setUncaughtExceptionHandler(eh).build();
-for (int i = 0; i < flushHandlers.length; i++) {
-  flushHandlers[i] = new FlushHandler("MemStoreFlusher." + i);
-  flusherThreadFactory.newThread(flushHandlers[i]);
-  flushHandlers[i].start();
+lock.readLock().lock();
+try {
+  startFlushHandlerThreads(flushHandlers, 0, flushHandlers.length);
+} finally {
+  lock.readLock().unlock();
 }
   }
 
   boolean isAlive() {
-for (FlushHandler flushHander : flushHandlers) {
-  if (flushHander != null && flushHander.isAlive()) {
-return true;
+lock.readLock().lock();
+try {
+  for (FlushHandler flushHandler : flushHandlers) {
+if (flushHandler != null && flushHandler.isAlive()) {
+  return true;
+}
   }
+  return false;
+} finally {
+  lock.readLock().unlock();
 }
-return false;
   }
 
   void join() {

Review Comment:
   The name of this method also feels a bit odd to me. Let me rename it.






[GitHub] [hbase] frostruan commented on a diff in pull request #5247: HBASE-27855 Support dynamic adjustment of flusher count

2023-05-23 Thread via GitHub


frostruan commented on code in PR #5247:
URL: https://github.com/apache/hbase/pull/5247#discussion_r1202787995


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -495,41 +505,54 @@ public int getFlushQueueSize() {
* Only interrupt once it's done with a run through the work loop.
*/
   void interruptIfNecessary() {
-lock.writeLock().lock();
+lock.readLock().lock();

Review Comment:
   Thanks for reviewing, Duo. 
   
   After digging and reconsidering, I think we should keep the writeLock 
here. Sorry for the mistake.
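   For reference, the read-/write-lock split under discussion can be sketched generically: traversals of the handler array take the read lock (and may run concurrently), while replacing the array requires the write lock. The class and field names below are hypothetical, not the MemStoreFlusher code itself.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Generic sketch of the locking pattern discussed above. Names are
// illustrative only.
public class HandlerRegistry {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private Thread[] handlers = new Thread[0];

  // Mutation: swap in a resized array under the write lock so no traversal
  // can observe a half-populated array.
  void resize(int n) {
    lock.writeLock().lock();
    try {
      Thread[] next = new Thread[n];
      System.arraycopy(handlers, 0, next, 0, Math.min(n, handlers.length));
      handlers = next;
    } finally {
      lock.writeLock().unlock();
    }
  }

  // Pure traversal: the read lock suffices, and readers do not block each other.
  boolean anyAlive() {
    lock.readLock().lock();
    try {
      for (Thread t : handlers) {
        if (t != null && t.isAlive()) {
          return true;
        }
      }
      return false;
    } finally {
      lock.readLock().unlock();
    }
  }

  int size() {
    lock.readLock().lock();
    try {
      return handlers.length;
    } finally {
      lock.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    HandlerRegistry r = new HandlerRegistry();
    r.resize(4);
    System.out.println(r.size() + " handlers, anyAlive=" + r.anyAlive());
  }
}
```

   Whether interruptIfNecessary counts as "pure traversal" is exactly the judgment call above: interrupt() mutates thread state, and holding the write lock additionally excludes a concurrent resize, which is why keeping the write lock is the safer choice.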






[GitHub] [hbase] wchevreuil commented on a diff in pull request #5241: HBASE-27871 Meta replication stuck forever if wal it's still reading gets rolled and deleted

2023-05-23 Thread via GitHub


wchevreuil commented on code in PR #5241:
URL: https://github.com/apache/hbase/pull/5241#discussion_r1202563088


##
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestMetaRegionReplicaReplicationEndpoint.java:
##
@@ -225,6 +227,38 @@ public void 
testCatalogReplicaReplicationWithFlushAndCompaction() throws Excepti
 }
   }
 
+  @Test
+  public void testCatalogReplicaReplicationWALRolledAndDeleted() throws 
Exception {
+Connection connection = 
ConnectionFactory.createConnection(HTU.getConfiguration());
+TableName tableName = TableName.valueOf("hbase:meta");
+Table table = connection.getTable(tableName);
+try {
+  MiniHBaseCluster cluster = HTU.getHBaseCluster();
+  HRegionServer hrs = 
cluster.getRegionServer(cluster.getServerHoldingMeta());
+  ReplicationSource source = (ReplicationSource) 
hrs.getReplicationSourceService()
+.getReplicationManager().catalogReplicationSource.get();
+  ((ReplicationPeerImpl) source.replicationPeer).setPeerState(false);
+  // load the data to the table
+  for (int i = 0; i < 5; i++) {
+LOG.info("Writing data from " + i * 1000 + " to " + (i * 1000 + 1000));
+HTU.loadNumericRows(table, HConstants.CATALOG_FAMILY, i * 1000, i * 
1000 + 1000);
+LOG.info("flushing table");
+HTU.flush(tableName);
+LOG.info("compacting table");
+if (i < 4) {
+  HTU.compact(tableName, false);
+}
+  }
+  
HTU.getHBaseCluster().getMaster().getLogCleaner().triggerCleanerNow().get(1,
+TimeUnit.SECONDS);
+  ((ReplicationPeerImpl) source.replicationPeer).setPeerState(true);
+  verifyReplication(tableName, numOfMetaReplica, 0, 5000, 
HConstants.CATALOG_FAMILY);

Review Comment:
   `Here we just checked whether data can be read? I think the main thing here 
is that we should make sure the ReplicationSource can still relicate things 
out, i.e, it is not stuck forever.`
   
   That is what we are testing here. We disable the catalog peer before we do the 
flush and compact. When replication is stuck forever because of the FNFE, the 
updates from line #244 never reach the secondary replicas and the 
verifyReplication call on line #255 fails. 






[GitHub] [hbase] Apache-HBase commented on pull request #5251: HBASE-27876: Only generate SBOM when releasing

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5251:
URL: https://github.com/apache/hbase/pull/5251#issuecomment-1559661796

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 57s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 50s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 42s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 52s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 52s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |   9m  3s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.5.  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  33m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5251 |
   | JIRA Issue | HBASE-27876 |
   | Optional Tests | dupname asflicense javac hadoopcheck spotless xml compile 
|
   | uname | Linux 963082a702c0 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / dc30ca552b |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 83 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5251/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache9 commented on a diff in pull request #5247: HBASE-27855 Support dynamic adjustment of flusher count

2023-05-23 Thread via GitHub


Apache9 commented on code in PR #5247:
URL: https://github.com/apache/hbase/pull/5247#discussion_r1202502990


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -495,41 +505,54 @@ public int getFlushQueueSize() {
* Only interrupt once it's done with a run through the work loop.
*/
   void interruptIfNecessary() {
-lock.writeLock().lock();
+lock.readLock().lock();

Review Comment:
   Mind explaining a bit why readLock is enough here?



##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -495,41 +505,54 @@ public int getFlushQueueSize() {
* Only interrupt once it's done with a run through the work loop.
*/
   void interruptIfNecessary() {
-lock.writeLock().lock();
+lock.readLock().lock();
 try {
-  for (FlushHandler flushHander : flushHandlers) {
-if (flushHander != null) flushHander.interrupt();
+  for (FlushHandler flushHandler : flushHandlers) {
+if (flushHandler != null) {
+  flushHandler.interrupt();
+}
   }
 } finally {
-  lock.writeLock().unlock();
+  lock.readLock().unlock();
 }
   }
 
   synchronized void start(UncaughtExceptionHandler eh) {
-ThreadFactory flusherThreadFactory = new ThreadFactoryBuilder()
+this.flusherThreadFactory = new ThreadFactoryBuilder()
   .setNameFormat(server.getServerName().toShortString() + 
"-MemStoreFlusher-pool-%d")
   .setDaemon(true).setUncaughtExceptionHandler(eh).build();
-for (int i = 0; i < flushHandlers.length; i++) {
-  flushHandlers[i] = new FlushHandler("MemStoreFlusher." + i);
-  flusherThreadFactory.newThread(flushHandlers[i]);
-  flushHandlers[i].start();
+lock.readLock().lock();
+try {
+  startFlushHandlerThreads(flushHandlers, 0, flushHandlers.length);
+} finally {
+  lock.readLock().unlock();
 }
   }
 
   boolean isAlive() {
-for (FlushHandler flushHander : flushHandlers) {
-  if (flushHander != null && flushHander.isAlive()) {
-return true;
+lock.readLock().lock();
+try {
+  for (FlushHandler flushHandler : flushHandlers) {
+if (flushHandler != null && flushHandler.isAlive()) {
+  return true;
+}
   }
+  return false;
+} finally {
+  lock.readLock().unlock();
 }
-return false;
   }
 
   void join() {

Review Comment:
   The name is join but we call shutdown? Is this the expected behavior? If so 
I think we should change the method name?



##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java:
##
@@ -924,4 +947,62 @@ public boolean equals(Object obj) {
   return compareTo(other) == 0;
 }
   }
+
+  private int getHandlerCount(Configuration conf) {
+int handlerCount = conf.getInt("hbase.hstore.flusher.count", 2);
+if (handlerCount < 1) {
+  LOG.warn(
+"hbase.hstore.flusher.count was configed to {} which is less than 1, " 
+ "corrected to 1",
+handlerCount);
+  handlerCount = 1;
+}
+return handlerCount;
+  }
+
+  @Override
+  public void onConfigurationChange(Configuration newConf) {
+int newHandlerCount = getHandlerCount(newConf);
+if (newHandlerCount != flushHandlers.length) {
+  LOG.info("update hbase.hstore.flusher.count from {} to {}", 
flushHandlers.length,
+newHandlerCount);
+  lock.writeLock().lock();
+  try {
+FlushHandler[] newFlushHandlers = new FlushHandler[newHandlerCount];
+if (newHandlerCount > flushHandlers.length) {
+  System.arraycopy(flushHandlers, 0, newFlushHandlers, 0, 
flushHandlers.length);
+  startFlushHandlerThreads(newFlushHandlers, flushHandlers.length, 
newFlushHandlers.length);
+} else {
+  System.arraycopy(flushHandlers, 0, newFlushHandlers, 0, 
newFlushHandlers.length);
+  stopFlushHandlerThreads(flushHandlers, newHandlerCount, 
flushHandlers.length);
+}
+flusherIdGen.compareAndSet(flushHandlers.length, 
newFlushHandlers.length);
+this.flushHandlers = newFlushHandlers;
+  } finally {
+lock.writeLock().unlock();
+  }
+}
+  }
+
+  private void startFlushHandlerThreads(FlushHandler[] flushHandlers, int 
start, int end) {
+if (flusherThreadFactory != null) {
+  for (int i = start; i < end; i++) {
+flushHandlers[i] = new FlushHandler("MemStoreFlusher." + 
flusherIdGen.getAndIncrement());
+flusherThreadFactory.newThread(flushHandlers[i]);
+flushHandlers[i].start();
+  }
+}
+  }
+
+  private void stopFlushHandlerThreads(FlushHandler[] flushHandlers, int 
start, int end) {
+for (int i = start; i < end; i++) {
+  flushHandlers[i].shutdown();

Review Comment:
   So the shutdown here will not interrupt the current flush operation if any 
right?



##

[jira] [Commented] (HBASE-26149) Further improvements on ConnectionRegistry implementations

2023-05-23 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17725463#comment-17725463
 ] 

Duo Zhang commented on HBASE-26149:
---

Was thinking of introducing a URI as the connection string for an HBase 
cluster.

For example, for zk://xxx:2181/hbase we will use the zk based registry; for 
hbase://xxx:16010,yyy:16010 we will use the rpc based registry; for 
rest://xxx:8080 we will use a REST API to get the registry information (not 
implemented yet, just saying as I saw [~stack] mentioned that we could 
introduce a REST API in the design doc).

And by using the service loader in Java, it will be easier for our users to 
customize the registry implementation. For example, they can implement their 
own registry, say a k8s service based registry, put it in the service loader 
file, and use a special protocol string in the URI.

Thanks.
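The scheme-dispatch idea above can be sketched as follows. This is illustrative only: the class and method names are hypothetical stand-ins for the HBase types discussed, and a real implementation would populate the factory table from `java.util.ServiceLoader` providers declared in META-INF/services rather than a hard-coded map.

```java
import java.net.URI;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of picking a ConnectionRegistry implementation from the
// URI scheme of a connection string. A ServiceLoader-based version would
// discover the factories on the classpath instead of hard-coding this map.
public class RegistryResolver {
  interface ConnectionRegistry {
    String name();
  }

  static class ZKConnectionRegistry implements ConnectionRegistry {
    public String name() { return "zookeeper"; }
  }

  static class RpcConnectionRegistry implements ConnectionRegistry {
    public String name() { return "rpc"; }
  }

  // scheme -> factory; a custom registry (e.g. k8s service based) would just
  // register itself here under its own scheme.
  private static final Map<String, Supplier<ConnectionRegistry>> FACTORIES = Map.of(
    "zk", ZKConnectionRegistry::new,
    "hbase", RpcConnectionRegistry::new);

  static ConnectionRegistry create(String connectionString) {
    URI uri = URI.create(connectionString);
    Supplier<ConnectionRegistry> factory = FACTORIES.get(uri.getScheme());
    if (factory == null) {
      throw new IllegalArgumentException("No registry for scheme: " + uri.getScheme());
    }
    return factory.get();
  }

  public static void main(String[] args) {
    System.out.println(create("zk://xxx:2181/hbase").name());  // zookeeper
    System.out.println(create("hbase://xxx:16010").name());    // rpc
  }
}
```

The payoff of dispatching on the scheme is that client code never names a registry class directly, so swapping implementations needs no client change, only a different connection string.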

> Further improvements on ConnectionRegistry implementations
> --
>
> Key: HBASE-26149
> URL: https://issues.apache.org/jira/browse/HBASE-26149
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client
>Reporter: Duo Zhang
>Priority: Major
>
> (Copied in-line from the attached 'Documentation' with some filler as 
> connecting script)
> HBASE-23324 Deprecate clients that connect to Zookeeper
> ^^^ This is always our goal, to remove the zookeeper dependency from the 
> client side.
>  
> See the sub-task HBASE-25051 DIGEST based auth broken for MasterRegistry
> When constructing RpcClient, we will pass the clusterid in, and it will be 
> used to select the authentication method. More specifically, it will be used 
> to select the tokens for digest based authentication, please see the code in 
> BuiltInProviderSelector. For ZKConnectionRegistry, we do not need to use 
> RpcClient to connect to zookeeper, so we could get the cluster id first, and 
> then create the RpcClient. But for MasterRegistry/RpcConnectionRegistry, we 
> need to use RpcClient to connect to the ClientMetaService endpoints and then 
> we can call the getClusterId method to get the cluster id. Because of this, 
> when creating RpcClient for MasterRegistry/RpcConnectionRegistry, we can only 
> pass null or the default cluster id, which means the digest based 
> authentication is broken.
> This is a cyclic dependency problem. Maybe a possible way forward is to make 
> getClusterId method available to all users, which means it does not require 
> any authentication, so we can always call getClusterId with simple 
> authentication, and then at client side, once we get the cluster id, we 
> create a new RpcClient to select the correct authentication way.
> The work in the sub-task, HBASE-26150 Let region server also carry 
> ClientMetaService, is work to make it so the RegionServers can carry a 
> ConnectionRegistry (rather than have only the Masters carry it, as is the case 
> now). Adds a new method getBootstrapNodes to ClientMetaService, the 
> ConnectionRegistry proto Service, for refreshing the bootstrap nodes 
> periodically or on error. The new *RpcConnectionRegistry* [created here but 
> defined in the next sub-task] will use this method to refresh the bootstrap 
> nodes, while the old MasterRegistry will use the getMasters method to refresh 
> the ‘bootstrap’ nodes.
> The getBootstrapNodes method will return all the region servers, so after the 
> first refreshing, the client will go to region servers for later rpc calls. 
> But since masters and region servers both implement the ClientMetaService 
> interface, it is free for the client to configure master as the initial 
> bootstrap nodes.
> The following sub-task then deprecates MasterRegistry, HBASE-26172 Deprecated 
> MasterRegistry
> The implementation of MasterRegistry is almost the same as 
> RpcConnectionRegistry except that it uses getMasters instead of 
> getBootstrapNodes to refresh the ‘bootstrap’ nodes connected to. So we could 
> add configs in server side to control what nodes we want to return to client 
> in getBootstrapNodes, i.e., master or region server, then the 
> RpcConnectionRegistry can fully replace the old MasterRegistry. Deprecates 
> the MasterRegistry.
> Sub-task HBASE-26173 Return only a sub set of region servers as bootstrap 
> nodes
> For a large cluster which may have thousands of region servers, it is not a 
> good idea to return all the region servers as bootstrap nodes to clients. So 
> we should add a config at server side to control the max number of bootstrap 
> nodes we want to return to clients. I think the default value could be 5 or 
> 10, which is enough.
> Sub-task HBASE-26174 Make rpc connection registry the default registry on 
> 3.0.0
> Just a follow up of HBASE-26172. MasterRegistry has been deprecated, we 
> should not make it default for 3.0.0 any 
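The cap on bootstrap nodes described in HBASE-26173 amounts to returning a bounded random sample of the live region servers. A minimal sketch, assuming a hypothetical helper (these names are not the actual HBase API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative sketch of HBASE-26173: rather than returning every region
// server as a bootstrap node, the server returns at most maxNodes of them,
// sampled randomly so load spreads across the fleet. Names are hypothetical.
public class BootstrapNodes {
  static List<String> selectBootstrapNodes(List<String> allServers, int maxNodes, Random rng) {
    if (allServers.size() <= maxNodes) {
      return new ArrayList<>(allServers);
    }
    List<String> shuffled = new ArrayList<>(allServers);
    Collections.shuffle(shuffled, rng); // uniform random sample
    return shuffled.subList(0, maxNodes);
  }

  public static void main(String[] args) {
    List<String> servers = new ArrayList<>();
    for (int i = 0; i < 1000; i++) {
      servers.add("rs-" + i + ":16020"); // simulate a large cluster
    }
    List<String> nodes = selectBootstrapNodes(servers, 10, new Random());
    System.out.println(nodes.size()); // 10
  }
}
```

With a default of 5 or 10, as suggested above, a client on a thousand-node cluster gets a handful of contact points instead of the full server list.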

[jira] [Resolved] (HBASE-27880) Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests

2023-05-23 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27880.
---
Fix Version/s: 2.6.0
   3.0.0-alpha-4
   2.5.5
   2.4.18
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests
> ---
>
> Key: HBASE-27880
> URL: https://issues.apache.org/jira/browse/HBASE-27880
> Project: HBase
>  Issue Type: Task
>  Components: dependabot, scripts, security
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27277) TestRaceBetweenSCPAndTRSP fails in pre commit

2023-05-23 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27277.
---
Fix Version/s: 2.6.0
   3.0.0-alpha-4
   2.5.5
   2.4.18
 Hadoop Flags: Reviewed
   Resolution: Fixed

Pushed to branch-2.4+.

Thanks [~GeorryHuang] for reviewing!

> TestRaceBetweenSCPAndTRSP fails in pre commit
> -
>
> Key: HBASE-27277
> URL: https://issues.apache.org/jira/browse/HBASE-27277
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.5, 2.4.18
>
> Attachments: 
> org.apache.hadoop.hbase.master.assignment.TestRaceBetweenSCPAndTRSP-output.txt
>
>
> Seems the PE worker is stuck here. Need dig more.
> {noformat}
> "PEWorker-5" daemon prio=5 tid=326 in Object.wait()
> java.lang.Thread.State: WAITING (on object monitor)
> at java.base@11.0.10/jdk.internal.misc.Unsafe.park(Native Method)
> at 
> java.base@11.0.10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
> at 
> java.base@11.0.10/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
> at 
> java.base@11.0.10/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
> at 
> app//org.apache.hadoop.hbase.master.assignment.TestRaceBetweenSCPAndTRSP$AssignmentManagerForTest.getRegionsOnServer(TestRaceBetweenSCPAndTRSP.java:97)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.getRegionsOnCrashedServer(ServerCrashProcedure.java:288)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:195)
> at 
> app//org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:66)
> at 
> app//org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188)
> at 
> app//org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:919)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1650)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1396)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1000(ProcedureExecutor.java:75)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.runProcedure(ProcedureExecutor.java:1962)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread$$Lambda$477/0x000800ac1840.call(Unknown
>  Source)
> at 
> app//org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
> at 
> app//org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1989)
> {noformat}





[jira] [Resolved] (HBASE-27879) Bump requests from 2.22.0 to 2.31.0 in /dev-support/git-jira-release-audit

2023-05-23 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27879.
---
Fix Version/s: 3.0.0-alpha-4
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Bump requests from 2.22.0 to 2.31.0 in /dev-support/git-jira-release-audit
> --
>
> Key: HBASE-27879
> URL: https://issues.apache.org/jira/browse/HBASE-27879
> Project: HBase
>  Issue Type: Task
>  Components: dependabot, scripts, security
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-4
>
>






[GitHub] [hbase] Apache9 merged pull request #5249: HBASE-27879 Bump requests from 2.22.0 to 2.31.0 in /dev-support/git-jira-release-audit

2023-05-23 Thread via GitHub


Apache9 merged PR #5249:
URL: https://github.com/apache/hbase/pull/5249


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27879) Bump requests from 2.22.0 to 2.31.0 in /dev-support/git-jira-release-audit

2023-05-23 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-27879:
--
Component/s: dependabot

> Bump requests from 2.22.0 to 2.31.0 in /dev-support/git-jira-release-audit
> --
>
> Key: HBASE-27879
> URL: https://issues.apache.org/jira/browse/HBASE-27879
> Project: HBase
>  Issue Type: Task
>  Components: dependabot, scripts, security
>Reporter: Duo Zhang
>Priority: Major
>






[jira] [Created] (HBASE-27880) Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests

2023-05-23 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27880:
-

 Summary: Bump requests from 2.28.1 to 2.31.0 in 
/dev-support/flaky-tests
 Key: HBASE-27880
 URL: https://issues.apache.org/jira/browse/HBASE-27880
 Project: HBase
  Issue Type: Task
  Components: dependabot, scripts, security
Reporter: Duo Zhang








[jira] [Created] (HBASE-27879) Bump requests from 2.22.0 to 2.31.0 in /dev-support/git-jira-release-audit

2023-05-23 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27879:
-

 Summary: Bump requests from 2.22.0 to 2.31.0 in 
/dev-support/git-jira-release-audit
 Key: HBASE-27879
 URL: https://issues.apache.org/jira/browse/HBASE-27879
 Project: HBase
  Issue Type: Task
  Components: scripts, security
Reporter: Duo Zhang








[jira] [Commented] (HBASE-26890) Make the WAL interface async so sync replication can be built up on the WAL interface

2023-05-23 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17725443#comment-17725443
 ] 

Duo Zhang commented on HBASE-26890:
---

Plan to work on HBASE-20952 again, to build a WAL implementation without 
relying on external services, so then this one becomes a blocker.

Need to address this first.

> Make the WAL interface async so sync replication can be built up on the WAL 
> interface
> -
>
> Key: HBASE-26890
> URL: https://issues.apache.org/jira/browse/HBASE-26890
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication, wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Instead of hacking into the WAL implementation.
> This could make the implementation more general if later we want to change 
> the WAL implementation.





[jira] [Work started] (HBASE-26890) Make the WAL interface async so sync replication can be built up on the WAL interface

2023-05-23 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-26890 started by Duo Zhang.
-
> Make the WAL interface async so sync replication can be built up on the WAL 
> interface
> -
>
> Key: HBASE-26890
> URL: https://issues.apache.org/jira/browse/HBASE-26890
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication, wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Instead of hacking into the WAL implementation.
> This could make the implementation more general if later we want to change 
> the WAL implementation.





[jira] [Assigned] (HBASE-26890) Make the WAL interface async so sync replication can be built up on the WAL interface

2023-05-23 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-26890:
-

Assignee: Duo Zhang

> Make the WAL interface async so sync replication can be built up on the WAL 
> interface
> -
>
> Key: HBASE-26890
> URL: https://issues.apache.org/jira/browse/HBASE-26890
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication, wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Instead of hacking into the WAL implementation.
> This could make the implementation more general if later we want to change 
> the WAL implementation.





[GitHub] [hbase] Apache9 merged pull request #5248: HBASE-27277 TestRaceBetweenSCPAndTRSP fails in pre commit

2023-05-23 Thread via GitHub


Apache9 merged PR #5248:
URL: https://github.com/apache/hbase/pull/5248





[jira] [Assigned] (HBASE-27876) Only generate SBOM when releasing

2023-05-23 Thread Shuhei Yamasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuhei Yamasaki reassigned HBASE-27876:
---

Assignee: Shuhei Yamasaki

> Only generate SBOM when releasing
> -
>
> Key: HBASE-27876
> URL: https://issues.apache.org/jira/browse/HBASE-27876
> Project: HBase
>  Issue Type: Improvement
>  Components: build, pom
>Reporter: Duo Zhang
>Assignee: Shuhei Yamasaki
>Priority: Minor
>
> The CycloneDX generation slows down the build so we'd better only generate it 
> when releasing, to speed up the building.





[GitHub] [hbase] yamasakisua opened a new pull request, #5251: HBASE-27876: Only generate SBOM when releasing

2023-05-23 Thread via GitHub


yamasakisua opened a new pull request, #5251:
URL: https://github.com/apache/hbase/pull/5251

   See details: [HBASE-27876](https://issues.apache.org/jira/browse/HBASE-27876)
   
   I tested the following commands.
   
   ```
   $ mvn clean install -DskipTests
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   ...
   $ ls target/
   hbase-3.0.0-alpha-4-SNAPSHOT-site.xml  maven-shared-archive-resources
   ```
   
   ```
   $ mvn clean install -DskipTests -Prelease
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   ...
   $ ls target/
   bom.json  bom.xml  hbase-3.0.0-alpha-4-SNAPSHOT-site.xml  
maven-shared-archive-resources  rat.txt
   ```





[GitHub] [hbase] Apache9 commented on a diff in pull request #5241: HBASE-27871 Meta replication stuck forever if wal it's still reading gets rolled and deleted

2023-05-23 Thread via GitHub


Apache9 commented on code in PR #5241:
URL: https://github.com/apache/hbase/pull/5241#discussion_r1202369028


##
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestMetaRegionReplicaReplicationEndpoint.java:
##
@@ -225,6 +227,38 @@ public void testCatalogReplicaReplicationWithFlushAndCompaction() throws Excepti
     }
   }
 
+  @Test
+  public void testCatalogReplicaReplicationWALRolledAndDeleted() throws Exception {
+    Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration());
+    TableName tableName = TableName.valueOf("hbase:meta");
+    Table table = connection.getTable(tableName);
+    try {
+      MiniHBaseCluster cluster = HTU.getHBaseCluster();
+      HRegionServer hrs = cluster.getRegionServer(cluster.getServerHoldingMeta());
+      ReplicationSource source = (ReplicationSource) hrs.getReplicationSourceService()
+        .getReplicationManager().catalogReplicationSource.get();
+      ((ReplicationPeerImpl) source.replicationPeer).setPeerState(false);
+      // load the data to the table
+      for (int i = 0; i < 5; i++) {
+        LOG.info("Writing data from " + i * 1000 + " to " + (i * 1000 + 1000));
+        HTU.loadNumericRows(table, HConstants.CATALOG_FAMILY, i * 1000, i * 1000 + 1000);
+        LOG.info("flushing table");
+        HTU.flush(tableName);
+        LOG.info("compacting table");
+        if (i < 4) {
+          HTU.compact(tableName, false);
+        }
+      }
+      HTU.getHBaseCluster().getMaster().getLogCleaner().triggerCleanerNow().get(1,
+        TimeUnit.SECONDS);
+      ((ReplicationPeerImpl) source.replicationPeer).setPeerState(true);
+      verifyReplication(tableName, numOfMetaReplica, 0, 5000, HConstants.CATALOG_FAMILY);
+    } finally {
+      table.close();

Review Comment:
   Use try with resources.



##
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestMetaRegionReplicaReplicationEndpoint.java:
##
@@ -225,6 +227,38 @@ public void testCatalogReplicaReplicationWithFlushAndCompaction() throws Excepti
     }
   }
 
+  @Test
+  public void testCatalogReplicaReplicationWALRolledAndDeleted() throws Exception {
+    Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration());
+    TableName tableName = TableName.valueOf("hbase:meta");
+    Table table = connection.getTable(tableName);
+    try {
+      MiniHBaseCluster cluster = HTU.getHBaseCluster();
+      HRegionServer hrs = cluster.getRegionServer(cluster.getServerHoldingMeta());
+      ReplicationSource source = (ReplicationSource) hrs.getReplicationSourceService()
+        .getReplicationManager().catalogReplicationSource.get();
+      ((ReplicationPeerImpl) source.replicationPeer).setPeerState(false);

Review Comment:
   Is this enough to make sure that the replication is already stopped? The check is in another thread, so we'd better add some checks here to confirm that the replication source has stopped.



##
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestMetaRegionReplicaReplicationEndpoint.java:
##
@@ -225,6 +227,38 @@ public void testCatalogReplicaReplicationWithFlushAndCompaction() throws Excepti
     }
   }
 
+  @Test
+  public void testCatalogReplicaReplicationWALRolledAndDeleted() throws Exception {
+    Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration());
+    TableName tableName = TableName.valueOf("hbase:meta");
+    Table table = connection.getTable(tableName);
+    try {
+      MiniHBaseCluster cluster = HTU.getHBaseCluster();
+      HRegionServer hrs = cluster.getRegionServer(cluster.getServerHoldingMeta());
+      ReplicationSource source = (ReplicationSource) hrs.getReplicationSourceService()
+        .getReplicationManager().catalogReplicationSource.get();
+      ((ReplicationPeerImpl) source.replicationPeer).setPeerState(false);
+      // load the data to the table
+      for (int i = 0; i < 5; i++) {
+        LOG.info("Writing data from " + i * 1000 + " to " + (i * 1000 + 1000));
+        HTU.loadNumericRows(table, HConstants.CATALOG_FAMILY, i * 1000, i * 1000 + 1000);
+        LOG.info("flushing table");
+        HTU.flush(tableName);
+        LOG.info("compacting table");
+        if (i < 4) {
+          HTU.compact(tableName, false);
+        }
+      }
+      HTU.getHBaseCluster().getMaster().getLogCleaner().triggerCleanerNow().get(1,
+        TimeUnit.SECONDS);
+      ((ReplicationPeerImpl) source.replicationPeer).setPeerState(true);
+      verifyReplication(tableName, numOfMetaReplica, 0, 5000, HConstants.CATALOG_FAMILY);

Review Comment:
   Here we just checked whether data can be read? I think the main thing here 
is that we should make sure the ReplicationSource can still replicate things 
out, i.e., it is not stuck forever.
   
   So maybe we should load some data, flush, delete the WAL, enable the 
replication source, and then load more data, 

[jira] [Created] (HBASE-27878) balance_rsgroup NullPointerException

2023-05-23 Thread zhengsicheng (Jira)
zhengsicheng created HBASE-27878:


 Summary: balance_rsgroup NullPointerException
 Key: HBASE-27878
 URL: https://issues.apache.org/jira/browse/HBASE-27878
 Project: HBase
  Issue Type: Bug
Reporter: zhengsicheng
Assignee: zhengsicheng


hbase(main):001:0> balance_rsgroup 'default'

ERROR: java.io.IOException: Cannot invoke 
"org.apache.hadoop.hbase.ServerName.getAddress()" because "currentHostServer" 
is null
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:466)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
        at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
Caused by: java.lang.NullPointerException: Cannot invoke 
"org.apache.hadoop.hbase.ServerName.getAddress()" because "currentHostServer" 
is null
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer.correctAssignments(RSGroupBasedLoadBalancer.java:320)
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer.balanceCluster(RSGroupBasedLoadBalancer.java:126)
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:461)
        at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:301)
        at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:14948)
        at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:921)
        at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:394)
        ... 3 more

For usage try 'help "balance_rsgroup"'
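The stack trace shows that a region's current host is looked up in a map of live servers and the lookup can return null (the server is dead or unknown), after which `getAddress()` throws. A simplified, null-safe sketch of that grouping step, using hypothetical stand-in types rather than the actual HBase classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of the failure mode in correctAssignments: a
// region's current host may no longer be in the live-server map, so the
// lookup returns null. The fix is to handle that case instead of
// dereferencing it. Types here are stand-ins, not HBase classes.
public class CorrectAssignments {
  static Map<String, List<String>> correctAssignments(Map<String, String> regionToServer,
      Map<String, String> liveServers) {
    Map<String, List<String>> grouped = new HashMap<>();
    for (Map.Entry<String, String> e : regionToServer.entrySet()) {
      String currentHostServer = liveServers.get(e.getValue());
      if (currentHostServer == null) {
        // Dead or unknown server: collect the region for reassignment
        // instead of throwing a NullPointerException.
        grouped.computeIfAbsent("misplaced", k -> new ArrayList<>()).add(e.getKey());
        continue;
      }
      grouped.computeIfAbsent(currentHostServer, k -> new ArrayList<>()).add(e.getKey());
    }
    return grouped;
  }

  public static void main(String[] args) {
    Map<String, String> regions = Map.of("region-a", "rs1", "region-b", "rs-gone");
    Map<String, String> live = Map.of("rs1", "rs1.example.com:16020");
    System.out.println(correctAssignments(regions, live).get("misplaced")); // [region-b]
  }
}
```

Whatever the eventual fix in RSGroupBasedLoadBalancer looks like, the key point is the same: the lookup result must be checked before `getAddress()` is called.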





[GitHub] [hbase] Apache-HBase commented on pull request #5226: [Draft] HBASE-27798: Client side should back off based on wait interval in RpcThrottlingException

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5226:
URL: https://github.com/apache/hbase/pull/5226#issuecomment-1559243142

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 38s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 50s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 41s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 14s |  hbase-client: The patch 
generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  22m 13s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.2.4 3.3.5.  |
   | +1 :green_heart: |  spotless  |   0m 40s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  40m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5226 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 5d78f800ba8b 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 
24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 0ba562ab4d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt
 |
   | Max. process+thread count | 84 (vs. ulimit of 3) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5226: [Draft] HBASE-27798: Client side should back off based on wait interval in RpcThrottlingException

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5226:
URL: https://github.com/apache/hbase/pull/5226#issuecomment-1559210132

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 59s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 40s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 49s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 47s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 36s |  hbase-client in the patch passed.  
|
   |  |   |  28m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5226 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f9d1570abc0b 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 0ba562ab4d |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/testReport/
 |
   | Max. process+thread count | 361 (vs. ulimit of 3) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5226: [Draft] HBASE-27798: Client side should back off based on wait interval in RpcThrottlingException

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #5226:
URL: https://github.com/apache/hbase/pull/5226#issuecomment-1559207002

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 53s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 24s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 27s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 50s |  hbase-client in the patch passed.  |
   |  |   |  25m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/5226 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 688a2197b551 5.4.0-148-generic #165-Ubuntu SMP Tue Apr 18 08:53:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 0ba562ab4d |
   | Default Java | Temurin-1.8.0_352-b08 |
   | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/testReport/ |
   | Max. process+thread count | 348 (vs. ulimit of 3) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5226/8/console |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] thangTang commented on pull request #5243: HBASE-27873 Asyncfs may print too many WARN logs when replace writer

2023-05-23 Thread via GitHub


thangTang commented on PR #5243:
URL: https://github.com/apache/hbase/pull/5243#issuecomment-1558842120

   > Since we use exponential backoff here, the log output is acceptable? We 
will soon increase the interval between each warn message? 
   
   Yes, you are right, but I still see it often. And since its level is WARN, it looks alarming; after a little research, though, I figured out that it shouldn't actually be a problem, which is why I want to change it.
   
   > At client side, we have configuration to not output the error message in 
the first several retries, it is called 
`hbase.client.start.log.errors.counter`. Maybe we can apply the same pattern 
here?
   
   We can do that, but that config is a client-side config (as its name suggests). Do you think we need to introduce a new server-side config?
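
   The pattern under discussion can be sketched as follows. This is only an illustrative sketch, not the actual HBase code: the class `RetryLogSketch`, its method names, and the constants are hypothetical, standing in for the behavior of `hbase.client.start.log.errors.counter` (stay quiet for the first few failures) combined with capped exponential backoff between retries.

   ```java
   // Hypothetical sketch: suppress WARN for the first few retries and grow the
   // retry interval exponentially up to a cap. Illustrative only; not HBase code.
   public class RetryLogSketch {

       /** Log at WARN only after more failures than the configured threshold. */
       static boolean logAtWarn(int failedAttempts, int startLogErrorsCount) {
           return failedAttempts > startLogErrorsCount;
       }

       /** Exponential backoff: base * 2^attempts, capped at maxMillis. */
       static long backoffMillis(long baseMillis, int failedAttempts, long maxMillis) {
           // Clamp the shift so the multiplier cannot overflow a long.
           long interval = baseMillis * (1L << Math.min(failedAttempts, 30));
           return Math.min(interval, maxMillis);
       }

       public static void main(String[] args) {
           for (int attempt = 1; attempt <= 6; attempt++) {
               String level = logAtWarn(attempt, 3) ? "WARN" : "DEBUG";
               System.out.println("attempt " + attempt + ": level=" + level
                   + ", next retry in " + backoffMillis(100, attempt, 60_000) + " ms");
           }
       }
   }
   ```

   With a threshold of 3, attempts 1-3 would log at a quiet level and only persistent failures surface as WARN, which is the effect being proposed for the server side here.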
   
   





[GitHub] [hbase-operator-tools] Apache-HBase commented on pull request #121: HBASE-27831 Introduce zookeeper-single-instance component

2023-05-23 Thread via GitHub


Apache-HBase commented on PR #121:
URL: 
https://github.com/apache/hbase-operator-tools/pull/121#issuecomment-1558813801

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +0 :ok: |  yamllint  |   0m  0s |  yamllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 new or modified test files.  |
   ||| _ HBASE-27827-kubernetes-deployment Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  HBASE-27827-kubernetes-deployment passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  HBASE-27827-kubernetes-deployment passed  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  HBASE-27827-kubernetes-deployment passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  6s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  hadolint  |   0m  0s |  There were no new hadolint issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 16s |  hbase-kubernetes-deployment in the patch passed.  |
   | -1 :x: |  unit  |   6m 14s |  root in the patch failed.  |
   | -1 :x: |  unit  |   0m  5s |  hbase-kubernetes-kustomize in the patch failed.  |
   | +1 :green_heart: |  unit  |   0m  5s |  hbase-kubernetes-testing-image in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 20s |  The patch generated 1 ASF License warnings.  |
   |  |   |  13m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-121/7/artifact/yetus-precommit-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase-operator-tools/pull/121 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs javac javadoc unit xml compile markdownlint yamllint |
   | uname | Linux 4e638f4ec2b2 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | HBASE-27827-kubernetes-deployment / f521479 |
   | Default Java | Oracle Corporation-1.8.0_342-b07 |
   | unit | https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-121/7/artifact/yetus-precommit-check/output/patch-unit-root.txt |
   | unit | https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-121/7/artifact/yetus-precommit-check/output/patch-unit-hbase-kubernetes-deployment_hbase-kubernetes-kustomize.txt |
   | Test Results | https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-121/7/testReport/ |
   | asflicense | https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-121/7/artifact/yetus-precommit-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 1277 (vs. ulimit of 5000) |
   | modules | C: hbase-kubernetes-deployment . hbase-kubernetes-deployment/hbase-kubernetes-kustomize hbase-kubernetes-deployment/hbase-kubernetes-testing-image U: . |
   | Console output | https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-121/7/console |
   | versions | git=2.30.2 maven=3.8.6 shellcheck=0.7.1 hadolint=Haskell Dockerfile Linter 2.12.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] ragarkar commented on pull request #5210: HBASE-27820: HBase is not starting due to Jersey library conflicts wi…

2023-05-23 Thread via GitHub


ragarkar commented on PR #5210:
URL: https://github.com/apache/hbase/pull/5210#issuecomment-1558692051

   retest





[GitHub] [hbase] ragarkar commented on pull request #5210: HBASE-27820: HBase is not starting due to Jersey library conflicts wi…

2023-05-23 Thread via GitHub


ragarkar commented on PR #5210:
URL: https://github.com/apache/hbase/pull/5210#issuecomment-1558691748

   The test failures seen above are caused by timeouts. Is it possible to either suppress these errors or rerun the tests to see whether they pass on a second attempt?
   
   
   [ERROR] org.apache.hadoop.hbase.io.hfile.TestBlockEvictionOnRegionMovement  Time elapsed: 772.781 s  <<< ERROR!
   org.junit.runners.model.TestTimedOutException: test timed out after 780 seconds
	at java.base@11.0.17/java.lang.Object.wait(Native Method)
	at java.base@11.0.17/java.lang.Thread.join(Thread.java:1308)
	at app//org.apache.hadoop.hbase.util.Threads.threadDumpingIsAlive(Threads.java:111)
	at app//org.apache.hadoop.hbase.LocalHBaseCluster.join(LocalHBaseCluster.java:396)
	at app//org.apache.hadoop.hbase.SingleProcessHBaseCluster.waitUntilShutDown(SingleProcessHBaseCluster.java:886)
	at app//org.apache.hadoop.hbase.HBaseTestingUtil.shutdownMiniHBaseCluster(HBaseTestingUtil.java:1060)
	at app//org.apache.hadoop.hbase.HBaseTestingUtil.shutdownMiniCluster(HBaseTestingUtil.java:1042)
	at app//org.apache.hadoop.hbase.io.hfile.TestBlockEvictionOnRegionMovement.tearDown(TestBlockEvictionOnRegionMovement.java:173)
	at java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566)
	at app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at app//org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at app//org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at app//org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at app//org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at app//org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at app//org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at app//org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at app//org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at app//org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.base@11.0.17/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base@11.0.17/java.lang.Thread.run(Thread.java:829)
   
   
   
   
   [ERROR] org.apache.hadoop.hbase.replication.TestReplicationSmallTests.testGetReplicationPeerState[0: serialPeer=true]  Time elapsed: 606.411 s  <<< ERROR!
   org.apache.hadoop.hbase.exceptions.TimeoutIOException: java.util.concurrent.TimeoutException
	at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:202)
	at org.apache.hadoop.hbase.client.Admin.removeReplicationPeer(Admin.java:1998)
	at org.apache.hadoop.hbase.replication.TestReplicationBase.removePeer(TestReplicationBase.java:309)
	at org.apache.hadoop.hbase.replication.TestReplicationBase.tearDownBase(TestReplicationBase.java:315)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at