[jira] [Created] (HBASE-27728) Implement a tool to migrate replication peer data between different storage implementations

2023-03-16 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27728:
-

 Summary: Implement a tool to migrate replication peer data between 
different storage implementations
 Key: HBASE-27728
 URL: https://issues.apache.org/jira/browse/HBASE-27728
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Duo Zhang


Replication peer data is usually stable unless you modify it manually, so it is 
OK not to migrate it automatically. Instead, we could provide a tool to migrate 
the replication peer data.

After migration, users can load the new configuration through an online 
configuration change or a cluster restart.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HBASE-27726) ruby shell does not handle SyntaxError exceptions properly

2023-03-16 Thread Rishabh Murarka (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701518#comment-17701518
 ] 

Rishabh Murarka edited comment on HBASE-27726 at 3/17/23 5:38 AM:
--

Hi, I would like to work on this. Can you please assign this to me?


was (Author: JIRAUSER299360):
Hi, I would like to work upon this. Can you please assign this to me.

> ruby shell does not handle SyntaxError exceptions properly
> --
>
> Key: HBASE-27726
> URL: https://issues.apache.org/jira/browse/HBASE-27726
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.5.2
>Reporter: chiranjeevi
>Priority: Minor
>
> hbase:002:0> create 't2', 'cf'
> 2023-03-14 04:54:50,061 INFO  [main] client.HBaseAdmin: Operation: CREATE, 
> Table Name: default:t2, procId: 2140 completed
> Created table t2
> Took 1.1503 seconds
> => Hbase::Table - t2
> hbase:003:0> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
> SyntaxError: (hbase):3: syntax error, unexpected tIDENTIFIER
> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
>  ^~~
>   eval at org/jruby/RubyKernel.java:1091
>   evaluate at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/workspace.rb:85
>   evaluate at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:385
>     eval_input at uri:classloader:/irb/hirb.rb:115
>  signal_status at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:647
>     eval_input at uri:classloader:/irb/hirb.rb:112
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:246
>   loop at org/jruby/RubyKernel.java:1507
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:232
>  catch at org/jruby/RubyKernel.java:1237
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:231
>     eval_input at uri:classloader:/irb/hirb.rb:111
>    run at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:428
>  catch at org/jruby/RubyKernel.java:1237
>    run at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:427
>  at classpath:/jar-bootstrap.rb:226



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27727) Implement filesystem-based Replication peer storage

2023-03-16 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27727:
-

 Summary: Implement filesystem-based Replication peer storage
 Key: HBASE-27727
 URL: https://issues.apache.org/jira/browse/HBASE-27727
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Duo Zhang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27726) ruby shell does not handle SyntaxError exceptions properly

2023-03-16 Thread Rishabh Murarka (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701518#comment-17701518
 ] 

Rishabh Murarka commented on HBASE-27726:
-

Hi, I would like to work upon this. Can you please assign this to me?

> ruby shell does not handle SyntaxError exceptions properly
> --
>
> Key: HBASE-27726
> URL: https://issues.apache.org/jira/browse/HBASE-27726
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.5.2
>Reporter: chiranjeevi
>Priority: Minor
>
> hbase:002:0> create 't2', 'cf'
> 2023-03-14 04:54:50,061 INFO  [main] client.HBaseAdmin: Operation: CREATE, 
> Table Name: default:t2, procId: 2140 completed
> Created table t2
> Took 1.1503 seconds
> => Hbase::Table - t2
> hbase:003:0> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
> SyntaxError: (hbase):3: syntax error, unexpected tIDENTIFIER
> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
>  ^~~
>   eval at org/jruby/RubyKernel.java:1091
>   evaluate at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/workspace.rb:85
>   evaluate at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:385
>     eval_input at uri:classloader:/irb/hirb.rb:115
>  signal_status at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:647
>     eval_input at uri:classloader:/irb/hirb.rb:112
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:246
>   loop at org/jruby/RubyKernel.java:1507
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:232
>  catch at org/jruby/RubyKernel.java:1237
>   each_top_level_statement at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:231
>     eval_input at uri:classloader:/irb/hirb.rb:111
>    run at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:428
>  catch at org/jruby/RubyKernel.java:1237
>    run at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:427
>  at classpath:/jar-bootstrap.rb:226



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] virajjasani commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


virajjasani commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1473144263

   Thanks @NihalJain, +1.
   Will merge this in a day's time if there is no objection.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27723) Fix brotli4j licence issue on native-osx-aarch64

2023-03-16 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701507#comment-17701507
 ] 

Duo Zhang commented on HBASE-27723:
---

Is this for fixing the problem when building on an M1 Mac?

> Fix brotli4j licence issue on native-osx-aarch64
> 
>
> Key: HBASE-27723
> URL: https://issues.apache.org/jira/browse/HBASE-27723
> Project: HBase
>  Issue Type: Improvement
>Reporter: Frens Jan Rumph
>Priority: Major
>
> Apparently the licence entry for {{brotli4j}} is malformed and is fixed up in 
> {{supplemental-models.xml}}. However, it didn't yet cover 
> {{native-osx-aarch64}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27686) Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701500#comment-17701500
 ] 

Hudson commented on HBASE-27686:


Results for branch branch-2
[build #769 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Recovery of BucketCache and Prefetched data after RS Crash
> --
>
> Key: HBASE-27686
> URL: https://issues.apache.org/jira/browse/HBASE-27686
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: Shanmukha Haripriya Kota
>Assignee: Shanmukha Haripriya Kota
>Priority: Major
>
> HBASE-27313 introduced the ability to persist a list of hfiles for which 
> prefetch has already completed, so that we can avoid prefetching those 
> files again after a graceful restart. It doesn't cover crash scenarios, 
> however: if the RS is killed or stopped abnormally, the list isn't saved. 
> This change aims to persist the list of already-prefetched files from a 
> background thread that periodically checks the cache state and persists the 
> list when updates have happened.
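The background-persister idea described above could be sketched as follows. This is an illustrative, minimal sketch only; the class and method names (`PrefetchListPersister`, `markPrefetched`, `persistIfDirty`) are assumptions for this example, not HBase's actual implementation of HBASE-27686:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: track prefetched files and persist the list
// periodically, so a crash loses at most one interval of updates.
class PrefetchListPersister {
  private final Set<String> prefetched = ConcurrentHashMap.newKeySet();
  private volatile boolean dirty = false;
  private final Path target;

  PrefetchListPersister(Path target) {
    this.target = target;
  }

  // Called whenever prefetch of an hfile completes.
  void markPrefetched(String hfileName) {
    if (prefetched.add(hfileName)) {
      dirty = true; // the on-disk list is now stale
    }
  }

  // Write only when updates have happened since the last persist,
  // so the periodic task is cheap while the cache is idle.
  void persistIfDirty() throws Exception {
    if (!dirty) {
      return;
    }
    dirty = false;
    Files.write(target, String.join("\n", prefetched).getBytes(StandardCharsets.UTF_8));
  }

  // Schedule the persister on a background pool.
  void start(ScheduledExecutorService pool, long periodMs) {
    pool.scheduleAtFixedRate(() -> {
      try {
        persistIfDirty();
      } catch (Exception e) {
        dirty = true; // keep the flag set so the next tick retries
      }
    }, periodMs, periodMs, TimeUnit.MILLISECONDS);
  }

  public static void main(String[] args) throws Exception {
    Path tmp = Files.createTempFile("prefetched", ".list");
    PrefetchListPersister persister = new PrefetchListPersister(tmp);
    persister.markPrefetched("hfile-1");
    persister.persistIfDirty();
    System.out.println(new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8));
  }
}
```

The dirty flag is the key design point: the periodic thread does no I/O unless the cached set actually changed since the last write.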



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27708) CPU hot-spot resolving User subject

2023-03-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701499#comment-17701499
 ] 

Hudson commented on HBASE-27708:


Results for branch branch-2
[build #769 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/769/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CPU hot-spot resolving User subject
> ---
>
> Key: HBASE-27708
> URL: https://issues.apache.org/jira/browse/HBASE-27708
> Project: HBase
>  Issue Type: Bug
>  Components: Client, tracing
>Affects Versions: 2.5.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4
>
> Attachments: 27708.jpg
>
>
> Even with OpenTelemetry tracing disabled, we see contention related to 
> populating the string representation of the User principal on the client 
> side. Can the HBase connection cache this?
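The caching suggested in the report could look like the following hedged sketch: memoize the expensive string form of the principal once instead of recomputing it per RPC. The class and method names here are hypothetical, not HBase client API:

```java
// Hedged sketch of the fix direction for the reported hot-spot:
// compute the string form of the User principal once and cache it.
class CachedPrincipalName {
  private final String shortName;
  private volatile String cached; // benign race: at worst computed twice

  CachedPrincipalName(String shortName) {
    this.shortName = shortName;
  }

  String asString() {
    String s = cached;
    if (s == null) {
      s = expensiveFormat(); // stands in for walking the JAAS Subject
      cached = s;
    }
    return s;
  }

  private String expensiveFormat() {
    return "user=" + shortName;
  }

  public static void main(String[] args) {
    CachedPrincipalName user = new CachedPrincipalName("alice");
    System.out.println(user.asString());
    // Subsequent calls return the cached string without recomputing.
    System.out.println(user.asString() == user.asString());
  }
}
```

A plain volatile field is enough here because the computed value is immutable and idempotent; two racing threads may both format once, but both publish the same result.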



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-24762) Purge protobuf java 2.5.0 dependency

2023-03-16 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701497#comment-17701497
 ] 

Duo Zhang commented on HBASE-24762:
---

The CP implementation for 2.x still depends on protobuf-2.5, so it is not easy 
to purge the protobuf-2.5 dependency for 2.x.

See HBASE-27436.

> Purge protobuf java 2.5.0 dependency
> 
>
> Key: HBASE-24762
> URL: https://issues.apache.org/jira/browse/HBASE-24762
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, Protobufs
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> On the master branch, we have removed the hbase-protocol module, so in 
> general we do not need to depend on protobuf 2.5.0 directly. Especially since 
> Hadoop 3.3.0 no longer depends on protobuf 2.5.0, we should make sure HBase 
> does not introduce it either.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1473088622

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 56s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 19s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 224m 27s |  hbase-server in the patch passed.  
|
   |  |   | 249m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5109 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fab52218cfb4 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 58cb1f4799 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/testReport/
 |
   | Max. process+thread count | 2556 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1473076619

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 49s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 204m 15s |  hbase-server in the patch passed.  
|
   |  |   | 230m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5109 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 24f6bff8b659 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 
10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 58cb1f4799 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/testReport/
 |
   | Max. process+thread count | 2500 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27725) HBase blocking thread on java.util.concurrent.ConcurrentHashMap.computeIfAbsent

2023-03-16 Thread chenfengge (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenfengge updated HBASE-27725:
---
Summary: HBase blocking thread on 
java.util.concurrent.ConcurrentHashMap.computeIfAbsent  (was: HBase blocking 
thread on )

> HBase blocking thread on 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent
> ---
>
> Key: HBASE-27725
> URL: https://issues.apache.org/jira/browse/HBASE-27725
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: chenfengge
>Priority: Major
>
> We ran an hbase pe RandomRead test with 100% memory-cache hits on HBase 2.5.0.
> We found many blocked regionserver handler threads, and CPU utilisation 
> plateaus at about 50%.
> Here is the blocking stack:
> RpcServer.default.FPBQ.Fifo.handler=116,port=16020" #299 daemon prio=5 
> os_prio=0 tid=0x825d5000 nid=0x208e51 waiting for monitor entry 
> [0xffbe67734000]
>     java.lang.Thread.State: BLOCKED (on object monitor)
>      at 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1674)
>      - waiting to lock <0xfff75f465128> (a 
> java.util.concurrent.ConcurrentHashMap$Node)
>      at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.getOrCreateTableMeter(MetricsTableQueryMeterImpl.java:77)
>      at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.updateTableReadQueryMeter(MetricsTableQueryMeterImpl.java:82)
>      at 
> org.apache.hadoop.hbase.regionserver.RegionServerTableMetrics.updateTableReadQueryMeter(RegionServerTableMetrics.java:93)
>      at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServer.updateReadQueryMeter(MetricsRegionServer.java:283)
>      at 
> org.apache.hadoop.hbase.regionserver.HRegion.metricsUpdateForGet(HRegion.java:7401)
>      at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2667)
>      at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2567)
>      at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45945)
>      at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:385)
>      at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>      at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>      at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) 
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27726) ruby shell does not handle SyntaxError exceptions properly

2023-03-16 Thread chiranjeevi (Jira)
chiranjeevi created HBASE-27726:
---

 Summary: ruby shell does not handle SyntaxError exceptions properly
 Key: HBASE-27726
 URL: https://issues.apache.org/jira/browse/HBASE-27726
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.5.2
Reporter: chiranjeevi


hbase:002:0> create 't2', 'cf'
2023-03-14 04:54:50,061 INFO  [main] client.HBaseAdmin: Operation: CREATE, 
Table Name: default:t2, procId: 2140 completed
Created table t2
Took 1.1503 seconds
=> Hbase::Table - t2
hbase:003:0> alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
SyntaxError: (hbase):3: syntax error, unexpected tIDENTIFIER
alter 't2', NAME ⇒ 'cf', VERSIONS ⇒ 5
 ^~~
  eval at org/jruby/RubyKernel.java:1091
  evaluate at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/workspace.rb:85
  evaluate at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:385
    eval_input at uri:classloader:/irb/hirb.rb:115
 signal_status at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:647
    eval_input at uri:classloader:/irb/hirb.rb:112
  each_top_level_statement at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:246
  loop at org/jruby/RubyKernel.java:1507
  each_top_level_statement at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:232
 catch at org/jruby/RubyKernel.java:1237
  each_top_level_statement at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:231
    eval_input at uri:classloader:/irb/hirb.rb:111
   run at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:428
 catch at org/jruby/RubyKernel.java:1237
   run at 
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:427
 at classpath:/jar-bootstrap.rb:226



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27725) HBase blocking thread on

2023-03-16 Thread chenfengge (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenfengge updated HBASE-27725:
---
Summary: HBase blocking thread on   (was: Add ConcurrentHashMap#get() call 
before ConcurrentHashMap#computeIfAbsent())

> HBase blocking thread on 
> -
>
> Key: HBASE-27725
> URL: https://issues.apache.org/jira/browse/HBASE-27725
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: chenfengge
>Priority: Major
>
> We ran an hbase pe RandomRead test with 100% memory-cache hits on HBase 2.5.0.
> We found many blocked regionserver handler threads, and CPU utilisation 
> plateaus at about 50%.
> Here is the blocking stack:
> RpcServer.default.FPBQ.Fifo.handler=116,port=16020" #299 daemon prio=5 
> os_prio=0 tid=0x825d5000 nid=0x208e51 waiting for monitor entry 
> [0xffbe67734000]
>     java.lang.Thread.State: BLOCKED (on object monitor)
>      at 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1674)
>      - waiting to lock <0xfff75f465128> (a 
> java.util.concurrent.ConcurrentHashMap$Node)
>      at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.getOrCreateTableMeter(MetricsTableQueryMeterImpl.java:77)
>      at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.updateTableReadQueryMeter(MetricsTableQueryMeterImpl.java:82)
>      at 
> org.apache.hadoop.hbase.regionserver.RegionServerTableMetrics.updateTableReadQueryMeter(RegionServerTableMetrics.java:93)
>      at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServer.updateReadQueryMeter(MetricsRegionServer.java:283)
>      at 
> org.apache.hadoop.hbase.regionserver.HRegion.metricsUpdateForGet(HRegion.java:7401)
>      at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2667)
>      at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2567)
>      at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45945)
>      at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:385)
>      at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>      at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>      at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) 
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27725) Add ConcurrentHashMap#get() call before ConcurrentHashMap#computeIfAbsent()

2023-03-16 Thread chenfengge (Jira)
chenfengge created HBASE-27725:
--

 Summary: Add ConcurrentHashMap#get() call before 
ConcurrentHashMap#computeIfAbsent()
 Key: HBASE-27725
 URL: https://issues.apache.org/jira/browse/HBASE-27725
 Project: HBase
  Issue Type: Improvement
  Components: Performance
Reporter: chenfengge


We ran an hbase pe RandomRead test with 100% memory-cache hits on HBase 2.5.0.

We found many blocked regionserver handler threads, and CPU utilisation 
plateaus at about 50%.

Here is the blocking stack:

RpcServer.default.FPBQ.Fifo.handler=116,port=16020" #299 daemon prio=5 
os_prio=0 tid=0x825d5000 nid=0x208e51 waiting for monitor entry 
[0xffbe67734000]

    java.lang.Thread.State: BLOCKED (on object monitor)
     at 
java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1674)
     - waiting to lock <0xfff75f465128> (a 
java.util.concurrent.ConcurrentHashMap$Node)
     at 
org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.getOrCreateTableMeter(MetricsTableQueryMeterImpl.java:77)
     at 
org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.updateTableReadQueryMeter(MetricsTableQueryMeterImpl.java:82)
     at 
org.apache.hadoop.hbase.regionserver.RegionServerTableMetrics.updateTableReadQueryMeter(RegionServerTableMetrics.java:93)
     at 
org.apache.hadoop.hbase.regionserver.MetricsRegionServer.updateReadQueryMeter(MetricsRegionServer.java:283)
     at 
org.apache.hadoop.hbase.regionserver.HRegion.metricsUpdateForGet(HRegion.java:7401)
     at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2667)
     at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2567)
     at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45945)
     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:385)
     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
     at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) 
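The pattern proposed in the summary, trying a lock-free get() before falling back to computeIfAbsent(), can be sketched as below. The names (`TableMeterRegistry`, `getOrCreateMeter`) are hypothetical stand-ins for the metrics code in the stack trace, not the actual HBase classes; the motivation is that on JDK 8, ConcurrentHashMap.computeIfAbsent() may lock the bin even when the key is already present, which is what the handler threads above are blocked on:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch of the proposed get-before-computeIfAbsent pattern.
class TableMeterRegistry {
  private final ConcurrentHashMap<String, LongAdder> meters = new ConcurrentHashMap<>();

  LongAdder getOrCreateMeter(String tableName) {
    // Fast path: get() never locks, so the common case (meter already
    // exists) avoids the bin lock that computeIfAbsent() can take even
    // for a present key on JDK 8.
    LongAdder meter = meters.get(tableName);
    if (meter != null) {
      return meter;
    }
    // Slow path, taken only for the first access per table; still
    // atomic, so concurrent first accesses create exactly one meter.
    return meters.computeIfAbsent(tableName, k -> new LongAdder());
  }

  public static void main(String[] args) {
    TableMeterRegistry registry = new TableMeterRegistry();
    registry.getOrCreateMeter("default:t2").increment();
    // Subsequent lookups hit the lock-free fast path and see the same meter.
    System.out.println(registry.getOrCreateMeter("default:t2").sum());
  }
}
```

The fallback to computeIfAbsent() keeps creation atomic; the extra get() only short-circuits the already-created case, which is the hot path the profile shows.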

 

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] chaijunjie0101 commented on pull request #5106: HBASE-27718 The regionStateNode only need remove once in regionOffline

2023-03-16 Thread via GitHub


chaijunjie0101 commented on PR #5106:
URL: https://github.com/apache/hbase/pull/5106#issuecomment-1473033541

   > It was like this in the beginning, when introduced in 
[HBASE-14614](https://issues.apache.org/jira/browse/HBASE-14614), for me I do 
not think we need this extra line to remove it from offlineRegions.
   
   Thanks for answering, I think so too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] vli02 commented on pull request #5081: HBASE-27684: add client metrics related to user region lock.

2023-03-16 Thread via GitHub


vli02 commented on PR #5081:
URL: https://github.com/apache/hbase/pull/5081#issuecomment-1472980140

   Are these two known/flaky test failures? Can anyone help re-run the pipeline 
if needed? Thanks!
   
https://ci-hbase.apache.org/blue/organizations/jenkins/HBase-PreCommit-GitHub-PR/detail/PR-5081/8/tests
   @virajjasani @apurtell 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5081: HBASE-27684: add client metrics related to user region lock.

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5081:
URL: https://github.com/apache/hbase/pull/5081#issuecomment-1472974281

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 39s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 31s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 31s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   8m  7s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 207m  6s |  hbase-server in the patch failed.  |
   |  |   | 242m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5081 |
   | JIRA Issue | HBASE-27684 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux d06fc77d13ef 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d136c6d7c5 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/testReport/
 |
   | Max. process+thread count | 2716 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5081: HBASE-27684: add client metrics related to user region lock.

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5081:
URL: https://github.com/apache/hbase/pull/5081#issuecomment-1472960421

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 22s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 22s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 39s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 194m 32s |  hbase-server in the patch passed.  
|
   |  |   | 225m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5081 |
   | JIRA Issue | HBASE-27684 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c8f1da381f87 5.4.0-1093-aws #102~18.04.2-Ubuntu SMP Wed Dec 
7 00:31:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d136c6d7c5 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/testReport/
 |
   | Max. process+thread count | 2541 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1472936798

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 32s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 58s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 41s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m  5s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  9s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  3s |  the patch passed  |
   | -0 :warning: |  javac  |   2m 27s |  hbase-server generated 1 new + 194 
unchanged - 1 fixed = 195 total (was 195)  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 51s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 38s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5109 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 0aa99915d987 5.4.0-1093-aws #102~18.04.2-Ubuntu SMP Wed Dec 
7 00:31:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 58cb1f4799 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/artifact/yetus-general-check/output/diff-compile-javac-hbase-server.txt
 |
   | Max. process+thread count | 78 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1472904311

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 19s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 23s |  hbase-server generated 1 new + 23 
unchanged - 0 fixed = 24 total (was 23)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 11s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 225m 56s |  hbase-server in the patch passed.  
|
   |  |   | 250m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5109 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 30ef047bc159 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 58cb1f4799 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/testReport/
 |
   | Max. process+thread count | 2710 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1472896658

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 23s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 44s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 56s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 26s |  hbase-server generated 1 new + 96 
unchanged - 0 fixed = 97 total (was 96)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 30s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 211m 25s |  hbase-server in the patch passed.  
|
   |  |   | 238m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5109 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 3457dcab9cd5 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 
10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 58cb1f4799 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/artifact/yetus-jdk11-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/testReport/
 |
   | Max. process+thread count | 2458 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:41 PM:
---

Let's say, as an example, that to compress a particular HFile block the 
BlockCompressionStream calls compress() three times and then finish().

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.

||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is that aircompressor's limited Zstandard support 
only operates in one-shot mode and does not offer a streaming API compatible 
with the native C zstandard library.

The Hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression-related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So the data framing within the blocks of BlockCompressionStream differs enough 
that the two formats are not compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. I looked at the zstd-jni code and indeed it uses 
ZSTD_compressStream, ZSTD_finishStream, ZSTD_decompressStream in the same way 
as the Hadoop native codec when you use those stream classes on the Java side. 
It would amount to implementing a new HBase codec. Call it ZstdStreamCodec 
maybe.

I could try that and see what happens, i.e. whether the two implementations 
would be read- and write-compatible.
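The framing difference described above can be illustrated with an analogous pair of compressors from the Python standard library. This is an analogy only: it uses DEFLATE via zlib, not zstd, and none of the HBase or Hadoop codec classes. Buffering everything and compressing once at finish() versus emitting framed output per compress() call produces byte streams that both decompress to the same payload but are not bit-identical, which is exactly the kind of mismatch that breaks block-level compatibility.

```python
import zlib

data = b"hbase block payload " * 200

# One-shot: buffer everything, compress once at finish()
# (analogous to the HBase native codec's use of ZSTD_compress2).
one_shot = zlib.compress(data)

# Streaming: compress chunk by chunk, flushing framed output per
# compress() call (analogous to Hadoop's ZSTD_compressStream usage).
chunks = [data[i:i + 512] for i in range(0, len(data), 512)]
comp = zlib.compressobj()
streamed = b"".join(
    comp.compress(c) + comp.flush(zlib.Z_SYNC_FLUSH) for c in chunks
) + comp.flush()

# Both decode to the same payload...
assert zlib.decompress(one_shot) == data
assert zlib.decompress(streamed) == data
# ...but the on-the-wire framing differs.
assert one_shot != streamed
```

The same payload round-trips through either stream, yet a reader that expects one framing cannot consume blocks written with the other, mirroring the HBase/Hadoop zstd situation.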


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. I looked at the zstd-jni code and indeed it uses 
ZSTD_compressStream, ZSTD_finishStream, ZSTD_decompressStream in the same way 
as the Hadoop native codec when on the java side you use those stream classes. 
It would amount to implementing a new HBase codec. Call it ZstdStreamCodec 
maybe.

I could try that and see what happens, if the implementations could be read and 
write compatible?

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the 

[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:36 PM:
---

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. I looked at the zstd-jni code and indeed it uses 
ZSTD_compressStream, ZSTD_finishStream, ZSTD_decompressStream in the same way 
as the Hadoop native codec when on the java side you use those stream classes. 
It would amount to implementing a new HBase codec. Call it ZstdStreamCodec 
maybe.

I could try that and see what happens, if the implementations could be read and 
write compatible?


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. I looked at the zstd-jni code and indeed it uses 
ZSTD_compressStream, ZSTD_finishStream, ZSTD_decompressStream in the same way 
as the Hadoop native codec. It would amount to implementing a new HBase codec. 
Call it ZstdStreamCodec maybe.

I could try that and see what happens, if the implementations could be read and 
write compatible?

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation 

[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:35 PM:
---

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. I looked at the zstd-jni code and indeed it uses 
ZSTD_compressStream, ZSTD_finishStream, ZSTD_decompressStream in the same way 
as the Hadoop native codec. It would amount to implementing a new HBase codec. 
Call it ZstdStreamCodec maybe.

I could try that and see what happens, if the implementations could be read and 
write compatible?


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec. Call it 
ZstdStreamCodec maybe. 

I could try that and see what happens, if the implementations could be read and 
write compatible?

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> 

[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:32 PM:
---

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec. Call it 
ZstdStreamCodec maybe. 

I could try that and see what happens, to find out whether the two 
implementations could be read and write compatible.
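
The one-shot versus streaming framing difference described above can be 
sketched with Python's stdlib `zlib` standing in for zstandard (an 
illustration only, not HBase or Hadoop code: the variable names are made up, 
and zstd's actual frame format differs, but the incompatibility mechanism is 
the same — independent per-block frames versus one long-lived compression 
context whose later output depends on earlier state):

```python
import zlib

# Two logical "blocks", as a BlockCompressionStream would frame them.
blocks = [b"block-one " * 100, b"block-two " * 100]

# One-shot mode (analogous to the HBase codec described above):
# each block becomes an independent, complete compressed frame.
one_shot = [zlib.compress(b) for b in blocks]

# Streaming mode (analogous to the Hadoop native codec described above):
# all blocks flow through a single long-lived compression context.
ctx = zlib.compressobj()
streamed = [ctx.compress(b) + ctx.flush(zlib.Z_SYNC_FLUSH) for b in blocks]
tail = ctx.flush()  # finalize the single stream (Z_FINISH)

# Every one-shot frame decompresses on its own.
for frame, block in zip(one_shot, blocks):
    assert zlib.decompress(frame) == block

# A streamed chunk taken in isolation is not a self-contained frame...
try:
    zlib.decompress(streamed[1])
    standalone = True
except zlib.error:
    standalone = False
print("standalone streamed chunk decodes:", standalone)

# ...only the full concatenated stream decodes, and its bytes differ
# from the one-shot framing, which is why the two are not interchangeable.
assert zlib.decompress(b"".join(streamed) + tail) == b"".join(blocks)
```

A decoder expecting one framing and handed the other fails in exactly the way 
the CorruptHFileException in the report suggests: the per-block payloads are 
simply not the frames it expects.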


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec.

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> NAMESPACE:TABLE,,1673888962751.cdb726dad4eaabf765969f195e91c737., will report 
> to master
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> 

[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:29 PM:
---

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

One of the reasons we do that is aircompressor's limited zstandard support only 
operates in one-shot mode and does not offer a streaming API compatible with 
the native C zstandard library.

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec.


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec.

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> NAMESPACE:TABLE,,1673888962751.cdb726dad4eaabf765969f195e91c737., will report 
> to master
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1148)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1091)
> at 
> 

[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:26 PM:
---

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the Zstandard one shot API.
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|ZSTD_compress2 (via zstd-jni's Zstd#compress())|

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec.


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the zstd-jni one shot API here: 
[https://github.com/luben/zstd-jni/blob/master/src/main/java/com/github/luben/zstd/Zstd.java]
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|Zstd.compress|

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec.

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> NAMESPACE:TABLE,,1673888962751.cdb726dad4eaabf765969f195e91c737., will report 
> to master
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1148)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1091)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:994)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:941)
> at 
> 

[jira] [Comment Edited] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell edited comment on HBASE-27706 at 3/16/23 10:22 PM:
---

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the zstd-jni one shot API here: 
[https://github.com/luben/zstd-jni/blob/master/src/main/java/com/github/luben/zstd/Zstd.java]
||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|Zstd.compress|

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.
||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible.

In theory a new HBase native zstandard codec could do the same sequence of 
operations as Hadoop's native one by using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. I am not sure the behavior would be 100% identical but maybe 
compatible enough. It would amount to implementing a new codec.


was (Author: apurtell):
HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the zstd-jni one shot API here: 
https://github.com/luben/zstd-jni/blob/master/src/main/java/com/github/luben/zstd/Zstd.java

||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|Zstd.compress|

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.

||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible. 

In theory a new HBase native zstandard codec could do the same using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. It would amount to implementing a new codec. 

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> NAMESPACE:TABLE,,1673888962751.cdb726dad4eaabf765969f195e91c737., will report 
> to master
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1148)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1091)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:994)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:941)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7228)
> 

[jira] [Commented] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701410#comment-17701410
 ] 

Andrew Kyle Purtell commented on HBASE-27706:
-

HBase native codecs all work by appending to a buffer and then compressing the 
whole buffer in one shot when the BlockCompressionStream calls the codec 
finish() method. This is how lz4 and snappy and some other codecs work both in 
Hadoop native and HBase implementations, and it was extended to all cases for 
simplicity. We use the zstd-jni one shot API here: 
https://github.com/luben/zstd-jni/blob/master/src/main/java/com/github/luben/zstd/Zstd.java

||BlockCompressionStream calls this||Codec does this||
|compress|buffer|
|compress|buffer|
|compress|buffer|
|finish|Zstd.compress|

The hadoop native zstd codec uses the zstandard C library's streaming API, 
ZSTD_compressStream and ZSTD_decompressStream, basically wrapping the hierarchy 
of compression related Java stream APIs around this underlying native stream 
API.

||BlockCompressionStream calls this||Codec does this||
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|compress|ZSTD_compressStream|
|finish|ZSTD_finishStream|

So what you get framed in the blocks of BlockCompressionStream is different 
enough to not be compatible. 

In theory a new HBase native zstandard codec could do the same using zstd-jni's 
ZstdDirectBufferCompressingStream and ZstdDirectBufferDecompressingStream 
instead. It would amount to implementing a new codec. 

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> NAMESPACE:TABLE,,1673888962751.cdb726dad4eaabf765969f195e91c737., will report 
> to master
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1148)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1091)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:994)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:941)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7228)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7183)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7159)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7118)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7074)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:147)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:100)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.StoreEngine.openStoreFiles(StoreEngine.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreEngine.initialize(StoreEngine.java:338)
> at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6359)
> at 
> 

[GitHub] [hbase] Apache-HBase commented on pull request #5081: HBASE-27684: add client metrics related to user region lock.

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5081:
URL: https://github.com/apache/hbase/pull/5081#issuecomment-1472808998

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   3m 10s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 44s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 19s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 29s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 20s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5081 |
   | JIRA Issue | HBASE-27684 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 591ab09f33b3 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d136c6d7c5 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5081/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] vli02 commented on a diff in pull request #5081: HBASE-27684: add client metrics related to user region lock.

2023-03-16 Thread via GitHub


vli02 commented on code in PR #5081:
URL: https://github.com/apache/hbase/pull/5081#discussion_r1139363918


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetricsConnection.java:
##
@@ -443,6 +447,15 @@ protected Ratio getRatio() {
 this.nsLookups = registry.counter(name(this.getClass(), NS_LOOKUPS, 
scope));
 this.nsLookupsFailed = registry.counter(name(this.getClass(), 
NS_LOOKUPS_FAILED, scope));
 
+this.userRegionLockTimeoutCount =
+  registry.counter(name(this.getClass(), "userRegionLockTimeoutCount", 
scope));
+this.userRegionLockWaitingTimer =
+  registry.timer(name(this.getClass(), "userRegionLockWaitingDurationMs", 
scope));

Review Comment:
   Sounds good to me. I have just updated the PR by removing the `Ms` suffix 
from the two new metrics I am adding. Thanks!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] apurtell commented on a diff in pull request #5081: HBASE-27684: add client metrics related to user region lock.

2023-03-16 Thread via GitHub


apurtell commented on code in PR #5081:
URL: https://github.com/apache/hbase/pull/5081#discussion_r1139343792


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetricsConnection.java:
##
@@ -443,6 +447,15 @@ protected Ratio getRatio() {
 this.nsLookups = registry.counter(name(this.getClass(), NS_LOOKUPS, 
scope));
 this.nsLookupsFailed = registry.counter(name(this.getClass(), 
NS_LOOKUPS_FAILED, scope));
 
+this.userRegionLockTimeoutCount =
+  registry.counter(name(this.getClass(), "userRegionLockTimeoutCount", 
scope));
+this.userRegionLockWaitingTimer =
+  registry.timer(name(this.getClass(), "userRegionLockWaitingDurationMs", 
scope));

Review Comment:
   While there is no universal practice or guideline here, I don't think we 
should encode the time unit into the metric name. The unit for the metric can 
and should be documented, and once in place, per operational compatibility 
guidelines, it won't change, making the unit in the name redundant.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27652) Client-side lock contention around Configuration when using read replica regions

2023-03-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701369#comment-17701369
 ] 

Hudson commented on HBASE-27652:


Results for branch branch-2.5
[build #318 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/318/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/318/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/318/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/318/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/318/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Client-side lock contention around Configuration when using read replica 
> regions
> 
>
> Key: HBASE-27652
> URL: https://issues.apache.org/jira/browse/HBASE-27652
> Project: HBase
>  Issue Type: Bug
>  Components: Client, read replicas
>Affects Versions: 2.5.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.6.0, 2.5.4
>
> Attachments: HBASE-27652 flamegraph snippet.png
>
>
> Since upgrading to 2.5.1 our client-side application has noticed lock 
> contention.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5109:
URL: https://github.com/apache/hbase/pull/5109#issuecomment-1472680495

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  master passed  |
   | +1 :green_heart: |  compile  |   3m  1s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 42s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 10s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  2s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 12s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 40s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 20s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  42m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5109 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 54684b950407 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 58cb1f4799 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 81 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5109/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] NihalJain commented on a diff in pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


NihalJain commented on code in PR #5109:
URL: https://github.com/apache/hbase/pull/5109#discussion_r1139295396


##
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java:
##
@@ -465,4 +465,18 @@ public ListMultimap run() throws 
Exception {
 return snapshot.toBuilder()
   
.setUsersAndPermissions(ShadedAccessControlUtil.toUserTablePermissions(perms)).build();
   }
+
+  /**
+   * Method to check whether TTL has expired for specified snapshot creation 
time and snapshot ttl.
+   * NOTE: For backward compatibility (after the patch deployment on HMaster), 
any snapshot with ttl
+   * 0 is to be considered as snapshot to keep FOREVER. Default ttl value 
specified by
+   * {@link HConstants.DEFAULT_SNAPSHOT_TTL}

Review Comment:
   Ah my bad, let me fix






[jira] [Updated] (HBASE-27724) [HBCK2] addFsRegionsMissingInMeta command should support dumping region list into a file which can be passed as input to assigns command

2023-03-16 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-27724:
---
Description: 
_addFsRegionsMissingInMeta_ command currently outputs a command as last line of 
output which needs to be run with hbck2
{code:java}
assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
90e3414947f9500ec01f6672103f29d0{code}
This is good, but the user has to copy and format the command, which can get 
really big depending on how many regions need to be assigned.

_addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate dumping 
the region list into a file, which can then be passed as input to the _assigns_ 
command via the -i parameter.

Sample expected use-case:
{code:java}
# Dump output of command (in a formatted manner) to file
hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -i 
table_list.txt -f regions_to_assign.txt

# Pass file as input to assigns
hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
 

  was:
_addFsRegionsMissingInMeta_ command currently outputs a command as last line of 
output which needs to be run with hbck2
{code:java}
assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
90e3414947f9500ec01f6672103f29d0{code}
This is good, but the user has to copy and format the command, which can get 
really big depending on how many regions need to be assigned.

_addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate dumping 
the region list into a file, which can then be passed as input to the _assigns_ 
command via the -i parameter.

Sample expected use-case:
{code:java}
# Dump output of command (in a formatted manner) to file
hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -f 
regions_to_assign.txt

# Pass file as input to assigns
hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
 


> [HBCK2]  addFsRegionsMissingInMeta command should support dumping region list 
> into a file which can be passed as input to assigns command
> -
>
> Key: HBASE-27724
> URL: https://issues.apache.org/jira/browse/HBASE-27724
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase-operator-tools, hbck2
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>
> _addFsRegionsMissingInMeta_ command currently outputs a command as last line 
> of output which needs to be run with hbck2
> {code:java}
> assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
> 5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
> 7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
> 90e3414947f9500ec01f6672103f29d0{code}
> This is good, but the user has to copy and format the command, which can get 
> really big depending on how many regions need to be assigned.
> _addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate 
> dumping the region list into a file, which can then be passed as input to the 
> _assigns_ command via the -i parameter.
> Sample expected use-case:
> {code:java}
> # Dump output of command (in a formatted manner) to file
> hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -i 
> table_list.txt -f regions_to_assign.txt
> # Pass file as input to assigns
> hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
>  



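Until such a flag exists, the copy-and-format step described in the ticket can be scripted. A hedged sketch, using the region ids from the example above; the file name `regions_to_assign.txt` is only illustrative:

```shell
# Turn the single-line `assigns <id> <id> ...` output into a one-region-per-line
# file suitable for `assigns -i`. Word-splitting on the unquoted variable puts
# each token on its own line; tail drops the leading "assigns" word.
assigns_line='assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1'
printf '%s\n' $assigns_line | tail -n +2 > regions_to_assign.txt
cat regions_to_assign.txt
```

The resulting file can then be fed to `hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt` as shown in the ticket.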


[jira] [Created] (HBASE-27724) [HBCK2] addFsRegionsMissingInMeta command should support dumping region list into a file which can be passed as input to assigns command

2023-03-16 Thread Nihal Jain (Jira)
Nihal Jain created HBASE-27724:
--

 Summary: [HBCK2]  addFsRegionsMissingInMeta command should support 
dumping region list into a file which can be passed as input to assigns command
 Key: HBASE-27724
 URL: https://issues.apache.org/jira/browse/HBASE-27724
 Project: HBase
  Issue Type: Improvement
  Components: hbase-operator-tools, hbck2
Reporter: Nihal Jain
Assignee: Nihal Jain


_addFsRegionsMissingInMeta_ command currently outputs a command as last line of 
output which needs to be run with hbck2
{code:java}
assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
90e3414947f9500ec01f6672103f29d0{code}
This is good, but the user has to copy and format the command, which can get 
really big depending on how many regions need to be assigned.

_addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate dumping 
the region list into a file, which can then be passed as input to the _assigns_ 
command via the -i parameter.

Sample expected use-case:

 
{code:java}
# Dump output of command (in a formatted manner) to file
hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -f 
regions_to_assign.txt

# Pass file as input to assigns
hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
 





[jira] [Updated] (HBASE-27724) [HBCK2] addFsRegionsMissingInMeta command should support dumping region list into a file which can be passed as input to assigns command

2023-03-16 Thread Nihal Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-27724:
---
Description: 
_addFsRegionsMissingInMeta_ command currently outputs a command as last line of 
output which needs to be run with hbck2
{code:java}
assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
90e3414947f9500ec01f6672103f29d0{code}
This is good, but the user has to copy and format the command, which can get 
really big depending on how many regions need to be assigned.

_addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate dumping 
the region list into a file, which can then be passed as input to the _assigns_ 
command via the -i parameter.

Sample expected use-case:
{code:java}
# Dump output of command (in a formatted manner) to file
hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -f 
regions_to_assign.txt

# Pass file as input to assigns
hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
 

  was:
_addFsRegionsMissingInMeta_ command currently outputs a command as last line of 
output which needs to be run with hbck2
{code:java}
assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
90e3414947f9500ec01f6672103f29d0{code}
This is good, but the user has to copy and format the command, which can get 
really big depending on how many regions need to be assigned.

_addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate dumping 
the region list into a file, which can then be passed as input to the _assigns_ 
command via the -i parameter.

Sample expected use-case:

 
{code:java}
# Dump output of command (in a formatted manner) to file
hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -f 
regions_to_assign.txt

# Pass file as input to assigns
hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
 


> [HBCK2]  addFsRegionsMissingInMeta command should support dumping region list 
> into a file which can be passed as input to assigns command
> -
>
> Key: HBASE-27724
> URL: https://issues.apache.org/jira/browse/HBASE-27724
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase-operator-tools, hbck2
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>
> _addFsRegionsMissingInMeta_ command currently outputs a command as last line 
> of output which needs to be run with hbck2
> {code:java}
> assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 
> 5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 
> 7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 
> 90e3414947f9500ec01f6672103f29d0{code}
> This is good, but the user has to copy and format the command, which can get 
> really big depending on how many regions need to be assigned.
> _addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate 
> dumping the region list into a file, which can then be passed as input to the 
> _assigns_ command via the -i parameter.
> Sample expected use-case:
> {code:java}
> # Dump output of command (in a formatted manner) to file
> hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -f 
> regions_to_assign.txt
> # Pass file as input to assigns
> hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code}
>  





[GitHub] [hbase] virajjasani commented on a diff in pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


virajjasani commented on code in PR #5109:
URL: https://github.com/apache/hbase/pull/5109#discussion_r1139276929


##
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java:
##
@@ -465,4 +465,18 @@ public ListMultimap run() throws 
Exception {
 return snapshot.toBuilder()
   
.setUsersAndPermissions(ShadedAccessControlUtil.toUserTablePermissions(perms)).build();
   }
+
+  /**
+   * Method to check whether TTL has expired for specified snapshot creation 
time and snapshot ttl.
+   * NOTE: For backward compatibility (after the patch deployment on HMaster), 
any snapshot with ttl
+   * 0 is to be considered as snapshot to keep FOREVER. Default ttl value 
specified by
+   * {@link HConstants.DEFAULT_SNAPSHOT_TTL}

Review Comment:
   I doubt if javadoc might recognize it, I think only `{@link 
HConstants#DEFAULT_SNAPSHOT_TTL}` will work mostly



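For readers following the review thread: standard javadoc separates a class from its member with `#`, so the form the reviewers converge on is the documented one. A minimal, non-authoritative illustration (class and field names taken from the discussion):

```java
/**
 * Resolves: '#' marks DEFAULT_SNAPSHOT_TTL as a member of HConstants.
 * {@link org.apache.hadoop.hbase.HConstants#DEFAULT_SNAPSHOT_TTL}
 *
 * Does not resolve as a member link: javadoc reads this as a nested
 * class named DEFAULT_SNAPSHOT_TTL inside HConstants.
 * {@link HConstants.DEFAULT_SNAPSHOT_TTL}
 */
```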



[GitHub] [hbase] NihalJain commented on a diff in pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


NihalJain commented on code in PR #5109:
URL: https://github.com/apache/hbase/pull/5109#discussion_r1139269596


##
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotWithTTLFromClient.java:
##
@@ -0,0 +1,238 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
+import org.apache.hadoop.hbase.snapshot.SnapshotTTLExpiredException;
+import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Test restore/clone snapshots with TTL from the client
+ */
+@Category({ LargeTests.class, ClientTests.class })
+public class TestSnapshotWithTTLFromClient {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+HBaseClassTestRule.forClass(TestSnapshotWithTTLFromClient.class);
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(TestSnapshotWithTTLFromClient.class);
+
+  private static final HBaseTestingUtil UTIL = new HBaseTestingUtil();
+  private static final int NUM_RS = 2;
+  private static final String STRING_TABLE_NAME = "test";
+  private static final byte[] TEST_FAM = Bytes.toBytes("fam");
+  private static final TableName TABLE_NAME = 
TableName.valueOf(STRING_TABLE_NAME);
+  private static final TableName CLONED_TABLE_NAME = 
TableName.valueOf("clonedTable");
+  private static final String TTL_KEY = "TTL";
+  private static final int CHORE_INTERVAL_SECS = 30;
+
+  /**
+   * Setup the config for the cluster
+   * @throws Exception on failure
+   */
+  @BeforeClass
+  public static void setupCluster() throws Exception {
+setupConf(UTIL.getConfiguration());
+UTIL.startMiniCluster(NUM_RS);
+  }
+
+  protected static void setupConf(Configuration conf) {
+// Enable snapshot
+conf.setBoolean(SnapshotManager.HBASE_SNAPSHOT_ENABLED, true);
+
+// Set this to high value so that cleaner chore is not triggered
+conf.setInt("hbase.master.cleaner.snapshot.interval", CHORE_INTERVAL_SECS 
* 60 * 1000);
+  }
+
+  @Before
+  public void setup() throws Exception {
+createTable();
+  }
+
+  protected void createTable() throws Exception {
+UTIL.createTable(TABLE_NAME, new byte[][] { TEST_FAM });
+  }
+
+  @After
+  public void tearDown() throws Exception {
+UTIL.deleteTableIfAny(TABLE_NAME);
+UTIL.deleteTableIfAny(CLONED_TABLE_NAME);
+SnapshotTestingUtils.deleteAllSnapshots(UTIL.getAdmin());
+SnapshotTestingUtils.deleteArchiveDirectory(UTIL);
+  }
+
+  @AfterClass
+  public static void cleanupTest() throws Exception {
+try {
+  UTIL.shutdownMiniCluster();
+} catch (Exception e) {
+  LOG.warn("failure shutting down cluster", e);
+}
+  }
+
+  @Test
+  public void testRestoreSnapshotWithTTLSuccess() throws Exception {
+String snapshotName = "nonExpiredTTLRestoreSnapshotTest";
+
+// table should exist
+assertEquals(true, UTIL.getAdmin().tableExists(TABLE_NAME));
+
+// create snapshot for given table with specified ttl
+createSnapshotWithTTL(TABLE_NAME, snapshotName, CHORE_INTERVAL_SECS * 2);
+Admin admin = UTIL.getAdmin();
+
+// Disable and drop table
+admin.disableTable(TABLE_NAME);
+admin.deleteTable(TABLE_NAME);
+assertEquals(false, UTIL.getAdmin().tableExists(TABLE_NAME));
+
+// restore snapshot
+

[jira] [Commented] (HBASE-27715) Refactoring the long tryAdvanceEntry method in WALEntryStream

2023-03-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701346#comment-17701346
 ] 

Hudson commented on HBASE-27715:


Results for branch master
[build #797 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Refactoring the long tryAdvanceEntry method in WALEntryStream
> -
>
> Key: HBASE-27715
> URL: https://issues.apache.org/jira/browse/HBASE-27715
> Project: HBase
>  Issue Type: Task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4
>
>
> Let's make it more readable and add more logs, for debugging.





[jira] [Commented] (HBASE-27708) CPU hot-spot resolving User subject

2023-03-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701347#comment-17701347
 ] 

Hudson commented on HBASE-27708:


Results for branch master
[build #797 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/797/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CPU hot-spot resolving User subject
> ---
>
> Key: HBASE-27708
> URL: https://issues.apache.org/jira/browse/HBASE-27708
> Project: HBase
>  Issue Type: Bug
>  Components: Client, tracing
>Affects Versions: 2.5.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4
>
> Attachments: 27708.jpg
>
>
> Even with OpenTelemetry tracing disabled, we see contention related to 
> populating the string representation of the User principle on the client 
> side. Can HBase connection cache this?





[GitHub] [hbase] Apache-HBase commented on pull request #5113: Add supplement for com.aayushatharva.brotli4j:native-osx-aarch64

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5113:
URL: https://github.com/apache/hbase/pull/5113#issuecomment-1472587543

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 45s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  spotless  |   0m 42s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  11m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5113 |
   | Optional Tests | dupname asflicense javac spotless xml |
   | uname | Linux e39be7107dfd 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d136c6d7c5 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-resource-bundle U: hbase-resource-bundle |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5113: Add supplement for com.aayushatharva.brotli4j:native-osx-aarch64

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5113:
URL: https://github.com/apache/hbase/pull/5113#issuecomment-1472585156

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  branch-2 passed  |
   | +1 :green_heart: |  javadoc  |   0m  9s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 18s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m  7s |  hbase-resource-bundle in the patch 
passed.  |
   |  |   |   9m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5113 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux a9bdb4e0d13a 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 
13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d136c6d7c5 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/testReport/
 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-resource-bundle U: hbase-resource-bundle |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5113: Add supplement for com.aayushatharva.brotli4j:native-osx-aarch64

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5113:
URL: https://github.com/apache/hbase/pull/5113#issuecomment-1472583612

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 57s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 51s |  branch-2 passed  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m  9s |  hbase-resource-bundle in the patch 
passed.  |
   |  |   |   8m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5113 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux b4c2e12bb169 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d136c6d7c5 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/testReport/
 |
   | Max. process+thread count | 71 (vs. ulimit of 3) |
   | modules | C: hbase-resource-bundle U: hbase-resource-bundle |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5113/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HBASE-24762) Purge protobuf java 2.5.0 dependency

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701339#comment-17701339
 ] 

Frens Jan Rumph commented on HBASE-24762:
-

Would it still be desirable to remove the 2.5.0 dependency from the branch-2 
line? We've got some CVE (noise) on this. Would be great if we could remove it.

The dependency in hbase-protocol seems very limited (to 
{{o.a.h.h.util.ByteStringer}}) and covered by 
{{com.google.protobuf.UnsafeByteOperations}}. Or am I overlooking things 
here?

Would be happy to provide a PR.

> Purge protobuf java 2.5.0 dependency
> 
>
> Key: HBASE-24762
> URL: https://issues.apache.org/jira/browse/HBASE-24762
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, Protobufs
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> On the master branch, we have removed the hbase-protocol module, so in general we 
> do not need to depend on protobuf 2.5.0 directly. Especially since hadoop 3.3.0 
> will not depend on 2.5.0 any more, we should make sure hbase does not 
> introduce protobuf 2.5.0 either.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27711) Regions permanently stuck in unknown_server state

2023-03-16 Thread Aaron Beitch (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701337#comment-17701337
 ] 

Aaron Beitch commented on HBASE-27711:
--

{quote}So you do the restarting work in a script. How do you decide whether 
your whole software stack recovers? Checking the connection to all of the 
service ports or?
{quote}
 
That's a good approximation of what it is doing. Each service can implement its 
own health check action.

We have just discovered that the health check we have for HBase itself may be 
insufficient. Our intention for our testing is for all the region servers and 
master processes to be in a good state before we restart the next node, but 
that might not always be the case. We are checking on this now.

> Regions permanently stuck in unknown_server state
> -
>
> Key: HBASE-27711
> URL: https://issues.apache.org/jira/browse/HBASE-27711
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.4.11
> Environment: HBase: 2.4.11
> Hadoop: 3.2.4
> ZooKeeper: 3.7.1
>Reporter: Aaron Beitch
>Priority: Major
> Attachments: config.txt
>
>
> We see this log message and the regions listed are never put back into 
> service without manual intervention:
> {code:java}
> NodeC hbasemaster-0 hbasemaster 2023-02-15 14:15:56,149 WARN  
> [master/NodeC:16000.Chore.1] janitor.CatalogJanitor: 
> unknown_server=NodeA,16201,1676468874221/__test-table_NodeA__,,1672786676251.a3cac9159205d7611c85dd5c4feeded7.,
>  
> unknown_server=NodeA,16201,1676468874221/__test-table_NodeB__,,1672786676579.50e948f0a5bc962aabfe27e9ea4227a5.,
>  
> unknown_server=NodeA,16201,1676468874221/aeris_v2,,1672786736251.6ab0292cca294784bce8415cc69c30d4.,
>  
> unknown_server=NodeA,16201,1676468874221/aeris_v2,\x06,1672786736251.15d958805892370907a47f31a6e08db1.,
>  
> unknown_server=NodeA,16201,1676468874221/aeris_v2,\x12,1672786736251.ac3c78ff6903f52d9e2acf80b8436085.{code}
>  
> Normally when we see these unknown_server logs, they do get resolved by 
> reassigning the regions, however we have a reproducible case where this 
> doesn't happen.
> When this occurs we also see the following log messages related to the 
> regions:
> {code:java}
> NodeC hbasemaster-0 hbasemaster 2023-02-15 14:10:59,810 WARN  
> [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16000] 
> assignment.AssignmentManager: Reporting NodeC,16201,1676469549542 server does 
> not match state=OPEN, location=NodeA,16201,1676468874221, table=aeris_v2, 
> region=6ab0292cca294784bce8415cc69c30d4 (time since last update=3749ms); 
> closing…
> NodeC hbasemaster-0 hbasemaster 2023-02-15 14:11:00,323 WARN  
> [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16000] 
> assignment.AssignmentManager: No matching procedure found for 
> C,16201,1676469549542 transition on state=OPEN, 
> location=NodeA,16201,1676468874221, table=aeris_v2, 
> region=6ab0292cca294784bce8415cc69c30d4 to CLOSED
> {code}
>  
> This suggests that the master has a different mapping of region to region 
> server than is expected so it closes the region. We would expect that the 
> regions get assigned somewhere else and then reopened, but we are not seeing 
> that.
> This log message comes from here: 
> [https://github.com/apache/hbase/blob/branch-2.4/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java#L1292]
> The next thing that is done is calling AssignmentManager's 
> closeRegionServerSilently method.
> Our setup:
> We have a three server cluster that runs a full HBASE stack: 3 zookeeper 
> nodes, an HBASE master active and standby, 3 region servers, 3 HDFS data 
> nodes. For reliability testing we are running a script that will restart one 
> of the three servers, which will have running on it a region server, 
> zookeeper and HDFS process, and possibly also the HBASE master primary or 
> standby.
> In this test we saw the issue after NodeB had been killed at 14:08:33, which 
> had been running the active master, so the master did switchover to NodeC. 
> Then at 14:12:56 we saw a "STUCK Region-In-Transition" log for a region on 
> NodeA (this is another common reproducible issue we plan to open a ticket 
> for) and then restarted just the region server process on NodeA to get that 
> region reassigned.





[jira] [Commented] (HBASE-27723) Fix brotli4j licence issue on native-osx-aarch64

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701334#comment-17701334
 ] 

Frens Jan Rumph commented on HBASE-27723:
-

I've created a PR for branch-2: [https://github.com/apache/hbase/pull/5113]. 
However, I'm not completely sure yet how you deal with the multiple version 
branches. Could you get me going with this?

> Fix brotli4j licence issue on native-osx-aarch64
> 
>
> Key: HBASE-27723
> URL: https://issues.apache.org/jira/browse/HBASE-27723
> Project: HBase
>  Issue Type: Improvement
>Reporter: Frens Jan Rumph
>Priority: Major
>
> Apparently the licence metadata of {{brotli4j}} is malformed and is fixed up in 
> {{supplemental-models.xml}}. However, that fix didn't cover 
> {{native-osx-aarch64}} yet.





[GitHub] [hbase] frensjan opened a new pull request, #5113: Add supplement for com.aayushatharva.brotli4j:native-osx-aarch64

2023-03-16 Thread via GitHub


frensjan opened a new pull request, #5113:
URL: https://github.com/apache/hbase/pull/5113

   Add supplement for com.aayushatharva.brotli4j:native-osx-aarch64 licence 
information. https://issues.apache.org/jira/browse/HBASE-27723





[jira] [Created] (HBASE-27723) Fix brotli4j licence issue on native-osx-aarch64

2023-03-16 Thread Frens Jan Rumph (Jira)
Frens Jan Rumph created HBASE-27723:
---

 Summary: Fix brotli4j licence issue on native-osx-aarch64
 Key: HBASE-27723
 URL: https://issues.apache.org/jira/browse/HBASE-27723
 Project: HBase
  Issue Type: Improvement
Reporter: Frens Jan Rumph


Apparently the licence metadata of {{brotli4j}} is malformed and is fixed up in 
{{supplemental-models.xml}}. However, that fix didn't cover {{native-osx-aarch64}} 
yet.





[GitHub] [hbase] virajjasani commented on a diff in pull request #5109: HBASE-27671 Client should not be able to restore/clone a snapshot aft…

2023-03-16 Thread via GitHub


virajjasani commented on code in PR #5109:
URL: https://github.com/apache/hbase/pull/5109#discussion_r1139156610


##
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotWithTTLFromClient.java:
##
@@ -0,0 +1,238 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
+import org.apache.hadoop.hbase.snapshot.SnapshotTTLExpiredException;
+import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Test restore/clone snapshots with TTL from the client
+ */
+@Category({ LargeTests.class, ClientTests.class })
+public class TestSnapshotWithTTLFromClient {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+HBaseClassTestRule.forClass(TestSnapshotWithTTLFromClient.class);
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(TestSnapshotWithTTLFromClient.class);
+
+  private static final HBaseTestingUtil UTIL = new HBaseTestingUtil();
+  private static final int NUM_RS = 2;
+  private static final String STRING_TABLE_NAME = "test";
+  private static final byte[] TEST_FAM = Bytes.toBytes("fam");
+  private static final TableName TABLE_NAME = 
TableName.valueOf(STRING_TABLE_NAME);
+  private static final TableName CLONED_TABLE_NAME = 
TableName.valueOf("clonedTable");
+  private static final String TTL_KEY = "TTL";
+  private static final int CHORE_INTERVAL_SECS = 30;
+
+  /**
+   * Setup the config for the cluster
+   * @throws Exception on failure
+   */
+  @BeforeClass
+  public static void setupCluster() throws Exception {
+setupConf(UTIL.getConfiguration());
+UTIL.startMiniCluster(NUM_RS);
+  }
+
+  protected static void setupConf(Configuration conf) {
+// Enable snapshot
+conf.setBoolean(SnapshotManager.HBASE_SNAPSHOT_ENABLED, true);
+
+// Set this to high value so that cleaner chore is not triggered
+conf.setInt("hbase.master.cleaner.snapshot.interval", CHORE_INTERVAL_SECS 
* 60 * 1000);
+  }
+
+  @Before
+  public void setup() throws Exception {
+createTable();
+  }
+
+  protected void createTable() throws Exception {
+UTIL.createTable(TABLE_NAME, new byte[][] { TEST_FAM });
+  }
+
+  @After
+  public void tearDown() throws Exception {
+UTIL.deleteTableIfAny(TABLE_NAME);
+UTIL.deleteTableIfAny(CLONED_TABLE_NAME);
+SnapshotTestingUtils.deleteAllSnapshots(UTIL.getAdmin());
+SnapshotTestingUtils.deleteArchiveDirectory(UTIL);
+  }
+
+  @AfterClass
+  public static void cleanupTest() throws Exception {
+try {
+  UTIL.shutdownMiniCluster();
+} catch (Exception e) {
+  LOG.warn("failure shutting down cluster", e);
+}
+  }
+
+  @Test
+  public void testRestoreSnapshotWithTTLSuccess() throws Exception {
+String snapshotName = "nonExpiredTTLRestoreSnapshotTest";
+
+// table should exist
+assertEquals(true, UTIL.getAdmin().tableExists(TABLE_NAME));

Review Comment:
   nit: can be simplified with 
`assertTrue(UTIL.getAdmin().tableExists(TABLE_NAME))`



##
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java:
##
@@ -465,4 +465,18 @@ public ListMultimap run() throws 
Exception {
 return snapshot.toBuilder()
   

[GitHub] [hbase] Apache-HBase commented on pull request #5080: HBASE-27686: Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5080:
URL: https://github.com/apache/hbase/pull/5080#issuecomment-147234

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  2s |  https://github.com/apache/hbase/pull/5080 
does not apply to master. Rebase required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hbase/pull/5080 |
   | JIRA Issue | HBASE-27686 |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/6/console 
|
   | versions | git=2.25.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5080: HBASE-27686: Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5080:
URL: https://github.com/apache/hbase/pull/5080#issuecomment-1472332492

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  3s |  https://github.com/apache/hbase/pull/5080 
does not apply to master. Rebase required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hbase/pull/5080 |
   | JIRA Issue | HBASE-27686 |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/6/console 
|
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5080: HBASE-27686: Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5080:
URL: https://github.com/apache/hbase/pull/5080#issuecomment-1472332351

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  3s |  https://github.com/apache/hbase/pull/5080 
does not apply to master. Rebase required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hbase/pull/5080 |
   | JIRA Issue | HBASE-27686 |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/6/console 
|
   | versions | git=2.25.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] bbeaudreault commented on pull request #5104: HBASE-27710 ByteBuff ref counting is too expensive for on-heap buffers

2023-03-16 Thread via GitHub


bbeaudreault commented on PR #5104:
URL: https://github.com/apache/hbase/pull/5104#issuecomment-1472279584

   I downloaded `jol-core` and ran ClassLayout on RefCnt... On my platform, 24 
bytes without the boolean, 32 bytes with. Not insubstantial. Despite a boolean 
being just 1 byte, we lose 3 bytes on internal alignment and then another 4 
bytes on external/class alignment.
   
   So it's effectively like adding a long... I guess most ByteBufferAllocators 
are configured in the 10s of thousands, so not a huge issue there. RefCnt is 
also used in BucketCache, where I imagine this will only matter for very large 
bucket cache sizes? We give 75 GB to bucket cache in some cases, which equals 
2-5M blocks. That'd be 40 MB of space for us, which might be worth the 
performance tradeoff. If someone uses TB of file cache (e.g. when using an 
object store like S3 for main storage), then it might be a lot more.
   
   This solution is equivalent in performance to my original memory-free 
solution for on-heap. The potential benefit is for off-heap, which I don't have 
performance numbers on. I might be inclined to keep my existing solution for 
now, until we can find any evidence for a regression in the off-heap case.
   
   Let me know if that changes your opinion at all before I merge this as-is.
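   The alignment arithmetic above can be sketched in plain Java. This is a 
back-of-the-envelope model, not a measurement: the 24-byte baseline and the 
8-byte object alignment are assumptions matching the jol numbers quoted, and 
`RefCntSizeSketch` is a hypothetical name; use jol-core's `ClassLayout` for 
authoritative figures.
   
   ```java
   public class RefCntSizeSketch {
       // HotSpot rounds every object up to the next multiple of the
       // object alignment (8 bytes by default).
       static long alignUp(long size, long alignment) {
           return (size + alignment - 1) / alignment * alignment;
       }

       public static void main(String[] args) {
           long withoutFlag = 24;                       // assumed size of RefCnt without the flag
           long withFlag = alignUp(withoutFlag + 1, 8); // +1-byte boolean, then re-aligned
           System.out.println(withFlag);                // 32: the 1-byte boolean costs 8 bytes

           // At bucket-cache scale: ~5M blocks, 8 extra bytes each.
           long extraBytes = (withFlag - withoutFlag) * 5_000_000L;
           System.out.println(extraBytes / (1024 * 1024)); // ~38 MiB, in line with the ~40 MB estimate
       }
   }
   ```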





[jira] [Commented] (HBASE-27686) Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread Shanmukha Haripriya Kota (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701269#comment-17701269
 ] 

Shanmukha Haripriya Kota commented on HBASE-27686:
--

Thanks, [~wchevreuil] 

> Recovery of BucketCache and Prefetched data after RS Crash
> --
>
> Key: HBASE-27686
> URL: https://issues.apache.org/jira/browse/HBASE-27686
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: Shanmukha Haripriya Kota
>Assignee: Shanmukha Haripriya Kota
>Priority: Major
>
> HBASE-27313 introduced the ability to persist a list of hfiles for which 
> prefetch has already been completed, so that we can avoid prefetching those 
> files again in the event of a graceful restart. It doesn't cover crash 
> scenarios, though: if the RS is killed or abnormally stopped, the list wouldn't be 
> saved. 
> This change aims to persist the list of already prefetched files from a background 
> thread that periodically checks the cache state and persists the list if updates 
> have happened.





[jira] [Resolved] (HBASE-27686) Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-27686.
--
Resolution: Fixed

Thanks for the contribution [~sk...@cloudera.com]! I have now merged this into 
master and branch-2.

> Recovery of BucketCache and Prefetched data after RS Crash
> --
>
> Key: HBASE-27686
> URL: https://issues.apache.org/jira/browse/HBASE-27686
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: Shanmukha Haripriya Kota
>Assignee: Shanmukha Haripriya Kota
>Priority: Major
>
> HBASE-27313 introduced the ability to persist a list of hfiles for which 
> prefetch has already been completed, so that we can avoid prefetching those 
> files again in the event of a graceful restart. It doesn't cover crash 
> scenarios, though: if the RS is killed or abnormally stopped, the list wouldn't be 
> saved. 
> This change aims to persist the list of already prefetched files from a background 
> thread that periodically checks the cache state and persists the list if updates 
> have happened.





[jira] [Updated] (HBASE-27686) Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-27686:
-
Release Note: This adds a background thread into RS process, that 
periodically checks if there were updates in the bucket cache. If the bucket 
cache has been updated since the last check, it saves the bucket cache index to 
the file path defined by "hbase.bucketcache.persistent.path", as well as the 
list of completed prefetched files into the path defined by 
"hbase.prefetch.file.list.path" property. The thread is named as 
"bucket-cache-persister", and the check interval is defined by the 
"hbase.bucketcache.persist.intervalinmillis" property, and it defaults to 1000 
(1 second). This thread is only enabled if "hbase.bucketcache.persistent.path" 
is set in the configuration.
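
For reference, the properties called out in this release note would go into 
hbase-site.xml along these lines. The values below are illustrative 
placeholders, not recommendations; only the property names and the 1000 ms 
default interval come from the release note above.

{code:xml}
<!-- Setting this path enables bucket cache persistence and the
     bucket-cache-persister background thread. -->
<property>
  <name>hbase.bucketcache.persistent.path</name>
  <value>/data/hbase/bucketcache.index</value>
</property>

<!-- Where the list of already-prefetched hfiles is saved. -->
<property>
  <name>hbase.prefetch.file.list.path</name>
  <value>/data/hbase/prefetch.list</value>
</property>

<!-- Check interval for the persister thread; defaults to 1000 ms. -->
<property>
  <name>hbase.bucketcache.persist.intervalinmillis</name>
  <value>1000</value>
</property>
{code}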

> Recovery of BucketCache and Prefetched data after RS Crash
> --
>
> Key: HBASE-27686
> URL: https://issues.apache.org/jira/browse/HBASE-27686
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: Shanmukha Haripriya Kota
>Assignee: Shanmukha Haripriya Kota
>Priority: Major
>
> HBASE-27313 introduced the ability to persist a list of hfiles for which 
> prefetch has already been completed, so that we can avoid prefetching those 
> files again in the event of a graceful restart. It doesn't cover crash 
> scenarios, though: if the RS is killed or abnormally stopped, the list wouldn't be 
> saved. 
> This change aims to persist the list of already prefetched files from a background 
> thread that periodically checks the cache state and persists the list if updates 
> have happened.





[jira] [Updated] (HBASE-27686) Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-27686:
-
Release Note: This adds a background thread into the RS process, that 
periodically checks if there were updates in the bucket cache. If the bucket 
cache has been updated since the last check, it saves the bucket cache index to 
the file path defined by "hbase.bucketcache.persistent.path", as well as the 
list of completed prefetched files into the path defined by 
"hbase.prefetch.file.list.path" property. The thread is named as 
"bucket-cache-persister", and the check interval is defined by the 
"hbase.bucketcache.persist.intervalinmillis" property, and it defaults to 1000 
(1 second). This thread is only enabled if "hbase.bucketcache.persistent.path" 
is set in the configuration.  (was: This adds a background thread into RS 
process, that periodically checks if there were updates in the bucket cache. If 
the bucket cache has been updated since the last check, it saves the bucket 
cache index to the file path defined by "hbase.bucketcache.persistent.path", as 
well as the list of completed prefetched files into the path defined by 
"hbase.prefetch.file.list.path" property. The thread is named as 
"bucket-cache-persister", and the check interval is defined by the 
"hbase.bucketcache.persist.intervalinmillis" property, and it defaults to 1000 
(1 second). This thread is only enabled if "hbase.bucketcache.persistent.path" 
is set in the configuration.)

> Recovery of BucketCache and Prefetched data after RS Crash
> --
>
> Key: HBASE-27686
> URL: https://issues.apache.org/jira/browse/HBASE-27686
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: Shanmukha Haripriya Kota
>Assignee: Shanmukha Haripriya Kota
>Priority: Major
>
> HBASE-27313 introduced the ability to persist a list of hfiles for which 
> prefetch has already been completed, so that we can avoid prefetching those 
> files again in the event of a graceful restart. It doesn't cover crash 
> scenarios, though: if the RS is killed or abnormally stopped, the list wouldn't be 
> saved. 
> This change aims to persist the list of already prefetched files from a background 
> thread that periodically checks the cache state and persists the list if updates 
> have happened.





[GitHub] [hbase] Apache-HBase commented on pull request #5100: HBASE-27712 Remove unused params in region metrics

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5100:
URL: https://github.com/apache/hbase/pull/5100#issuecomment-1472130178

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 52s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 54s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 13s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 28s |  hbase-hadoop-compat in the patch 
passed.  |
   | -1 :x: |  unit  | 223m  3s |  hbase-server in the patch failed.  |
   |  |   | 246m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5100 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0b8bd93c836f 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22b0c3e2bd |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/testReport/
 |
   | Max. process+thread count | 2825 (vs. ulimit of 3) |
   | modules | C: hbase-hadoop-compat hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Created] (HBASE-27722) Update documentation about how to enable encryption on WAL

2023-03-16 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27722:
-

 Summary: Update documentation about how to enable encryption on WAL
 Key: HBASE-27722
 URL: https://issues.apache.org/jira/browse/HBASE-27722
 Project: HBase
  Issue Type: Task
  Components: documentation, wal
Reporter: Duo Zhang
Assignee: Duo Zhang


After HBASE-27632 and HBASE-27702, we removed SecureProtobufLogReader and 
SecureProtobufLogWriter, users do not need to specify the reader/writer class 
any more, they just need to enable WAL encryption.

Remove the related configurations such as 'hbase.regionserver.hlog.writer.impl' 
and 'hbase.regionserver.hlog.reader.impl' in hbase-default.xml, and also change 
ref guide to tell users how to enable WAL encryption for 2.6+.





[jira] [Updated] (HBASE-27686) Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-27686:
-
Description: 
HBASE-27313 introduced the ability to persist a list of hfiles for which 
prefetch has already been completed, so that we can avoid prefetching those 
files again in the event of a graceful restart. It doesn't cover crash 
scenarios, though: if the RS is killed or abnormally stopped, the list wouldn't be 
saved. 

This change aims to persist the list of already prefetched files from a background 
thread that periodically checks the cache state and persists the list if updates 
have happened.

> Recovery of BucketCache and Prefetched data after RS Crash
> --
>
> Key: HBASE-27686
> URL: https://issues.apache.org/jira/browse/HBASE-27686
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Reporter: Shanmukha Haripriya Kota
>Assignee: Shanmukha Haripriya Kota
>Priority: Major
>
> HBASE-27313 introduced the ability to persist a list of hfiles for which 
> prefetch has already been completed, so that we can avoid prefetching those 
> files again in the event of a graceful restart. It doesn't cover crash 
> scenarios, though: if the RS is killed or abnormally stopped, the list wouldn't be 
> saved. 
> This change aims to persist the list of already prefetched files from a background 
> thread that periodically checks the cache state and persists the list if updates 
> have happened.





[GitHub] [hbase] Apache9 commented on pull request #5104: HBASE-27710 ByteBuff ref counting is too expensive for on-heap buffers

2023-03-16 Thread via GitHub


Apache9 commented on PR #5104:
URL: https://github.com/apache/hbase/pull/5104#issuecomment-1472116040

   What you proposed is the trick in netty's CompositeByteBuf, where they 
introduce a freed flag to indicate whether the ByteBuf is still valid.
   
   And for AbstractReferenceCountedByteBuf, the code is like this
   ```java
   @Override
   boolean isAccessible() {
   // Try to do non-volatile read for performance as the 
ensureAccessible() is racy anyway and only provide
   // a best-effort guard.
   return updater.isLiveNonVolatile(this);
   }
   ```
   
   But it seems we do not have access to the updater field, so I think we could 
go with your current approach. The downside is that we will add one more 
boolean for each ByteBuff, but that should be OK?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5100: HBASE-27712 Remove unused params in region metrics

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5100:
URL: https://github.com/apache/hbase/pull/5100#issuecomment-1472111716

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 17s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 30s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 24s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 25s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 38s |  hbase-hadoop-compat in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 211m 48s |  hbase-server in the patch passed.  
|
   |  |   | 236m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5100 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 18494cf034f7 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22b0c3e2bd |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/testReport/
 |
   | Max. process+thread count | 2426 (vs. ulimit of 3) |
   | modules | C: hbase-hadoop-compat hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] bbeaudreault commented on pull request #5104: HBASE-27710 ByteBuff ref counting is too expensive for on-heap buffers

2023-03-16 Thread via GitHub


bbeaudreault commented on PR #5104:
URL: https://github.com/apache/hbase/pull/5104#issuecomment-1472088802

   @Apache9 thank you very much for the review. I had an idea this morning, and 
wonder if you have any opinion.
   
   Currently we do this:
   
   ```java
   protected void checkRefCount() {
 ObjectUtil.checkPositive(refCnt(), REFERENCE_COUNT_NAME);
   }
   ```
   
   Calling `refCnt()` goes down the expensive path of getting the real refCnt 
numeric value. 
   
   I think what we really care about is "has this buffer been recycled". In 
which case, what if we added a volatile boolean to our RefCnt class which gets 
set to true when the Recycler is called? We don't care about synchronization 
since it always goes from false to true. The above method could become:
   
   ```java
   //
   // in RefCnt.java
   //
   private volatile boolean recycled;
   
   public boolean isRecycled() {
   return recycled;
   }
   
   @Override
   protected final void deallocate() {
   this.recycler.free();
   this.recycled = true; // of note
   if (leak != null) {
 this.leak.close(this);
   }
   }
   
   //
   // In ByteBuff.java
   //
   protected void checkRefCount() {
   Preconditions.checkState(!refCnt.isRecycled(), "ByteBuff has been 
recycled");
   }
   ```
   
   Of course we'd also rename the method. I plugged this into our test case and 
it performs similarly to this PR. The benefit of this approach is it might also 
speed up off-heap usages while still providing protection.
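   The volatile-flag fast path proposed above can be illustrated with a self-contained sketch; the class and method names are illustrative only, not the actual HBase RefCnt/ByteBuff API:

   ```java
   import java.util.concurrent.atomic.AtomicInteger;

   // Sketch: instead of computing the numeric refCnt on every access, keep a
   // volatile boolean that is flipped exactly once (false -> true) when the
   // buffer is recycled. Reads of a volatile boolean are cheap, and no extra
   // synchronization is needed for a monotonic one-way flag.
   public class RecycledFlagSketch {
     private final AtomicInteger refCnt = new AtomicInteger(1);
     private volatile boolean recycled;

     public void retain() {
       refCnt.incrementAndGet();
     }

     public void release() {
       if (refCnt.decrementAndGet() == 0) {
         recycled = true; // publish "buffer returned to the pool"
       }
     }

     // Cheap guard intended for every read/write path.
     public void checkNotRecycled() {
       if (recycled) {
         throw new IllegalStateException("ByteBuff has been recycled");
       }
     }

     public static void main(String[] args) {
       RecycledFlagSketch b = new RecycledFlagSketch();
       b.checkNotRecycled(); // fine while the buffer is live
       b.release();
       try {
         b.checkNotRecycled();
       } catch (IllegalStateException e) {
         System.out.println("caught: " + e.getMessage());
       }
     }
   }
   ```

   As with the real proposal, this is a best-effort guard: a racing reader may still slip past the flag just before it is set, the same caveat the non-volatile netty check carries.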





[GitHub] [hbase] wchevreuil merged pull request #5080: HBASE-27686: Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread via GitHub


wchevreuil merged PR #5080:
URL: https://github.com/apache/hbase/pull/5080





[GitHub] [hbase] Apache-HBase commented on pull request #5096: HBASE-27702 Remove 'hbase.regionserver.hlog.writer.impl' config

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5096:
URL: https://github.com/apache/hbase/pull/5096#issuecomment-1471929325

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m 55s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  8s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 17s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 262m 46s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m  5s |  hbase-it in the patch passed.  |
   |  |   | 290m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5096 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 78111e90db15 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22b0c3e2bd |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/testReport/
 |
   | Max. process+thread count | 2614 (vs. ulimit of 3) |
   | modules | C: hbase-server hbase-it U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HBASE-27696) [hbase-operator-tools] Use $revision as placeholder for maven version

2023-03-16 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-27696:
-
Fix Version/s: hbase-operator-tools-1.3.0

> [hbase-operator-tools] Use $revision as placeholder for maven version
> -
>
> Key: HBASE-27696
> URL: https://issues.apache.org/jira/browse/HBASE-27696
> Project: HBase
>  Issue Type: Task
>  Components: build, pom
>Affects Versions: hbase-operator-tools-1.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: hbase-operator-tools-1.3.0
>
>
> To align with our main repo.





[jira] [Updated] (HBASE-27696) [hbase-operator-tools] Use $revision as placeholder for maven version

2023-03-16 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-27696:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [hbase-operator-tools] Use $revision as placeholder for maven version
> -
>
> Key: HBASE-27696
> URL: https://issues.apache.org/jira/browse/HBASE-27696
> Project: HBase
>  Issue Type: Task
>  Components: build, pom
>Affects Versions: hbase-operator-tools-1.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: hbase-operator-tools-1.3.0
>
>
> To align with our main repo.





[GitHub] [hbase-operator-tools] ndimiduk commented on pull request #112: HBASE-27696 [hbase-operator-tools] Use $revision as placeholder for maven version

2023-03-16 Thread via GitHub


ndimiduk commented on PR #112:
URL: 
https://github.com/apache/hbase-operator-tools/pull/112#issuecomment-1471916101

   Thanks for the reviews.





[GitHub] [hbase-operator-tools] ndimiduk merged pull request #112: HBASE-27696 [hbase-operator-tools] Use $revision as placeholder for maven version

2023-03-16 Thread via GitHub


ndimiduk merged PR #112:
URL: https://github.com/apache/hbase-operator-tools/pull/112





[jira] [Updated] (HBASE-27708) CPU hot-spot resolving User subject

2023-03-16 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-27708:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review.

> CPU hot-spot resolving User subject
> ---
>
> Key: HBASE-27708
> URL: https://issues.apache.org/jira/browse/HBASE-27708
> Project: HBase
>  Issue Type: Bug
>  Components: Client, tracing
>Affects Versions: 2.5.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4
>
> Attachments: 27708.jpg
>
>
> Even with OpenTelemetry tracing disabled, we see contention related to 
> populating the string representation of the User principal on the client 
> side. Can the HBase connection cache this?





[jira] [Updated] (HBASE-27708) CPU hot-spot resolving User subject

2023-03-16 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-27708:
-
Fix Version/s: 2.6.0
   2.5.4

> CPU hot-spot resolving User subject
> ---
>
> Key: HBASE-27708
> URL: https://issues.apache.org/jira/browse/HBASE-27708
> Project: HBase
>  Issue Type: Bug
>  Components: Client, tracing
>Affects Versions: 2.5.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.4
>
> Attachments: 27708.jpg
>
>
> Even with OpenTelemetry tracing disabled, we see contention related to 
> populating the string representation of the User principal on the client 
> side. Can the HBase connection cache this?





[GitHub] [hbase] ndimiduk merged pull request #5112: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2.5

2023-03-16 Thread via GitHub


ndimiduk merged PR #5112:
URL: https://github.com/apache/hbase/pull/5112





[GitHub] [hbase] ndimiduk merged pull request #5111: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2

2023-03-16 Thread via GitHub


ndimiduk merged PR #5111:
URL: https://github.com/apache/hbase/pull/5111





[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701154#comment-17701154
 ] 

Frens Jan Rumph commented on HBASE-26734:
-

Thanks guys for the feedback!

We're using 2.5.3-hadoop3 indeed. We'd prefer using off-the-shelf distributions 
of Hadoop and HBase. We have been on Hadoop 3.3.0 and HBase 2.2.4, but that 
situation isn't maintainable.

The way we have been using this combination is by installing both HBase 
and Hadoop from .tar.gz distributions from Apache mirrors and then configuring 
{{export HADOOP_HOME=/usr/lib/hadoop/current}} in {{hbase-env.sh}}. Then 
the {{/bin/hbase}} script changes the class path to include classes from Hadoop.

Ideally, we wouldn't mix these class paths, but
 # as we are using Zstandard compression, we seem to be blocked by 
https://issues.apache.org/jira/browse/HBASE-27706
 # there are CVEs detected in (dependencies of) Hadoop (also 3.3.4 / trunk), 
so we would like to use the latest version. That's not going to solve all our 
problems, but it's at least a step in the right direction.

Is there a particular reason for building HBase against an older Hadoop version?

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when i started the hbase cluster. HRegionServers 
> were started and HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         

[GitHub] [hbase] Apache-HBase commented on pull request #5096: HBASE-27702 Remove 'hbase.regionserver.hlog.writer.impl' config

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5096:
URL: https://github.com/apache/hbase/pull/5096#issuecomment-1471885100

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 50s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 52s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 231m 37s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 13s |  hbase-it in the patch passed.  |
   |  |   | 259m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5096 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7e590b8082dd 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 
13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22b0c3e2bd |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/testReport/
 |
   | Max. process+thread count | 2609 (vs. ulimit of 3) |
   | modules | C: hbase-server hbase-it U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5100: HBASE-27712 Remove unused params in region metrics

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5100:
URL: https://github.com/apache/hbase/pull/5100#issuecomment-1471777471

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 38s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 39s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 53s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 37s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5100 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux a5478d192864 5.4.0-1093-aws #102~18.04.2-Ubuntu SMP Wed Dec 
7 00:31:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22b0c3e2bd |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 84 (vs. ulimit of 3) |
   | modules | C: hbase-hadoop-compat hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5100/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Robin Roy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701092#comment-17701092
 ] 

Robin Roy edited comment on HBASE-26734 at 3/16/23 11:13 AM:
-

Haven't observed this issue in my setup. I have built HBase 2.4.11 with 
hadoop-3.3.4 and it seems to be working fine. So I think compiling HBase with 
the required Hadoop version is the right way to proceed.


was (Author: robin7roy):
Haven't observed this issue in my setup. I have built HBase 2.4.11 with 
hadoop-3.3.4 and it seems to be working fine. 

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when i started the hbase cluster. HRegionServers 
> were started and HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:653)
>         at 
> 

[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Robin Roy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701092#comment-17701092
 ] 

Robin Roy commented on HBASE-26734:
---

I haven't observed this issue in my setup. I built HBase 2.4.11 with 
Hadoop 3.3.4 and it works fine. 

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when I started the HBase cluster. The HRegionServers 
> started, but the HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:653)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:465)
>         at 
> 
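For readers skimming the traces above: "object is not an instance of declaring class" is the generic error `java.lang.reflect.Method.invoke` raises when the receiver object's class does not match the `Method`'s declaring class — in `ProtobufDecoder` this likely happens because Hadoop 3.3 ships a relocated protobuf whose `Message` class differs from the one resolved reflectively. A minimal, self-contained sketch (deliberately not HBase code) reproducing the same error:

```java
import java.lang.reflect.Method;

// Minimal sketch: invoking a Method on a receiver whose class does not
// match the Method's declaring class throws the same
// IllegalArgumentException seen in the stack traces above.
public class ReflectionMismatch {
    public static void main(String[] args) throws Exception {
        Method length = String.class.getMethod("length");
        try {
            // Receiver is an Integer, but the Method was declared on String.
            length.invoke(Integer.valueOf(42));
        } catch (IllegalArgumentException e) {
            // On HotSpot JDKs the message reads
            // "object is not an instance of declaring class".
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```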

[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701071#comment-17701071
 ] 

Duo Zhang commented on HBASE-26734:
---

HBase 2.5.3 is not built with Hadoop 3.3.4, so how are you using Hadoop 3.3.4 with 
HBase 2.5.3? Did you replace the jars directly? Which base binary did you use: 2.5.3 
or 2.5.3-hadoop3? Thanks.
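For context, the alternative to swapping jars is rebuilding HBase against the desired Hadoop 3 release. A command sketch, assuming the `hadoop.profile` and `hadoop-three.version` properties used by the HBase 2.x POMs (verify against the branch you build):

```shell
# Rebuild HBase against Hadoop 3.3.4 instead of the default Hadoop version.
# -Dhadoop.profile=3.0 activates the Hadoop 3 profile in the HBase 2.x POMs;
# hadoop-three.version selects the concrete Hadoop 3 release.
mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.4
```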

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when I started the HBase cluster. The HRegionServers 
> started, but the HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:653)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:465)
>        

[GitHub] [hbase] Apache-HBase commented on pull request #5080: HBASE-27686: Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5080:
URL: https://github.com/apache/hbase/pull/5080#issuecomment-1471652741

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 32s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 51s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 46s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 214m 10s |  hbase-server in the patch passed.  
|
   |  |   | 239m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5080 |
   | JIRA Issue | HBASE-27686 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a0600e238c25 5.4.0-1093-aws #102~18.04.2-Ubuntu SMP Wed Dec 
7 00:31:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / b0cfd74edd |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/5/testReport/
 |
   | Max. process+thread count | 2669 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #5080: HBASE-27686: Recovery of BucketCache and Prefetched data after RS Crash

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5080:
URL: https://github.com/apache/hbase/pull/5080#issuecomment-1471632737

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m  8s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 202m 44s |  hbase-server in the patch passed.  
|
   |  |   | 227m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5080 |
   | JIRA Issue | HBASE-27686 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c24571e69dec 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 
10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / b0cfd74edd |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/5/testReport/
 |
   | Max. process+thread count | 2469 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5080/5/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #4966: HBASE-27216 Revisit the ReplicationSyncUp tool

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #4966:
URL: https://github.com/apache/hbase/pull/4966#issuecomment-1471607120

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 19s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27109/table_based_rqs Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 37s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  compile  |   2m 13s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  HBASE-27109/table_based_rqs 
passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 53s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 53s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 35s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 40s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 26s |  hbase-replication in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 244m  8s |  hbase-server in the patch passed.  
|
   |  |   | 283m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4966 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 22afce73fe53 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 
13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27109/table_based_rqs / c1d126dd07 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/testReport/
 |
   | Max. process+thread count | 3085 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-replication 
hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/console
 |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701058#comment-17701058
 ] 

Frens Jan Rumph edited comment on HBASE-26734 at 3/16/23 9:34 AM:
--

It can be reproduced fairly easily. The setup I've created for HBASE-27706 in 
[https://github.com/frensjan/HBASE-27706] is a good starting point. If this is 
removed from hbase-site.xml, the error occurs:
{code:java}
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
{code}
I can create a separate dedicated repo to directly reproduce if need be.


was (Author: frensjan):
It can be reproduced fairly easily. The setup I've created for HBASE-27706 in 
[https://github.com/frensjan/HBASE-27706] is a good starting point. If this is 
removed from hbase-site.xml, the error occurs:
{code:java}
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
{code}

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when I started the HBase cluster. The HRegionServers 
> started, but the HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
>         at 
> 

[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701058#comment-17701058
 ] 

Frens Jan Rumph commented on HBASE-26734:
-

It can be reproduced fairly easily. The setup I've created for HBASE-27706 in 
[https://github.com/frensjan/HBASE-27706] is a good starting point. If this is 
removed from hbase-site.xml, the error occurs:
{code:java}
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
{code}

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when I started the HBase cluster. The HRegionServers 
> started, but the HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:653)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529)
>         at 
> 

[GitHub] [hbase] Apache-HBase commented on pull request #4966: HBASE-27216 Revisit the ReplicationSyncUp tool

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #4966:
URL: https://github.com/apache/hbase/pull/4966#issuecomment-1471600144

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   6m 15s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27109/table_based_rqs Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  compile  |   2m 21s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  shadedjars  |   5m 46s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  HBASE-27109/table_based_rqs 
passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 15s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 39s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 38s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 30s |  hbase-replication in the patch 
passed.  |
   | -1 :x: |  unit  | 238m  2s |  hbase-server in the patch failed.  |
   |  |   | 279m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4966 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 336e90260658 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27109/table_based_rqs / c1d126dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/testReport/
 |
   | Max. process+thread count | 5762 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-replication 
hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4966/16/console
 |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701056#comment-17701056
 ] 

Frens Jan Rumph commented on HBASE-26734:
-

I'm not 100% sure, but I think I see the same issue in the combination of HBase 
2.5.3 and Hadoop 3.3.4.

I see this appearing in the HBase master logs:
{code:java}
hbase-master-1  | 2023-03-16 09:25:28,453 ERROR [RS-EventLoopGroup-1-6] 
util.NettyFutureUtils (NettyFutureUtils.java:lambda$addListener$0(58)) - 
Unexpected error caught when processing netty
hbase-master-1  | java.lang.IllegalArgumentException: object is not an instance 
of declaring class
hbase-master-1  |     at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
hbase-master-1  |     at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
hbase-master-1  |     at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
hbase-master-1  |     at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:64)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:348)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:120)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:430)
hbase-master-1  |     at 
org.apache.hadoop.hbase.util.NettyFutureUtils.lambda$addListener$0(NettyFutureUtils.java:56)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:35)
hbase-master-1  |     at 
org.apache.hadoop.hbase.util.NettyFutureUtils.addListener(NettyFutureUtils.java:52)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:424)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:120)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:482)
hbase-master-1  |     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
hbase-master-1  |     at 
org.apache.hadoop.hbase.util.NettyFutureUtils.lambda$addListener$0(NettyFutureUtils.java:56)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
hbase-master-1  |     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
hbase-master-1  |     at 
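
The "object is not an instance of declaring class" error in the trace above is the generic failure mode of a reflective `Method.invoke` whose receiver does not belong to the method's declaring class; this can happen when two copies of a class (for example shaded and unshaded protobuf) end up on the classpath and the reflectively resolved method comes from the other copy. A minimal sketch, not HBase code, forcing the same exception with a mismatched receiver type:

```java
import java.lang.reflect.Method;

// Minimal sketch (not HBase code): invoking a reflected Method on an object
// that is not an instance of the method's declaring class raises the same
// IllegalArgumentException seen in the log above.
public class ReflectReceiverMismatch {

    // Deliberately invoke String.length() on a StringBuilder receiver.
    static boolean throwsOnWrongReceiver() {
        try {
            Method length = String.class.getMethod("length");
            length.invoke(new StringBuilder("abc")); // receiver is not a String
            return false;
        } catch (IllegalArgumentException e) {
            // e.g. "object is not an instance of declaring class" on JDK 11
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("wrong receiver rejected: " + throwsOnWrongReceiver());
    }
}
```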

[GitHub] [hbase] thangTang commented on a diff in pull request #5107: HBASE-27713 Remove numRegions in Region Metrics

2023-03-16 Thread via GitHub


thangTang commented on code in PR #5107:
URL: https://github.com/apache/hbase/pull/5107#discussion_r1138355039


##
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionAggregateSourceImpl.java:
##
@@ -99,7 +98,6 @@ public void getMetrics(MetricsCollector collector, boolean 
all) {
   ((MetricsRegionSourceImpl) regionMetricSource).snapshot(mrb, all);
 }
   }
-  mrb.addGauge(Interns.info(NUM_REGIONS, NUMBER_OF_REGIONS_DESC), 
regionSources.size());

Review Comment:
   > Have you checked when we add this metrics? What is it used for?
   
   Yes. This metric was introduced in 
[HBASE-14166](https://issues.apache.org/jira/browse/HBASE-14166).
   That ticket aimed to solve the problem that some region metrics could not be 
displayed, but I didn't find any description of this metric itself.
   
   My guess is that the author hoped to use it to check whether the number of 
region metrics met expectations.
   
   Anyway, our code does not depend on it internally, and it is more 
appropriate for users to rely on the region-server-level `regionCount` metric.






[GitHub] [hbase] Apache9 commented on pull request #5106: HBASE-27718 The regionStateNode only need remove once in regionOffline

2023-03-16 Thread via GitHub


Apache9 commented on PR #5106:
URL: https://github.com/apache/hbase/pull/5106#issuecomment-1471581431

   It has been like this since the beginning, when it was introduced in 
HBASE-14614. For me, I do not think we need this extra line to remove it from 
offlineRegions.





[jira] [Commented] (HBASE-27706) Possible Zstd incompatibility

2023-03-16 Thread Frens Jan Rumph (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17701047#comment-17701047
 ] 

Frens Jan Rumph commented on HBASE-27706:
-

Thanks [~apurtell] for the feedback. The incompatibility should indeed be 
documented.

Is there a particular reason for the differences between the Hadoop and HBase 
codecs? Why can't the HBase compression codec manage the underlying zstd stream?

If it is at all within the realm of possibilities: would compatibility be a 
desired feature?
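
One classic way two codecs built on the same underlying algorithm become mutually unreadable is framing. As a general illustration only (using DEFLATE from the JDK rather than the actual zstd code paths, and a simplified 4-byte length prefix rather than Hadoop's exact block format), a block-framed writer's output cannot be consumed by a reader that expects one continuous compressed stream:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch: same compression algorithm, different framing.
// A "block" writer length-prefixes each compressed chunk; a "stream"
// reader expects compressed bytes at offset 0, so the 4-byte length
// header is rejected as an invalid compression header.
public class FramingMismatch {

    static byte[] compress(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!d.finished()) {
            out.write(buf, 0, d.deflate(buf));
        }
        d.end();
        return out.toByteArray();
    }

    // Simplified block framing: [4-byte big-endian length][compressed chunk].
    static byte[] blockFramed(byte[] data) {
        byte[] chunk = compress(data);
        return ByteBuffer.allocate(4 + chunk.length)
            .putInt(chunk.length).put(chunk).array();
    }

    // A stream-style reader fed block-framed bytes fails on the length prefix.
    static boolean streamReaderRejects(byte[] framed) {
        Inflater inf = new Inflater();
        inf.setInput(framed);
        try {
            inf.inflate(new byte[256]);
            return false;
        } catch (DataFormatException e) {
            return true;
        } finally {
            inf.end();
        }
    }

    public static void main(String[] args) {
        byte[] framed = blockFramed("the same bytes, framed differently".getBytes());
        System.out.println("stream reader rejects block-framed input: "
            + streamReaderRejects(framed));
    }
}
```

Whether framing is the actual mechanism behind this particular incompatibility would need confirmation against the two codec implementations; the sketch only shows why codec-level container formats, not just the algorithm, must match.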

> Possible Zstd incompatibility
> -
>
> Key: HBASE-27706
> URL: https://issues.apache.org/jira/browse/HBASE-27706
> Project: HBase
>  Issue Type: Bug
>  Components: compatibility
>Affects Versions: 2.5.3
>Reporter: Frens Jan Rumph
>Priority: Major
>
>  
> We're in the process of upgrading a HBase installation from 2.2.4 to 2.5.3. 
> We're currently using Zstd compression from our Hadoop installation. Due to 
> some other class path issues (Netty issues in relation to the async WAL 
> provider), we would like to remove Hadoop from the class path.
> However, using the Zstd compression from HBase (which uses 
> [https://github.com/luben/zstd-jni]) we seem to hit some incompatibility. 
> When restarting a node to use this implementation we had errors like the 
> following:
> {code:java}
> 2023-03-10 16:33:01,925 WARN  [RS_OPEN_REGION-regionserver/n2:16020-0] 
> handler.AssignRegionHandler: Failed to open region 
> NAMESPACE:TABLE,,1673888962751.cdb726dad4eaabf765969f195e91c737., will report 
> to master
> java.io.IOException: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1148)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1091)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:994)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:941)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7228)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7183)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7159)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7118)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7074)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:147)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:100)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.io.IOException: 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading data 
> index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.regionserver.StoreEngine.openStoreFiles(StoreEngine.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreEngine.initialize(StoreEngine.java:338)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:297)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6359)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1114)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> ... 3 more
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading data index and meta index from file 
> hdfs://CLUSTER/hbase/data/NAMESPACE/TABLE/cdb726dad4eaabf765969f195e91c737/e/aea6eddaa8ee476197d064a4b4c345b9
> at 
> org.apache.hadoop.hbase.io.hfile.HFileInfo.initMetaAndIndex(HFileInfo.java:392)
> at 
> org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:394)
> at 
> org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:518)
> at 
> 

[GitHub] [hbase] Apache-HBase commented on pull request #5111: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5111:
URL: https://github.com/apache/hbase/pull/5111#issuecomment-1471564588

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 43s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 53s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 59s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 39s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 55s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 42s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  40m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5111 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux a63d0ae09a35 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / dbb78388e5 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 85 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5096: HBASE-27702 Remove 'hbase.regionserver.hlog.writer.impl' config

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5096:
URL: https://github.com/apache/hbase/pull/5096#issuecomment-1471560815

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 47s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 42s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 15s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 40s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 20s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  44m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5096 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 08e84b869df4 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 22b0c3e2bd |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 85 (vs. ulimit of 3) |
   | modules | C: hbase-server hbase-it U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5096/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase-operator-tools] Apache-HBase commented on pull request #112: HBASE-27696 [hbase-operator-tools] Use $revision as placeholder for maven version

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #112:
URL: 
https://github.com/apache/hbase-operator-tools/pull/112#issuecomment-1471561006

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  hadolint  |   0m  0s |  hadolint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  8s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 54s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  4s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m  7s |  hbase-table-reporter in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   6m  1s |  hbase-hbck2 in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 32s |  hbase-tools in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m  4s |  hbase-operator-tools-assembly in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   6m 24s |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  21m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-112/7/artifact/yetus-precommit-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase-operator-tools/pull/112 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs javac 
javadoc unit xml compile |
   | uname | Linux 31ab2fa60b98 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 
17:03:22 UTC 2023 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | master / 9e57f86 |
   | Default Java | Oracle Corporation-1.8.0_342-b07 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-112/7/testReport/
 |
   | Max. process+thread count | 1252 (vs. ulimit of 5000) |
   | modules | C: hbase-table-reporter hbase-hbck2 hbase-tools 
hbase-operator-tools-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/job/PR-112/7/console
 |
   | versions | git=2.30.2 maven=3.8.6 shellcheck=0.7.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5112: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2.5

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5112:
URL: https://github.com/apache/hbase/pull/5112#issuecomment-1471552953

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  branch-2.5 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  branch-2.5 passed  |
   | +1 :green_heart: |  spotless  |   0m 40s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 28s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 29s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 37s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  32m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5112 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux e0cad45906c9 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 
10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / d151af1663 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 82 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5112: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2.5

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5112:
URL: https://github.com/apache/hbase/pull/5112#issuecomment-1471549311

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 26s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 21s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 58s |  hbase-common in the patch passed.  
|
   |  |   |  29m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5112 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 86f132889c55 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 
14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / d151af1663 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/testReport/
 |
   | Max. process+thread count | 254 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5111: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5111:
URL: https://github.com/apache/hbase/pull/5111#issuecomment-1471547590

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   5m 49s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 23s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  hbase-common in the patch passed.  |
   |  |   |  28m 40s |   |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/5111 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux caf6a64cba9c 5.4.0-137-generic #154-Ubuntu SMP Thu Jan 5 17:03:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / dbb78388e5 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/testReport/ |
   | Max. process+thread count | 267 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/console |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5111: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5111:
URL: https://github.com/apache/hbase/pull/5111#issuecomment-1471541134

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   5m 19s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m  9s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m  8s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  0s |  hbase-common in the patch passed.  |
   |  |   |  23m 39s |   |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/5111 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0290cf31eb62 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / dbb78388e5 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/testReport/ |
   | Max. process+thread count | 246 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5111/1/console |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #5112: Backport "HBASE-27708 CPU hot-spot resolving User subject" to branch-2.5

2023-03-16 Thread via GitHub


Apache-HBase commented on PR #5112:
URL: https://github.com/apache/hbase/pull/5112#issuecomment-1471536985

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 21s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 21s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 22s |  hbase-common in the patch passed.  |
   |  |   |  20m 20s |   |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/5112 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8424be43a768 5.4.0-1094-aws #102~18.04.1-Ubuntu SMP Tue Jan 10 21:07:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / d151af1663 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/testReport/ |
   | Max. process+thread count | 165 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5112/1/console |
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




