[jira] [Commented] (HBASE-28504) Implement eviction logic for scanners in Rest APIs to prevent scanner leakage

2024-04-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835143#comment-17835143
 ] 

Istvan Toth commented on HBASE-28504:
-

This was originally reported by [~ankit].

> Implement eviction logic for scanners in Rest APIs to prevent scanner leakage
> -
>
> Key: HBASE-28504
> URL: https://issues.apache.org/jira/browse/HBASE-28504
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST API maintains a map of _ScannerInstanceResource_s (which ultimately 
> track Scanner objects).
> The client is expected to delete these after use, but if for any reason it 
> fails to do so, these objects are retained indefinitely.
> Implement logic to evict old scanners automatically.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28504) Implement eviction logic for scanners in Rest APIs to prevent scanner leakage

2024-04-08 Thread Istvan Toth (Jira)
Istvan Toth created HBASE-28504:
---

 Summary: Implement eviction logic for scanners in Rest APIs to 
prevent scanner leakage
 Key: HBASE-28504
 URL: https://issues.apache.org/jira/browse/HBASE-28504
 Project: HBase
  Issue Type: Improvement
  Components: REST
Reporter: Istvan Toth
Assignee: Istvan Toth


The REST API maintains a map of _ScannerInstanceResource_s (which ultimately 
track Scanner objects).

The client is expected to delete these after use, but if for any reason it 
fails to do so, these objects are retained indefinitely.

Implement logic to evict old scanners automatically.
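A minimal sketch of one possible eviction approach, assuming a periodic chore and a per-scanner last-access timestamp. The class and method names (ScannerRegistry, evictIdle) and the 60-second timeout are illustrative assumptions, not the actual HBase REST implementation; a real version would also close the underlying Scanner when evicting its entry.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative registry: tracks the last-access time of each scanner id and
// evicts entries that have been idle longer than a timeout.
public class ScannerRegistry {
    private static final long IDLE_TIMEOUT_MS = 60_000L; // assumed timeout

    private final Map<String, Long> lastAccess = new ConcurrentHashMap<>();

    public void register(String scannerId) {
        lastAccess.put(scannerId, System.currentTimeMillis());
    }

    // Called on every read so active scanners are never evicted.
    public void touch(String scannerId) {
        lastAccess.computeIfPresent(scannerId, (id, t) -> System.currentTimeMillis());
    }

    // Called periodically (e.g. from a scheduled chore); returns the number
    // of entries dropped. A real implementation would also close the
    // underlying Scanner here.
    public int evictIdle(long now) {
        int evicted = 0;
        Iterator<Map.Entry<String, Long>> it = lastAccess.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue() > IDLE_TIMEOUT_MS) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public int size() {
        return lastAccess.size();
    }
}
```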





Re: [PR] HBASE-28405 - Fix failed procedure rollback when region was not close… [hbase]

2024-04-08 Thread via GitHub


mnpoonia commented on PR #5799:
URL: https://github.com/apache/hbase/pull/5799#issuecomment-2044165443

   > The latest fix LGTM.
   > 
   > Could we add a UT for this case?
   
   Trying to write a test. It's a little tricky, as I haven't looked at 
ProcedureTestingUtility yet. Will see what I can come up with.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28493 [hbase-thirdparty] Bump protobuf version [hbase-thirdparty]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #117:
URL: https://github.com/apache/hbase-thirdparty/pull/117#issuecomment-2044132631

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 45s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  6s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 10s |  hbase-shaded-protobuf in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 34s |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate ASF License warnings.  |
   |  |   |   5m  5s |   |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-117/1/artifact/yetus-precommit-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/117 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile |
   | uname | Linux ad424a5fdee1 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | master / 59dd9e3 |
   | Default Java | Temurin-1.8.0_402-b06 |
   | Test Results | https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-117/1/testReport/ |
   | Max. process+thread count | 370 (vs. ulimit of 1000) |
   | modules | C: hbase-shaded-protobuf . U: . |
   | Console output | https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-117/1/console |
   | versions | git=2.34.1 maven=3.9.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HBASE-28493) [hbase-thirdparty] Bump protobuf version

2024-04-08 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28493:
--
Release Note: Bump protobuf-java from 3.25.2 to 4.26.1.

> [hbase-thirdparty] Bump protobuf version
> 
>
> Key: HBASE-28493
> URL: https://issues.apache.org/jira/browse/HBASE-28493
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, Protobufs, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>






[jira] [Work started] (HBASE-28493) [hbase-thirdparty] Bump protobuf version

2024-04-08 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-28493 started by Duo Zhang.
-
> [hbase-thirdparty] Bump protobuf version
> 
>
> Key: HBASE-28493
> URL: https://issues.apache.org/jira/browse/HBASE-28493
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, Protobufs, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>






[jira] [Assigned] (HBASE-28493) [hbase-thirdparty] Bump protobuf version

2024-04-08 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-28493:
-

Assignee: Duo Zhang

> [hbase-thirdparty] Bump protobuf version
> 
>
> Key: HBASE-28493
> URL: https://issues.apache.org/jira/browse/HBASE-28493
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, Protobufs, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>






[jira] [Updated] (HBASE-26192) Master UI hbck should provide a JSON formatted output option

2024-04-08 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-26192:

Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Master UI hbck should provide a JSON formatted output option
> 
>
> Key: HBASE-26192
> URL: https://issues.apache.org/jira/browse/HBASE-26192
> Project: HBase
>  Issue Type: New Feature
>Reporter: Andrew Kyle Purtell
>Assignee: Mihir Monani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1, 2.5.9
>
> Attachments: HBCK Report in JSON Format.png, Screen Shot 2022-05-31 
> at 5.18.15 PM.png
>
>
> It used to be possible to get hbck's verdict of cluster status from the 
> command line, especially useful for headless deployments, i.e. without 
> requiring a browser with sufficient connectivity to load a UI, or scrape 
> information out of raw HTML, or write regex to comb over log4j output. The 
> hbck tool's output wasn't particularly convenient to parse but it was 
> straightforward to extract the desired information with a handful of regular 
> expressions. 
> HBCK2 has a different design philosophy than the old hbck, which is to serve 
> as a collection of small and discrete recovery and repair functions, rather 
> than attempt to be a universal repair tool. This makes a lot of sense and 
> isn't the issue at hand. Unfortunately the old hbck's utility for reporting 
> the current cluster health assessment has not been replaced either in whole 
> or in part. Instead:
> {quote}
> HBCK2 is for fixes. For listings of inconsistencies or blockages in the 
> running cluster, you go elsewhere, to the logs and UI of the running cluster 
> Master. Once an issue has been identified, you use the HBCK2 tool to ask the 
> Master to effect fixes or to skip-over bad state. Asking the Master to make 
> the fixes rather than try and effect the repair locally in a fix-it tool's 
> context is another important difference between HBCK2 and hbck1. 
> {quote}
> Developing custom tooling to mine logs and scrape UI simply to gain a top 
> level assessment of system health is unsatisfying. There should be a 
> convenient means for querying the system if issues that rise to the level of 
> _inconsistency_, in the hbck parlance, are believed to be present. It would 
> be relatively simple to bring back the experience of invoking a command line 
> tool to deliver a verdict. This could be added to the hbck2 tool itself, but 
> given that hbase-operator-tools is a separate project, an intrinsic solution 
> is desirable. 
> An option that immediately comes to mind is modification of the Master's 
> hbck.jsp page to provide a JSON formatted output option if the HTTP Accept 
> header asks for text/json. However, looking at the source of hbck.jsp, it 
> makes more sense to leave it as is and implement a convenient machine 
> parseable output format elsewhere. This can be trivially accomplished with a 
> new servlet. Like hbck.jsp the servlet implementation would get a reference 
> to HbckChore and present the information this class makes available via its 
> various getters.  
> The machine parseable output is sufficient to enable headless hbck status 
> checking but it still would be nice if we could provide operators a command 
> line tool that formats the information for convenient viewing in a terminal. 
> That part could be implemented in the hbck2 tool after this proposal is 
> implemented.
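As a rough illustration of the proposed servlet's output side, the sketch below renders report fields as JSON by hand. The field names (checkTimestamp, inconsistentRegions, orphanRegionsOnFS) are hypothetical, not the actual HbckChore getters; a real servlet would obtain the values from HbckChore and serve them with an application/json content type, and would likely use an existing JSON library rather than manual string building.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative formatter: turns a flat map of report fields into a JSON
// object. Strings are quoted, numbers are emitted as-is.
public class HbckJsonFormatter {
    public static String toJson(Map<String, Object> report) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : report.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append("\"").append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) {
                sb.append(v);
            } else {
                sb.append("\"").append(v).append("\"");
            }
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        // Hypothetical report fields; a servlet would read these from HbckChore.
        Map<String, Object> report = new LinkedHashMap<>();
        report.put("checkTimestamp", 1712534400000L);
        report.put("inconsistentRegions", 0);
        report.put("orphanRegionsOnFS", 0);
        System.out.println(toJson(report));
    }
}
```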





Re: [PR] HBASE-26192 Master UI hbck should provide a JSON formatted output option [hbase]

2024-04-08 Thread via GitHub


apurtell merged PR #5780:
URL: https://github.com/apache/hbase/pull/5780





[jira] [Commented] (HBASE-28481) Prompting table already exists after failing to create table with many region replications

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835060#comment-17835060
 ] 

Hudson commented on HBASE-28481:


Results for branch branch-3
[build #181 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Prompting table already exists after failing to create table with many region 
> replications
> --
>
> Key: HBASE-28481
> URL: https://issues.apache.org/jira/browse/HBASE-28481
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.13
> Environment: Centos
>Reporter: guluo
>Assignee: guluo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> Reproduction steps:
> {code:java}
> # Create a table with 65537 region replicas.
> # We get the following error; this step behaves as expected.
> hbase:005:0> create 't01', 'info', {REGION_REPLICATION => 65537}
> ERROR: java.lang.IllegalArgumentException: ReplicaId cannot be greater 
> than65535
> For usage try 'help "create"'
> Took 0.7590 seconds{code}
> {code:java}
> # list shows that the table does not exist:
> hbase:006:0> list
> TABLE
> 0 row(s)
> Took 0.0100 seconds
> => []{code}
> {code:java}
> # We create the same table again in the correct way,
> # and get an error that the table already exists.
> hbase:007:0> create 't01', 'info'
> ERROR: Table already exists: t01!
> For usage try 'help "create"'
> Took 0.1210 seconds{code}
> Reason:
> In CreateTableProcedure, the table descriptor is written to the cluster at 
> stage CREATE_TABLE_WRITE_FS_LAYOUT:
> {code:java}
> env.getMasterServices().getTableDescriptors().update(tableDescriptor, true); 
> {code}
> Only afterwards is the region replication count validated, at stage 
> CREATE_TABLE_ADD_TO_META:
> {code:java}
> newRegions = addTableToMeta(env, tableDescriptor, newRegions);
> // MutableRegionInfo.checkReplicaId
> private static int checkReplicaId(int regionId) {
>   if (regionId > MAX_REPLICA_ID) {
>     throw new IllegalArgumentException("ReplicaId cannot be greater than" +
>       MAX_REPLICA_ID);
>   }
>   return regionId;
> }{code}
> So we cannot create a table with the same name in the correct way after 
> failing to create it with too many region replicas (more than 65536), 
> because the table descriptor has already been written to the cluster and 
> there is no rollback.
> I think we should validate the region replication count at stage 
> CREATE_TABLE_PRE_OPERATION to avoid this problem.
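A minimal sketch of the proposed early validation, assuming the same 0xFFFF limit as MutableRegionInfo.MAX_REPLICA_ID. The method name validateRegionReplication is illustrative; the point is that the check runs before any descriptor or filesystem state is written, so a failure leaves nothing to roll back.

```java
// Illustrative pre-flight check for CREATE_TABLE_PRE_OPERATION: reject an
// illegal REGION_REPLICATION before the table descriptor is persisted.
public class ReplicaCountCheck {
    public static final int MAX_REPLICA_ID = 0xFFFF; // 65535, as in MutableRegionInfo

    public static void validateRegionReplication(int regionReplication) {
        // Replica ids run from 0 to regionReplication - 1, so the highest id
        // produced by this count must not exceed MAX_REPLICA_ID.
        if (regionReplication - 1 > MAX_REPLICA_ID) {
            throw new IllegalArgumentException(
                "REGION_REPLICATION " + regionReplication
                    + " would produce a ReplicaId greater than " + MAX_REPLICA_ID);
        }
    }
}
```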





[jira] [Commented] (HBASE-28478) Remove the hbase1 compatible code in FixedFileTrailer

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835061#comment-17835061
 ] 

Hudson commented on HBASE-28478:


Results for branch branch-3
[build #181 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove the hbase1 compatible code in FixedFileTrailer
> -
>
> Key: HBASE-28478
> URL: https://issues.apache.org/jira/browse/HBASE-28478
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0-beta-2
>
>






[jira] [Commented] (HBASE-28183) It's impossible to re-enable the quota table if it gets disabled

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835062#comment-17835062
 ] 

Hudson commented on HBASE-28183:


Results for branch branch-3
[build #181 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> It's impossible to re-enable the quota table if it gets disabled
> 
>
> Key: HBASE-28183
> URL: https://issues.apache.org/jira/browse/HBASE-28183
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Assignee: Chandra Sekhar K
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
>
> HMaster.enableTable tries to read the quota table. If the quota table is 
> disabled, this read fails, so it becomes impossible to re-enable it. The only 
> workaround I can find is to delete the table, so that it gets recreated at 
> startup, but that loses any quotas you had defined. We should fix 
> enableTable to skip the quota check when the table in question is hbase:quota.
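The proposed fix amounts to a guard like the following sketch. The class and method names are illustrative, not the actual HMaster code; the real check would compare against the quota table's TableName rather than a string.

```java
// Illustrative guard: enabling the quota table itself must not require
// reading the quota table.
public class EnableTableGuard {
    public static final String QUOTA_TABLE = "hbase:quota";

    // Returns true when enableTable should consult quotas for this table.
    public static boolean shouldCheckQuota(String tableName, boolean quotaSupportEnabled) {
        // Skip the lookup for hbase:quota: reading quotas for the quota table
        // would fail while that table is still disabled.
        return quotaSupportEnabled && !QUOTA_TABLE.equals(tableName);
    }
}
```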





[jira] [Commented] (HBASE-28457) Introduce a version field in file based tracker record

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835059#comment-17835059
 ] 

Hudson commented on HBASE-28457:


Results for branch branch-3
[build #181 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/181/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce a version field in file based tracker record
> --
>
> Key: HBASE-28457
> URL: https://issues.apache.org/jira/browse/HBASE-28457
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
>
> Per the discussion around HBASE-27826 and the related design doc, we all 
> agree that we should add a version field to the store file tracker record. 
> When downgrading, an older version will then detect that a tracker file was 
> written with a higher version and fail initialization, instead of silently 
> ignoring it and possibly causing data loss.
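The intended downgrade behaviour could look like this sketch: reject tracker records whose version is newer than the reader supports, instead of silently ignoring content the reader cannot understand. The constant and method are illustrative assumptions, not the actual store file tracker format.

```java
// Illustrative version gate for loading a tracker record.
public class TrackerVersionCheck {
    public static final int CURRENT_VERSION = 1; // assumed current format version

    public static void checkVersion(int recordVersion) {
        if (recordVersion > CURRENT_VERSION) {
            // Fail initialization on a newer record rather than silently
            // dropping fields we cannot parse, which could lose data.
            throw new IllegalStateException("Tracker record version " + recordVersion
                + " is newer than supported version " + CURRENT_VERSION);
        }
    }
}
```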





Re: [PR] HBASE-26192 Master UI hbck should provide a JSON formatted output option [hbase]

2024-04-08 Thread via GitHub


apurtell merged PR #5772:
URL: https://github.com/apache/hbase/pull/5772





Re: [PR] HBASE-26192 Master UI hbck should provide a JSON formatted output option [hbase]

2024-04-08 Thread via GitHub


apurtell commented on PR #5780:
URL: https://github.com/apache/hbase/pull/5780#issuecomment-2043644026

   No further comments, going to merge. Thank you @mihir6692 for the 
contribution.





Re: [PR] HBASE-26192 Master UI hbck should provide a JSON formatted output option [hbase]

2024-04-08 Thread via GitHub


apurtell commented on PR #5772:
URL: https://github.com/apache/hbase/pull/5772#issuecomment-2043644217

   No further comments, going to merge. Thank you @mihir6692 for the 
contribution





[jira] [Commented] (HBASE-28183) It's impossible to re-enable the quota table if it gets disabled

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835039#comment-17835039
 ] 

Hudson commented on HBASE-28183:


Results for branch branch-2.5
[build #506 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> It's impossible to re-enable the quota table if it gets disabled
> 
>
> Key: HBASE-28183
> URL: https://issues.apache.org/jira/browse/HBASE-28183
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Assignee: Chandra Sekhar K
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
>
> HMaster.enableTable tries to read the quota table. If the quota table is 
> disabled, this read fails, so it becomes impossible to re-enable it. The only 
> workaround I can find is to delete the table, so that it gets recreated at 
> startup, but that loses any quotas you had defined. We should fix 
> enableTable to skip the quota check when the table in question is hbase:quota.





[jira] [Commented] (HBASE-28481) Prompting table already exists after failing to create table with many region replications

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835038#comment-17835038
 ] 

Hudson commented on HBASE-28481:


Results for branch branch-2.5
[build #506 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Prompting table already exists after failing to create table with many region 
> replications
> --
>
> Key: HBASE-28481
> URL: https://issues.apache.org/jira/browse/HBASE-28481
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.13
> Environment: Centos
>Reporter: guluo
>Assignee: guluo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> Reproduction steps:
> {code:java}
> # Create a table with 65537 region replicas.
> # We get the following error; this step behaves as expected.
> hbase:005:0> create 't01', 'info', {REGION_REPLICATION => 65537}
> ERROR: java.lang.IllegalArgumentException: ReplicaId cannot be greater 
> than65535
> For usage try 'help "create"'
> Took 0.7590 seconds{code}
> {code:java}
> # list shows that the table does not exist:
> hbase:006:0> list
> TABLE
> 0 row(s)
> Took 0.0100 seconds
> => []{code}
> {code:java}
> # We create the same table again in the correct way,
> # and get an error that the table already exists.
> hbase:007:0> create 't01', 'info'
> ERROR: Table already exists: t01!
> For usage try 'help "create"'
> Took 0.1210 seconds{code}
> Reason:
> In CreateTableProcedure, the table descriptor is written to the cluster at 
> stage CREATE_TABLE_WRITE_FS_LAYOUT:
> {code:java}
> env.getMasterServices().getTableDescriptors().update(tableDescriptor, true); 
> {code}
> Only afterwards is the region replication count validated, at stage 
> CREATE_TABLE_ADD_TO_META:
> {code:java}
> newRegions = addTableToMeta(env, tableDescriptor, newRegions);
> // MutableRegionInfo.checkReplicaId
> private static int checkReplicaId(int regionId) {
>   if (regionId > MAX_REPLICA_ID) {
>     throw new IllegalArgumentException("ReplicaId cannot be greater than" +
>       MAX_REPLICA_ID);
>   }
>   return regionId;
> }{code}
> So we cannot create a table with the same name in the correct way after 
> failing to create it with too many region replicas (more than 65536), 
> because the table descriptor has already been written to the cluster and 
> there is no rollback.
> I think we should validate the region replication count at stage 
> CREATE_TABLE_PRE_OPERATION to avoid this problem.





[jira] [Commented] (HBASE-28457) Introduce a version field in file based tracker record

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835037#comment-17835037
 ] 

Hudson commented on HBASE-28457:


Results for branch branch-2.5
[build #506 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/506/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce a version field in file based tracker record
> --
>
> Key: HBASE-28457
> URL: https://issues.apache.org/jira/browse/HBASE-28457
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
>
> Per the discussion around HBASE-27826 and the related design doc, we all 
> agree that we should add a version field to the store file tracker record. 
> When downgrading, an older version will then detect that a tracker file was 
> written with a higher version and fail initialization, instead of silently 
> ignoring it and possibly causing data loss.





[jira] [Commented] (HBASE-28481) Prompting table already exists after failing to create table with many region replications

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835033#comment-17835033
 ] 

Hudson commented on HBASE-28481:


Results for branch master
[build #1047 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Prompting table already exists after failing to create table with many region 
> replications
> --
>
> Key: HBASE-28481
> URL: https://issues.apache.org/jira/browse/HBASE-28481
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.13
> Environment: Centos
>Reporter: guluo
>Assignee: guluo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> Reproduction steps:
> {code:java}
> # Create a table with 65537 region replicas 
> # the create fails with the error below, as expected 
> hbase:005:0> create 't01', 'info', {REGION_REPLICATION => 65537} 
> ERROR: java.lang.IllegalArgumentException: ReplicaId cannot be greater than65535 
> For usage try 'help "create"' 
> Took 0.7590 seconds{code}
> {code:java}
> # list shows that the table does not exist, as follows 
> hbase:006:0> list 
> TABLE 
> 0 row(s) 
> Took 0.0100 seconds 
> => []{code}
> {code:java}
> # we create this table again in the correct way 
> # we get a message that the table already exists 
> hbase:007:0> create 't01', 'info' 
> ERROR: Table already exists: t01! 
> For usage try 'help "create"' 
> Took 0.1210 seconds {code}
>  
> Reason:
> In the CreateTableProcedure, we persist this table descriptor to the HBase 
> cluster at stage CREATE_TABLE_WRITE_FS_LAYOUT
>  
> {code:java}
> env.getMasterServices().getTableDescriptors().update(tableDescriptor, true); 
> {code}
>  
> and only then do we check whether the region replication count is legal, at 
> stage CREATE_TABLE_ADD_TO_META.
>  
>  
> {code:java}
> newRegions = addTableToMeta(env, tableDescriptor, newRegions);
> // MutableRegionInfo.checkReplicaId 
> private static int checkReplicaId(int regionId) {     
>   if (regionId > MAX_REPLICA_ID) {         
> throw new IllegalArgumentException("ReplicaId cannot be greater than" + 
>  MAX_REPLICA_ID);    
>}     
> return regionId;
> }{code}
>  
>  
> So, we cannot create a table with the same name in the correct way after 
> failing to create it with too many region replicas (more than 65536), 
> because the table descriptor has already been persisted to the cluster and 
> there is no rollback.
> I think we can validate the region replication count at stage 
> CREATE_TABLE_PRE_OPERATION to avoid this problem.
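A minimal sketch of the suggested early validation, under assumed names (`ReplicaCountCheck` and `validateRegionReplication` are illustrative, not HBase's actual API): the replica count is checked before anything is persisted, so a bad request leaves no state behind.

```java
// Hypothetical sketch: validate the region replication count up front
// (analogous to doing it at CREATE_TABLE_PRE_OPERATION), before the table
// descriptor is written anywhere.
public class ReplicaCountCheck {
    static final int MAX_REPLICA_ID = 0xFFFF; // 65535, as in MutableRegionInfo

    public static void validateRegionReplication(int regionReplication) {
        // The largest replica id is regionReplication - 1, so more than
        // 65536 replicas must be rejected before touching persistent state.
        if (regionReplication <= 0 || regionReplication - 1 > MAX_REPLICA_ID) {
            throw new IllegalArgumentException(
                "REGION_REPLICATION " + regionReplication + " is out of range");
        }
    }
}
```

Failing here, rather than at CREATE_TABLE_ADD_TO_META, means the descriptor is never persisted and the table name is not left in a half-created state.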



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28478) Remove the hbase1 compatible code in FixedFileTrailer

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835034#comment-17835034
 ] 

Hudson commented on HBASE-28478:


Results for branch master
[build #1047 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove the hbase1 compatible code in FixedFileTrailer
> -
>
> Key: HBASE-28478
> URL: https://issues.apache.org/jira/browse/HBASE-28478
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0-beta-2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28183) It's impossible to re-enable the quota table if it gets disabled

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835035#comment-17835035
 ] 

Hudson commented on HBASE-28183:


Results for branch master
[build #1047 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1047/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> It's impossible to re-enable the quota table if it gets disabled
> 
>
> Key: HBASE-28183
> URL: https://issues.apache.org/jira/browse/HBASE-28183
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Assignee: Chandra Sekhar K
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
>
> HMaster.enableTable tries to read the quota table. If you disable the quota 
> table, this fails. So then it's impossible to re-enable it. The only solution 
> I can find is to delete the table at this point, so that it gets recreated at 
> startup, but this results in losing any quotas you had defined.  We should 
> fix enableTable to not check quotas if the table in question is hbase:quota.
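A minimal sketch of the fix described above, under assumed names (`EnableTableGuard` and `QuotaReader` are illustrative stand-ins, not the actual HMaster API): the quota lookup is skipped when the table being enabled is the quota table itself.

```java
// Hypothetical sketch: reading quotas requires the quota table to be online,
// so enableTable must not consult quotas when (re-)enabling hbase:quota itself.
public class EnableTableGuard {
    static final String QUOTA_TABLE = "hbase:quota";

    // Stand-in for the component that reads the hbase:quota table.
    interface QuotaReader {
        void checkQuota(String table);
    }

    public static void enableTable(String table, QuotaReader quotas) {
        if (!QUOTA_TABLE.equals(table)) {
            // Safe: the target is not the quota table, so the lookup can
            // proceed as usual.
            quotas.checkQuota(table);
        }
        // ... continue with the enable procedure ...
    }
}
```

With this guard, disabling hbase:quota no longer wedges the cluster into a state where the table can only be recovered by deleting it.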



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2043324987

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 40s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 31s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m  7s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  27m 35s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 53ec2a3919e9 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/testReport/
 |
   | Max. process+thread count | 497 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2043323670

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 54s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m  9s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 54s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 47s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 54s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  26m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 28e0cccdc7c6 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2043320760

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  8s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 17s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m  7s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  25m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a88e2daf959f 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/testReport/
 |
   | Max. process+thread count | 473 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2043320252

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 11s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  7s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 17s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  25m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c6b7634adf63 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 
13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/testReport/
 |
   | Max. process+thread count | 462 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5605:
URL: https://github.com/apache/hbase/pull/5605#issuecomment-2043319729

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 15s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 24s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 20s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 281m 16s |  hbase-server in the patch passed.  
|
   |  |   | 309m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5605 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 72e0904e285e 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/testReport/
 |
   | Max. process+thread count | 4858 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-08 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HBASE-28485:

Status: Patch Available  (was: Open)

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation 
> [recommends|https://facebook.github.io/zstd/zstd_manual.html#Chapter4] 
> re-using context objects when possible, because their creation has some 
> expense. They can be more cheaply reset than re-created. In 
> {{ZstdDecompressor}} and {{{}ZstdCompressor{}}}, we create a new context 
> object for every call to {{decompress()}} and {{{}compress(){}}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.
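A minimal sketch of the reuse pattern described above. The real patch targets `ZstdCompressCtx`/`ZstdDecompressCtx` from zstd-jni; since that library is not part of the stdlib, `ExpensiveCtx` below is an illustrative stand-in for a context whose construction is costly but whose reset is cheap.

```java
// Hypothetical sketch: keep one long-lived context per thread instead of
// constructing a new one on every compress()/decompress() call.
public class CtxReuse {
    static class ExpensiveCtx {
        static int constructions = 0;
        ExpensiveCtx() { constructions++; } // costly: allocation, native init
        void reset() { /* cheap: clear internal state for the next use */ }
    }

    // One context per thread; threads never share a context, so no locking.
    private static final ThreadLocal<ExpensiveCtx> CTX =
        ThreadLocal.withInitial(ExpensiveCtx::new);

    public static int compress(byte[] data) {
        ExpensiveCtx ctx = CTX.get();
        ctx.reset(); // resetting is much cheaper than re-constructing
        return data.length; // stand-in for the actual compression call
    }
}
```

A `ThreadLocal` is one way to make the cached context safe without synchronization; a pooled approach would work as well if context memory footprint is a concern.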



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5605:
URL: https://github.com/apache/hbase/pull/5605#issuecomment-2043233488

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 237m 19s |  hbase-server in the patch passed.  
|
   |  |   | 259m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5605 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7833715f1919 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/testReport/
 |
   | Max. process+thread count | 5294 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28405 - Fix failed procedure rollback when region was not close… [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5799:
URL: https://github.com/apache/hbase/pull/5799#issuecomment-2043228894

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 27s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 259m 40s |  hbase-server in the patch failed.  |
   |  |   | 287m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5799 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 51b8a463b70a 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/testReport/
 |
   | Max. process+thread count | 4625 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5605:
URL: https://github.com/apache/hbase/pull/5605#issuecomment-2043190919

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 19s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 16s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 211m 52s |  hbase-server in the patch passed.  
|
   |  |   | 235m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5605 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux edb3b02ffd13 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/testReport/
 |
   | Max. process+thread count | 4981 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-28405 - Fix failed procedure rollback when region was not close… [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5799:
URL: https://github.com/apache/hbase/pull/5799#issuecomment-2043165670

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 48s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 40s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 229m 58s |  hbase-server in the patch failed.  |
   |  |   | 252m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5799 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c78ee2446b6a 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/testReport/
 |
   | Max. process+thread count | 5336 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-28405 - Fix failed procedure rollback when region was not close… [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5799:
URL: https://github.com/apache/hbase/pull/5799#issuecomment-2043161214

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  0s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 11s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 226m 58s |  hbase-server in the patch failed.  |
   |  |   | 251m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5799 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ab15e4607c8c 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-jdk17-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/testReport/
 |
   | Max. process+thread count | 4615 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HBASE-28447) New configuration to override the hfile specific blocksize

2024-04-08 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-28447:

Fix Version/s: 2.6.0
   2.7.0
   3.0.0-beta-2
   2.5.9

> New configuration to override the hfile specific blocksize
> --
>
> Key: HBASE-28447
> URL: https://issues.apache.org/jira/browse/HBASE-28447
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gourab Taparia
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 2.6.0, 2.7.0, 3.0.0-beta-2, 2.5.9
>
>
> Right now there is no config attached to the HFile block size by which we can 
> override the default. The default is set to 64 KB in 
> HConstants.DEFAULT_BLOCKSIZE. We need a global config property in 
> hbase-site.xml which can control this value.
> Since BLOCKSIZE is tracked at the column family level, we will need to 
> respect the CFD value first. Configuration settings can also be set in the 
> schema, at the column or table level, and will override the relevant values 
> from the site file. Below is the precedence order we can use to get the 
> final blocksize value:
> {code:java}
> ColumnFamilyDescriptor.BLOCKSIZE > schema level site configuration overrides 
> > site configuration > HConstants.DEFAULT_BLOCKSIZE{code}
> PS: There is one related config “hbase.mapreduce.hfileoutputformat.blocksize” 
> however that is specific to map-reduce jobs.
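The precedence order quoted above can be sketched as a small resolution helper. This is a hypothetical illustration of the proposal, not actual HBase code: the method name `resolveBlockSize` and the way unset values are represented (`-1` / `null`) are assumptions for the sketch.

```java
// Hypothetical sketch of the proposed blocksize resolution order.
// Precedence: ColumnFamilyDescriptor.BLOCKSIZE > schema-level site
// configuration override > site configuration > HConstants.DEFAULT_BLOCKSIZE.
public class BlockSizeResolution {
    // Mirrors HConstants.DEFAULT_BLOCKSIZE (64 KB)
    static final int DEFAULT_BLOCKSIZE = 64 * 1024;

    // cfdBlockSize: value from ColumnFamilyDescriptor.BLOCKSIZE, or -1 if unset.
    // schemaOverride / siteConfig: schema-level override and hbase-site.xml
    // values, or null if absent.
    static int resolveBlockSize(int cfdBlockSize, Integer schemaOverride,
                                Integer siteConfig) {
        if (cfdBlockSize > 0) {
            return cfdBlockSize;        // column family descriptor wins
        }
        if (schemaOverride != null) {
            return schemaOverride;      // schema-level site configuration override
        }
        if (siteConfig != null) {
            return siteConfig;          // global hbase-site.xml property
        }
        return DEFAULT_BLOCKSIZE;       // hard-coded fallback
    }

    public static void main(String[] args) {
        if (resolveBlockSize(32768, 16384, 8192) != 32768) throw new AssertionError();
        if (resolveBlockSize(-1, 16384, 8192) != 16384) throw new AssertionError();
        if (resolveBlockSize(-1, null, 8192) != 8192) throw new AssertionError();
        if (resolveBlockSize(-1, null, null) != DEFAULT_BLOCKSIZE) throw new AssertionError();
        System.out.println("ok");
    }
}
```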



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28447) New configuration to override the hfile specific blocksize

2024-04-08 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell reassigned HBASE-28447:
---

Assignee: Andrew Kyle Purtell  (was: Gourab Taparia)

> New configuration to override the hfile specific blocksize
> --
>
> Key: HBASE-28447
> URL: https://issues.apache.org/jira/browse/HBASE-28447
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gourab Taparia
>Assignee: Andrew Kyle Purtell
>Priority: Minor
>
> Right now there is no config attached to the HFile block size by which we can 
> override the default. The default is set to 64 KB in 
> HConstants.DEFAULT_BLOCKSIZE. We need a global config property in 
> hbase-site.xml which can control this value.
> Since BLOCKSIZE is tracked at the column family level, we will need to 
> respect the CFD value first. Configuration settings can also be set in the 
> schema, at the column or table level, and will override the relevant values 
> from the site file. Below is the precedence order we can use to get the 
> final blocksize value:
> {code:java}
> ColumnFamilyDescriptor.BLOCKSIZE > schema level site configuration overrides 
> > site configuration > HConstants.DEFAULT_BLOCKSIZE{code}
> PS: There is one related config “hbase.mapreduce.hfileoutputformat.blocksize” 
> however that is specific to map-reduce jobs.





[jira] [Commented] (HBASE-28447) New configuration to override the hfile specific blocksize

2024-04-08 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834963#comment-17834963
 ] 

Andrew Kyle Purtell commented on HBASE-28447:
-

(y)

> New configuration to override the hfile specific blocksize
> --
>
> Key: HBASE-28447
> URL: https://issues.apache.org/jira/browse/HBASE-28447
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gourab Taparia
>Assignee: Gourab Taparia
>Priority: Minor
>
> Right now there is no config attached to the HFile block size by which we can 
> override the default. The default is set to 64 KB in 
> HConstants.DEFAULT_BLOCKSIZE. We need a global config property in 
> hbase-site.xml which can control this value.
> Since BLOCKSIZE is tracked at the column family level, we will need to 
> respect the CFD value first. Configuration settings can also be set in the 
> schema, at the column or table level, and will override the relevant values 
> from the site file. Below is the precedence order we can use to get the 
> final blocksize value:
> {code:java}
> ColumnFamilyDescriptor.BLOCKSIZE > schema level site configuration overrides 
> > site configuration > HConstants.DEFAULT_BLOCKSIZE{code}
> PS: There is one related config “hbase.mapreduce.hfileoutputformat.blocksize” 
> however that is specific to map-reduce jobs.





Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-08 Thread via GitHub


apurtell commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2043090038

   A note for any other reviewers. The difference between failed precommit runs 
and successful ones is 
[d6843d3](https://github.com/apache/hbase/pull/5797/commits/d6843d3958aa6f8b1b03024e29f12e3a9e99df1b)
 . 





[jira] [Commented] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-08 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834962#comment-17834962
 ] 

Andrew Kyle Purtell commented on HBASE-28485:
-

[~charlesconnell] thank you.
I approved the PR.

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation 
> [recommends|https://facebook.github.io/zstd/zstd_manual.html#Chapter4] 
> re-using context objects when possible, because their creation has some 
> expense. They can be more cheaply reset than re-created. In 
> {{ZstdDecompressor}} and {{{}ZstdCompressor{}}}, we create a new context 
> object for every call to {{decompress()}} and {{{}compress(){}}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.





Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-08 Thread via GitHub


apurtell commented on code in PR #5797:
URL: https://github.com/apache/hbase/pull/5797#discussion_r1556053315


##
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCompressor.java:
##
@@ -170,6 +170,13 @@ public void reset() {
 bytesWritten = 0;
 finish = false;
 finished = false;
+ctx.setLevel(level);
+if (dict != null) {
+  ctx.loadDict(dict);
+} else {
+  // loadDict((byte[]) accepts null to clear the dictionary

Review Comment:
   Did not know this was possible. Nice optimization.
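The reviewed reset() pattern — re-apply the level and dictionary on one long-lived context instead of constructing a fresh native context per compression — can be sketched generically. `ReusableCtx` below is a hypothetical stand-in for zstd-jni's `ZstdCompressCtx`, not the real API; the allocation counter only simulates the costly native initialization the patch avoids.

```java
// Hypothetical sketch: reuse a single context and reset its state per call
// instead of paying for native context construction on every compression.
public class ReusableCtxDemo {
    static int allocations = 0;

    // Stand-in for zstd-jni's ZstdCompressCtx (assumed shape, not the real class).
    static class ReusableCtx {
        int level;
        byte[] dict;
        ReusableCtx() { allocations++; }                 // simulates costly native init
        void setLevel(int level) { this.level = level; }
        void loadDict(byte[] dict) { this.dict = dict; } // null clears the dictionary
    }

    static final ReusableCtx ctx = new ReusableCtx();    // created once, reused

    // Mirrors the patched reset(): re-apply level and dictionary on the shared
    // context rather than constructing a new one for each compress() call.
    static void reset(int level, byte[] dict) {
        ctx.setLevel(level);
        if (dict != null) {
            ctx.loadDict(dict);
        } else {
            ctx.loadDict((byte[]) null);                 // clear any loaded dictionary
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            reset(3, null);                              // one reset per compression
        }
        if (allocations != 1) throw new AssertionError("context should be allocated once");
        System.out.println("ok");
    }
}
```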






[jira] [Resolved] (HBASE-28465) Implementation of framework for time-based priority bucket-cache.

2024-04-08 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-28465.
--
Resolution: Fixed

Merged into the [HBASE-28463|https://github.com/apache/hbase/tree/HBASE-28463] 
feature branch.

> Implementation of framework for time-based priority bucket-cache.
> -
>
> Key: HBASE-28465
> URL: https://issues.apache.org/jira/browse/HBASE-28465
> Project: HBase
>  Issue Type: Task
>Reporter: Janardhan Hungund
>Assignee: Vinayak Hegde
>Priority: Major
>  Labels: pull-request-available
>
> In this Jira, we track the implementation of the framework for the time-based 
> priority cache.
> This framework would help us get the required metadata of the HFiles and 
> make decisions about the hotness or coldness of data.
> Thanks,
> Janardhan





Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


wchevreuil merged PR #5793:
URL: https://github.com/apache/hbase/pull/5793





Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


vinayakphegde commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2043004482

   @wchevreuil, it seems like most of the tests have passed, and any failures 
were due to flaky tests.





Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2042968954

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  HBASE-28463 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 278m 11s |  hbase-server in the patch failed.  |
   |  |   | 300m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 30591c443986 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 
13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/testReport/
 |
   | Max. process+thread count | 5764 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2042894870

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 56s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  HBASE-28463 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 37s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 248m 11s |  hbase-server in the patch passed.  
|
   |  |   | 273m 42s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 949ec8740130 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/testReport/
 |
   | Max. process+thread count | 4817 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HBASE-28437) Region Server crash in our production environment.

2024-04-08 Thread chaijunjie (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834940#comment-17834940
 ] 

chaijunjie edited comment on HBASE-28437 at 4/8/24 2:22 PM:


It is similar to https://issues.apache.org/jira/browse/HBASE-28060.
You can use simple RPC instead of Netty RPC to avoid it.
There may be some memory object release logic error in Netty?


was (Author: JIRAUSER286971):
It is similar as https://issues.apache.org/jira/browse/HBASE-28060

> Region Server crash in our production environment.
> --
>
> Key: HBASE-28437
> URL: https://issues.apache.org/jira/browse/HBASE-28437
> Project: HBase
>  Issue Type: Bug
>Reporter: Rushabh Shah
>Priority: Major
>
> Recently we are seeing a lot of RS crashes in our production environment, 
> creating core dump files and hs_err_pid.log files.
> HBase:  hbase-2.5
> Java: openjdk 1.8
> Copying contents from hs_err_pid.log below:
> {noformat}
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f9fb1415ba2, pid=50172, tid=0x7f92a97ec700
> #
> # JRE version: OpenJDK Runtime Environment (Zulu 8.76.0.18-SA-linux64) 
> (8.0_402-b06) (build 1.8.0_402-b06)
> # Java VM: OpenJDK 64-Bit Server VM (25.402-b06 mixed mode linux-amd64 )
> # Problematic frame:
> # J 19801 C2 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyBufferToStream(Ljava/io/OutputStream;Ljava/nio/ByteBuffer;II)V
>  (75 bytes) @ 0x7f9fb1415ba2 [0x7f9fb14159a0+0x202]
> #
> # Core dump written. Default location: /home/sfdc/core or core.50172
> #
> # If you would like to submit a bug report, please visit:
> #   http://www.azul.com/support/
> #
> ---  T H R E A D  ---
> Current thread (0x7f9fa2d13000):  JavaThread "RS-EventLoopGroup-1-92" 
> daemon [_thread_in_Java, id=54547, 
> stack(0x7f92a96ec000,0x7f92a97ed000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x559869daf000
> Registers:
> RAX=0x7f9dbd8b6460, RBX=0x0008, RCX=0x0005c86b, 
> RDX=0x7f9dbd8b6460
> RSP=0x7f92a97eaf20, RBP=0x0002, RSI=0x7f92d225e970, 
> RDI=0x0069
> R8 =0x55986975f028, R9 =0x0064ffd8, R10=0x005f, 
> R11=0x7f94a778b290
> R12=0x7f9e62855ae8, R13=0x, R14=0x7f9e5a14b1e0, 
> R15=0x7f9fa2d13000
> RIP=0x7f9fb1415ba2, EFLAGS=0x00010216, CSGSFS=0x0033, 
> ERR=0x0004
>   TRAPNO=0x000e
> Top of Stack: (sp=0x7f92a97eaf20)
> 0x7f92a97eaf20:   00690064ff79 7f9dbd8b6460
> 0x7f92a97eaf30:   7f9dbd8b6460 00570003
> 0x7f92a97eaf40:   7f94a778b290 000400010004
> 0x7f92a97eaf50:   0004d090c130 7f9db550
> 0x7f92a97eaf60:   000800040001 7f92a97eaf90
> 0x7f92a97eaf70:   7f92d0908648 0001
> 0x7f92a97eaf80:   0001 005c
> 0x7f92a97eaf90:   7f94ee8078d0 0206
> 0x7f92a97eafa0:   7f9db5545a00 7f9fafb63670
> 0x7f92a97eafb0:   7f9e5a13ed70 00690001
> 0x7f92a97eafc0:   7f93ab8965b8 7f93b9959210
> 0x7f92a97eafd0:   7f9db5545a00 7f9fb04b3e30
> 0x7f92a97eafe0:   7f9e5a13ed70 7f930001
> 0x7f92a97eaff0:   7f93ab8965b8 7f93a8ae3920
> 0x7f92a97eb000:   7f93b9959210 7f94a778b290
> 0x7f92a97eb010:   7f9b60707c20 7f93a8938c28
> 0x7f92a97eb020:   7f94ee8078d0 7f9b60708608
> 0x7f92a97eb030:   7f9b60707bc0 7f9b60707c20
> 0x7f92a97eb040:   0069 7f93ab8965b8
> 0x7f92a97eb050:   7f94a778b290 7f94a778b290
> 0x7f92a97eb060:   0005c80d0005c80c a828a590
> 0x7f92a97eb070:   7f9e5a13ed70 0001270e
> 0x7f92a97eb080:   7f9db5545790 01440022
> 0x7f92a97eb090:   7f95ddc800c0 7f93ab89a6c8
> 0x7f92a97eb0a0:   7f93ae65c270 7f9fb24af990
> 0x7f92a97eb0b0:   7f93ae65c290 7f93ae65c270
> 0x7f92a97eb0c0:   7f9e5a13ed70 7f92ca328528
> 0x7f92a97eb0d0:   7f9e5a13ed98 7f9e5e1e88b0
> 0x7f92a97eb0e0:   7f92ca32d870 7f9e5a13ed98
> 0x7f92a97eb0f0:   7f9e5e1e88b0 7f93b9956288
> 0x7f92a97eb100:   7f9e5a13ed70 7f9fb23c3aac
> 0x7f92a97eb110:   7f9317c9c8d0 7f9b60708608 
> Instructions: (pc=0x7f9fb1415ba2)
> 0x7f9fb1415b82:   44 3b d7 0f 8d 6d fe ff ff 4c 8b 40 10 45 8b ca
> 0x7f9fb1415b92:   44 03 0c 24 c4 c1 f9 7e c3 4d 8b 5b 18 4d 63 c9
> 0x7f9fb1415ba2:   47 0f be 04 08 4d 85 db 0f 84 49 03 00 00 4d 8b
> 0x7f9fb1415bb2:   4b 08 48 b9 10 1c be 10 93 7f 00 00 4c 3b c9 0f 
> Register to memory mapping:
> RAX=0x7f9dbd8b6460 

[jira] [Commented] (HBASE-28437) Region Server crash in our production environment.

2024-04-08 Thread chaijunjie (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834940#comment-17834940
 ] 

chaijunjie commented on HBASE-28437:


It is similar to https://issues.apache.org/jira/browse/HBASE-28060

> Region Server crash in our production environment.
> --
>
> Key: HBASE-28437
> URL: https://issues.apache.org/jira/browse/HBASE-28437
> Project: HBase
>  Issue Type: Bug
>Reporter: Rushabh Shah
>Priority: Major
>
> Recently we are seeing a lot of RS crashes in our production environment, 
> creating core dump files and hs_err_pid.log files.
> HBase:  hbase-2.5
> Java: openjdk 1.8
> Copying contents from hs_err_pid.log below:
> {noformat}
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f9fb1415ba2, pid=50172, tid=0x7f92a97ec700
> #
> # JRE version: OpenJDK Runtime Environment (Zulu 8.76.0.18-SA-linux64) 
> (8.0_402-b06) (build 1.8.0_402-b06)
> # Java VM: OpenJDK 64-Bit Server VM (25.402-b06 mixed mode linux-amd64 )
> # Problematic frame:
> # J 19801 C2 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyBufferToStream(Ljava/io/OutputStream;Ljava/nio/ByteBuffer;II)V
>  (75 bytes) @ 0x7f9fb1415ba2 [0x7f9fb14159a0+0x202]
> #
> # Core dump written. Default location: /home/sfdc/core or core.50172
> #
> # If you would like to submit a bug report, please visit:
> #   http://www.azul.com/support/
> #
> ---  T H R E A D  ---
> Current thread (0x7f9fa2d13000):  JavaThread "RS-EventLoopGroup-1-92" 
> daemon [_thread_in_Java, id=54547, 
> stack(0x7f92a96ec000,0x7f92a97ed000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 
> 0x559869daf000
> Registers:
> RAX=0x7f9dbd8b6460, RBX=0x0008, RCX=0x0005c86b, 
> RDX=0x7f9dbd8b6460
> RSP=0x7f92a97eaf20, RBP=0x0002, RSI=0x7f92d225e970, 
> RDI=0x0069
> R8 =0x55986975f028, R9 =0x0064ffd8, R10=0x005f, 
> R11=0x7f94a778b290
> R12=0x7f9e62855ae8, R13=0x, R14=0x7f9e5a14b1e0, 
> R15=0x7f9fa2d13000
> RIP=0x7f9fb1415ba2, EFLAGS=0x00010216, CSGSFS=0x0033, 
> ERR=0x0004
>   TRAPNO=0x000e
> Top of Stack: (sp=0x7f92a97eaf20)
> 0x7f92a97eaf20:   00690064ff79 7f9dbd8b6460
> 0x7f92a97eaf30:   7f9dbd8b6460 00570003
> 0x7f92a97eaf40:   7f94a778b290 000400010004
> 0x7f92a97eaf50:   0004d090c130 7f9db550
> 0x7f92a97eaf60:   000800040001 7f92a97eaf90
> 0x7f92a97eaf70:   7f92d0908648 0001
> 0x7f92a97eaf80:   0001 005c
> 0x7f92a97eaf90:   7f94ee8078d0 0206
> 0x7f92a97eafa0:   7f9db5545a00 7f9fafb63670
> 0x7f92a97eafb0:   7f9e5a13ed70 00690001
> 0x7f92a97eafc0:   7f93ab8965b8 7f93b9959210
> 0x7f92a97eafd0:   7f9db5545a00 7f9fb04b3e30
> 0x7f92a97eafe0:   7f9e5a13ed70 7f930001
> 0x7f92a97eaff0:   7f93ab8965b8 7f93a8ae3920
> 0x7f92a97eb000:   7f93b9959210 7f94a778b290
> 0x7f92a97eb010:   7f9b60707c20 7f93a8938c28
> 0x7f92a97eb020:   7f94ee8078d0 7f9b60708608
> 0x7f92a97eb030:   7f9b60707bc0 7f9b60707c20
> 0x7f92a97eb040:   0069 7f93ab8965b8
> 0x7f92a97eb050:   7f94a778b290 7f94a778b290
> 0x7f92a97eb060:   0005c80d0005c80c a828a590
> 0x7f92a97eb070:   7f9e5a13ed70 0001270e
> 0x7f92a97eb080:   7f9db5545790 01440022
> 0x7f92a97eb090:   7f95ddc800c0 7f93ab89a6c8
> 0x7f92a97eb0a0:   7f93ae65c270 7f9fb24af990
> 0x7f92a97eb0b0:   7f93ae65c290 7f93ae65c270
> 0x7f92a97eb0c0:   7f9e5a13ed70 7f92ca328528
> 0x7f92a97eb0d0:   7f9e5a13ed98 7f9e5e1e88b0
> 0x7f92a97eb0e0:   7f92ca32d870 7f9e5a13ed98
> 0x7f92a97eb0f0:   7f9e5e1e88b0 7f93b9956288
> 0x7f92a97eb100:   7f9e5a13ed70 7f9fb23c3aac
> 0x7f92a97eb110:   7f9317c9c8d0 7f9b60708608 
> Instructions: (pc=0x7f9fb1415ba2)
> 0x7f9fb1415b82:   44 3b d7 0f 8d 6d fe ff ff 4c 8b 40 10 45 8b ca
> 0x7f9fb1415b92:   44 03 0c 24 c4 c1 f9 7e c3 4d 8b 5b 18 4d 63 c9
> 0x7f9fb1415ba2:   47 0f be 04 08 4d 85 db 0f 84 49 03 00 00 4d 8b
> 0x7f9fb1415bb2:   4b 08 48 b9 10 1c be 10 93 7f 00 00 4c 3b c9 0f 
> Register to memory mapping:
> RAX=0x7f9dbd8b6460 is an oop
> java.nio.DirectByteBuffer 
>  - klass: 'java/nio/DirectByteBuffer'
> RBX=0x0008 is an unknown value
> RCX=0x0005c86b is an unknown value
> RDX=0x7f9dbd8b6460 is an oop
> java.nio.DirectByteBuffer 
>  - klass: 

[jira] [Assigned] (HBASE-28503) Keep entries in draining ZNode when HMaster is configured to reject decommissioned hosts

2024-04-08 Thread Ahmad Alhour (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmad Alhour reassigned HBASE-28503:


Assignee: Ahmad Alhour

> Keep entries in draining ZNode when HMaster is configured to reject 
> decommissioned hosts
> 
>
> Key: HBASE-28503
> URL: https://issues.apache.org/jira/browse/HBASE-28503
> Project: HBase
>  Issue Type: Bug
>Reporter: Ahmad Alhour
>Assignee: Ahmad Alhour
>Priority: Major
>
> Last month we shipped a config feature to allow the HMaster to reject 
> decommissioned hosts (see: 
> [HBASE-28342|https://issues.apache.org/jira/browse/HBASE-28342]). After some 
> testing internally at HubSpot, we discovered that the HMaster loses the entry 
> of a decommissioned host from the draining ZNode when the RegionServer 
> restarts or becomes dead.
> Our proposal for this fix would be to keep entries in the draining ZNode as 
> long as the HMaster is configured to reject decommissioned hosts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28503) Keep entries in draining ZNode when HMaster is configured to reject decommissioned hosts

2024-04-08 Thread Ahmad Alhour (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmad Alhour updated HBASE-28503:
-
Description: 
Last month we shipped a config feature to allow the HMaster to reject 
decommissioned hosts (see: 
[HBASE-28342|https://issues.apache.org/jira/browse/HBASE-28342]). After some 
testing internally at HubSpot, we discovered that the HMaster loses the entry 
of a decommissioned host from the draining ZNode when the RegionServer restarts 
or becomes dead.

Our proposal for this fix would be to keep entries in the draining ZNode as 
long as the HMaster is configured to reject decommissioned hosts.

  was:
Last month we shipped a config feature to allow the HMaster to reject 
decommissioned hosts. After some testing internally at HubSpot, we discovered 
that the HMaster loses the entry of a decommissioned host from the draining 
ZNode when the RegionServer restarts or becomes dead.

Our proposal for this fix would be to keep entries in the draining ZNode as 
long as the HMaster is configured to reject decommissioned hosts.


> Keep entries in draining ZNode when HMaster is configured to reject 
> decommissioned hosts
> 
>
> Key: HBASE-28503
> URL: https://issues.apache.org/jira/browse/HBASE-28503
> Project: HBase
>  Issue Type: Bug
>Reporter: Ahmad Alhour
>Priority: Major
>
> Last month we shipped a config feature to allow the HMaster to reject 
> decommissioned hosts (see: 
> [HBASE-28342|https://issues.apache.org/jira/browse/HBASE-28342]). After some 
> testing internally at HubSpot, we discovered that the HMaster loses the entry 
> of a decommissioned host from the draining ZNode when the RegionServer 
> restarts or becomes dead.
> Our proposal for this fix would be to keep entries in the draining ZNode as 
> long as the HMaster is configured to reject decommissioned hosts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28503) Keep entries in draining ZNode when HMaster is configured to reject decommissioned hosts

2024-04-08 Thread Ahmad Alhour (Jira)
Ahmad Alhour created HBASE-28503:


 Summary: Keep entries in draining ZNode when HMaster is configured 
to reject decommissioned hosts
 Key: HBASE-28503
 URL: https://issues.apache.org/jira/browse/HBASE-28503
 Project: HBase
  Issue Type: Bug
Reporter: Ahmad Alhour


Last month we shipped a config feature to allow the HMaster to reject 
decommissioned hosts. After some testing internally at HubSpot, we discovered 
that the HMaster loses the entry of a decommissioned host from the draining 
ZNode when the RegionServer restarts or becomes dead.

Our proposal for this fix would be to keep entries in the draining ZNode as 
long as the HMaster is configured to reject decommissioned hosts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


jhungund commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1555856302


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java:
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.OptionalLong;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The DataTieringManager class categorizes data into hot data and cold data 
based on the specified
+ * {@link DataTieringType} when DataTiering is enabled. DataTiering is 
disabled by default with
+ * {@link DataTieringType} set to {@link DataTieringType#NONE}. The {@link 
DataTieringType}
+ * determines the logic for distinguishing data into hot or cold. By default, 
all data is considered
+ * as hot.
+ */
+@InterfaceAudience.Private
+public class DataTieringManager {
+  private static final Logger LOG = 
LoggerFactory.getLogger(DataTieringManager.class);
+  public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type";
+  public static final String DATATIERING_HOT_DATA_AGE_KEY =
+"hbase.hstore.datatiering.hot.age.millis";
+  public static final DataTieringType DEFAULT_DATATIERING = 
DataTieringType.NONE;
+  public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 
* 1000; // 7 Days
+  private static DataTieringManager instance;
+  private final Map<String, HRegion> onlineRegions;
+
+  private DataTieringManager(Map<String, HRegion> onlineRegions) {
+this.onlineRegions = onlineRegions;
+  }
+
+  /**
+   * Initializes the DataTieringManager instance with the provided map of 
online regions.
+   * @param onlineRegions A map containing online regions.
+   */
+  public static synchronized void instantiate(Map<String, HRegion> onlineRegions) {
+if (instance == null) {
+  instance = new DataTieringManager(onlineRegions);
+  LOG.info("DataTieringManager instantiated successfully.");
+} else {
+  LOG.warn("DataTieringManager is already instantiated.");
+}
+  }
+
+  /**
+   * Retrieves the instance of DataTieringManager.
+   * @return The instance of DataTieringManager.
+   * @throws IllegalStateException if DataTieringManager has not been 
instantiated.
+   */
+  public static synchronized DataTieringManager getInstance() {
+if (instance == null) {
+  throw new IllegalStateException(
+"DataTieringManager has not been instantiated. Call instantiate() 
first.");
+}
+return instance;
+  }
+
+  /**
+   * Determines whether data tiering is enabled for the given block cache key.
+   * @param key the block cache key
+   * @return {@code true} if data tiering is enabled for the HFile associated 
with the key,
+   * {@code false} otherwise
+   * @throws DataTieringException if there is an error retrieving the HFile 
path or configuration
+   */
+  public boolean isDataTieringEnabled(BlockCacheKey key) throws 
DataTieringException {
+Path hFilePath = key.getFilePath();
+if (hFilePath == null) {
+  throw new DataTieringException("BlockCacheKey Doesn't Contain HFile 
Path");
+}
+return isDataTieringEnabled(hFilePath);
+  }
+
+  /**
+   * Determines whether data tiering is enabled for the given HFile path.
+   * @param hFilePath the path to the HFile
+   * @return {@code true} if data tiering is enabled, {@code false} otherwise
+   * @throws DataTieringException if there is an error retrieving the 
configuration
+   */
+  public boolean isDataTieringEnabled(Path hFilePath) throws 
DataTieringException {
+Configuration configuration = getConfiguration(hFilePath);
+DataTieringType dataTieringType = getDataTieringType(configuration);
+return !dataTieringType.equals(DataTieringType.NONE);
+  }
+
+  /**
+   * Determines whether the data associated with the given block cache key is 
considered hot.
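
The instantiate()-then-getInstance() lifecycle in the diff above can be boiled down to a small stand-alone sketch. The class below is an illustrative stand-in, not HBase's actual API: the map value type is a plain String rather than HRegion so the example is self-contained.

```java
import java.util.Collections;
import java.util.Map;

// Stand-alone sketch of the DataTieringManager singleton lifecycle:
// instantiate() must run once with its dependencies before getInstance().
public class SingletonSketch {
  private static SingletonSketch instance;
  private final Map<String, String> onlineRegions; // stand-in for Map<String, HRegion>

  private SingletonSketch(Map<String, String> onlineRegions) {
    this.onlineRegions = onlineRegions;
  }

  public static synchronized void instantiate(Map<String, String> onlineRegions) {
    if (instance == null) {
      instance = new SingletonSketch(onlineRegions);
    }
    // else: already instantiated; the real class logs a warning here.
  }

  public static synchronized SingletonSketch getInstance() {
    if (instance == null) {
      throw new IllegalStateException("instantiate() must be called first");
    }
    return instance;
  }

  public int regionCount() {
    return onlineRegions.size();
  }

  public static void main(String[] args) {
    boolean failedBeforeInit = false;
    try {
      getInstance(); // called before instantiate(): must throw
    } catch (IllegalStateException e) {
      failedBeforeInit = true;
    }
    instantiate(Collections.singletonMap("r1", "region-1"));
    instantiate(Collections.emptyMap()); // no-op: already instantiated
    System.out.println(failedBeforeInit && getInstance().regionCount() == 1); // prints true
  }
}
```

Synchronizing getInstance() on every call is the simple-and-safe choice mirrored from the patch; a holder-class or volatile double-checked variant could reduce contention if the accessor turned out to be hot.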

Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


vinayakphegde commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1555847930


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java:
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.OptionalLong;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The DataTieringManager class categorizes data into hot data and cold data 
based on the specified
+ * {@link DataTieringType} when DataTiering is enabled. DataTiering is 
disabled by default with
+ * {@link DataTieringType} set to {@link DataTieringType#NONE}. The {@link 
DataTieringType}
+ * determines the logic for distinguishing data into hot or cold. By default, 
all data is considered
+ * as hot.
+ */
+@InterfaceAudience.Private
+public class DataTieringManager {
+  private static final Logger LOG = 
LoggerFactory.getLogger(DataTieringManager.class);
+  public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type";
+  public static final String DATATIERING_HOT_DATA_AGE_KEY =
+"hbase.hstore.datatiering.hot.age.millis";
+  public static final DataTieringType DEFAULT_DATATIERING = 
DataTieringType.NONE;
+  public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 
* 1000; // 7 Days
+  private static DataTieringManager instance;
+  private final Map<String, HRegion> onlineRegions;
+
+  private DataTieringManager(Map<String, HRegion> onlineRegions) {
+this.onlineRegions = onlineRegions;
+  }
+
+  /**
+   * Initializes the DataTieringManager instance with the provided map of 
online regions.
+   * @param onlineRegions A map containing online regions.
+   */
+  public static synchronized void instantiate(Map<String, HRegion> onlineRegions) {
+if (instance == null) {
+  instance = new DataTieringManager(onlineRegions);
+  LOG.info("DataTieringManager instantiated successfully.");
+} else {
+  LOG.warn("DataTieringManager is already instantiated.");
+}
+  }
+
+  /**
+   * Retrieves the instance of DataTieringManager.
+   * @return The instance of DataTieringManager.
+   * @throws IllegalStateException if DataTieringManager has not been 
instantiated.
+   */
+  public static synchronized DataTieringManager getInstance() {
+if (instance == null) {
+  throw new IllegalStateException(
+"DataTieringManager has not been instantiated. Call instantiate() 
first.");
+}
+return instance;
+  }
+
+  /**
+   * Determines whether data tiering is enabled for the given block cache key.
+   * @param key the block cache key
+   * @return {@code true} if data tiering is enabled for the HFile associated 
with the key,
+   * {@code false} otherwise
+   * @throws DataTieringException if there is an error retrieving the HFile 
path or configuration
+   */
+  public boolean isDataTieringEnabled(BlockCacheKey key) throws 
DataTieringException {
+Path hFilePath = key.getFilePath();
+if (hFilePath == null) {
+  throw new DataTieringException("BlockCacheKey Doesn't Contain HFile 
Path");
+}
+return isDataTieringEnabled(hFilePath);
+  }
+
+  /**
+   * Determines whether data tiering is enabled for the given HFile path.
+   * @param hFilePath the path to the HFile
+   * @return {@code true} if data tiering is enabled, {@code false} otherwise
+   * @throws DataTieringException if there is an error retrieving the 
configuration
+   */
+  public boolean isDataTieringEnabled(Path hFilePath) throws 
DataTieringException {
+Configuration configuration = getConfiguration(hFilePath);
+DataTieringType dataTieringType = getDataTieringType(configuration);
+return !dataTieringType.equals(DataTieringType.NONE);
+  }
+
+  /**
+   * Determines whether the data associated with the given block cache key is 
considered 

Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5605:
URL: https://github.com/apache/hbase/pull/5605#issuecomment-2042745968

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 56s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  master passed  |
   | +1 :green_heart: |  spotless  |   1m  8s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   7m 44s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | -1 :x: |  spotless  |   1m  4s |  patch has 63 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 19s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  44m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5605 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 1fd3c9865b8e 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 78 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5605/9/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


jhungund commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1555814598


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java:
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.OptionalLong;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The DataTieringManager class categorizes data into hot data and cold data 
based on the specified
+ * {@link DataTieringType} when DataTiering is enabled. DataTiering is 
disabled by default with
+ * {@link DataTieringType} set to {@link DataTieringType#NONE}. The {@link 
DataTieringType}
+ * determines the logic for distinguishing data into hot or cold. By default, 
all data is considered
+ * as hot.
+ */
+@InterfaceAudience.Private
+public class DataTieringManager {
+  private static final Logger LOG = 
LoggerFactory.getLogger(DataTieringManager.class);
+  public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type";
+  public static final String DATATIERING_HOT_DATA_AGE_KEY =
+"hbase.hstore.datatiering.hot.age.millis";
+  public static final DataTieringType DEFAULT_DATATIERING = 
DataTieringType.NONE;
+  public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 
* 1000; // 7 Days
+  private static DataTieringManager instance;
+  private final Map<String, HRegion> onlineRegions;
+
+  private DataTieringManager(Map<String, HRegion> onlineRegions) {
+this.onlineRegions = onlineRegions;
+  }
+
+  /**
+   * Initializes the DataTieringManager instance with the provided map of 
online regions.
+   * @param onlineRegions A map containing online regions.
+   */
+  public static synchronized void instantiate(Map<String, HRegion> onlineRegions) {
+if (instance == null) {
+  instance = new DataTieringManager(onlineRegions);
+  LOG.info("DataTieringManager instantiated successfully.");
+} else {
+  LOG.warn("DataTieringManager is already instantiated.");
+}
+  }
+
+  /**
+   * Retrieves the instance of DataTieringManager.
+   * @return The instance of DataTieringManager.
+   * @throws IllegalStateException if DataTieringManager has not been 
instantiated.
+   */
+  public static synchronized DataTieringManager getInstance() {
+if (instance == null) {
+  throw new IllegalStateException(
+"DataTieringManager has not been instantiated. Call instantiate() 
first.");
+}
+return instance;
+  }
+
+  /**
+   * Determines whether data tiering is enabled for the given block cache key.
+   * @param key the block cache key
+   * @return {@code true} if data tiering is enabled for the HFile associated 
with the key,
+   * {@code false} otherwise
+   * @throws DataTieringException if there is an error retrieving the HFile 
path or configuration
+   */
+  public boolean isDataTieringEnabled(BlockCacheKey key) throws 
DataTieringException {
+Path hFilePath = key.getFilePath();
+if (hFilePath == null) {
+  throw new DataTieringException("BlockCacheKey Doesn't Contain HFile 
Path");
+}
+return isDataTieringEnabled(hFilePath);
+  }
+
+  /**
+   * Determines whether data tiering is enabled for the given HFile path.
+   * @param hFilePath the path to the HFile
+   * @return {@code true} if data tiering is enabled, {@code false} otherwise
+   * @throws DataTieringException if there is an error retrieving the 
configuration
+   */
+  public boolean isDataTieringEnabled(Path hFilePath) throws 
DataTieringException {
+Configuration configuration = getConfiguration(hFilePath);
+DataTieringType dataTieringType = getDataTieringType(configuration);
+return !dataTieringType.equals(DataTieringType.NONE);
+  }
+
+  /**
+   * Determines whether the data associated with the given block cache key is 
considered hot.

Re: [PR] HBASE-28405 - Fix failed procedure rollback when region was not close… [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5799:
URL: https://github.com/apache/hbase/pull/5799#issuecomment-2042635972

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 55s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 43s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   5m  8s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 40s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m  8s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  30m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5799 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 5d1bdfcbc00f 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 3340d8dd07 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5799/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-08 Thread Charles Connell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Connell updated HBASE-28485:

Description: The zstd documentation 
[recommends|https://facebook.github.io/zstd/zstd_manual.html#Chapter4] re-using 
context objects when possible, because their creation has some expense. They 
can be more cheaply reset than re-created. In {{ZstdDecompressor}} and 
{{{}ZstdCompressor{}}}, we create a new context object for every call to 
{{decompress()}} and {{{}compress(){}}}. In CPU profiles I've taken at my 
company, the constructor of {{ZstdDecompressCtx}} can sometimes represent 
10-25% of the time spent in zstd decompression, which itself is 5-10% of a 
RegionServer's total CPU time. Avoiding this performance penalty won't lead to 
any massive performance boost, but is a nice little win.  (was: The zstd 
documentation recommends re-using context objects when possible, because their 
creation has some expense. They can be more cheaply reset than re-created. In 
{{ZstdDecompressor}} and {{ZstdCompressor}}, we create a new context object for 
every call to {{decompress()}} and {{compress()}}. In CPU profiles I've taken 
at my company, the constructor of {{ZstdDecompressCtx}} can sometimes represent 
10-25% of the time spent in zstd decompression, which itself is 5-10% of a 
RegionServer's total CPU time. Avoiding this performance penalty won't lead to 
any massive performance boost, but is a nice little win.)

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation 
> [recommends|https://facebook.github.io/zstd/zstd_manual.html#Chapter4] 
> re-using context objects when possible, because their creation has some 
> expense. They can be more cheaply reset than re-created. In 
> {{ZstdDecompressor}} and {{{}ZstdCompressor{}}}, we create a new context 
> object for every call to {{decompress()}} and {{{}compress(){}}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.
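
The "keep a long-lived context and reset it instead of constructing a fresh one per call" pattern the ticket describes is not specific to zstd-jni; the JDK's own java.util.zip.Deflater has the same lifecycle, so it can serve as a self-contained stand-in for ZstdCompressCtx here. The ThreadLocal pooling below is one plausible reuse strategy for illustration, not necessarily what the actual patch does.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Sketch of context reuse: one long-lived compression context per thread,
// reset() between calls instead of constructing a new context each time.
public class CtxReuseSketch {
  private static final ThreadLocal<Deflater> CTX =
      ThreadLocal.withInitial(() -> new Deflater(Deflater.BEST_SPEED));

  public static byte[] compress(byte[] input) {
    Deflater d = CTX.get();
    d.reset();                 // cheap: clears state, keeps internal buffers
    d.setInput(input);
    d.finish();
    byte[] buf = new byte[input.length * 2 + 64]; // ample for this sketch
    int n = d.deflate(buf);
    return Arrays.copyOf(buf, n);
  }

  public static byte[] decompress(byte[] compressed, int originalLen) {
    Inflater inf = new Inflater();
    try {
      inf.setInput(compressed);
      byte[] out = new byte[originalLen];
      inf.inflate(out);
      return out;
    } catch (DataFormatException e) {
      throw new RuntimeException(e);
    } finally {
      inf.end();
    }
  }

  public static void main(String[] args) {
    byte[] data = "hello hello hello".getBytes(StandardCharsets.UTF_8);
    byte[] c1 = compress(data); // first call constructs the thread's context
    byte[] c2 = compress(data); // second call only resets it
    System.out.println(Arrays.equals(c1, c2)
        && Arrays.equals(decompress(c2, data.length), data)); // prints true
  }
}
```

The zstd manual's recommendation the ticket links to is the same idea: keep the context alive and let the library reset it between operations, so the constructor cost is amortized across calls instead of paid on every compress()/decompress().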



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-08 Thread Charles Connell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834904#comment-17834904
 ] 

Charles Connell commented on HBASE-28485:
-

[~apurtell] since you are the author of the code I'm changing here, I want to 
make sure you see this ticket.

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation recommends re-using context objects when possible, 
> because their creation has some expense. They can be more cheaply reset than 
> re-created. In {{ZstdDecompressor}} and {{ZstdCompressor}}, we create a new 
> context object for every call to {{decompress()}} and {{compress()}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28502) Backup manifest of full backup contains incomplete table list

2024-04-08 Thread thomassarens (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

thomassarens updated HBASE-28502:
-
Description: 
Noticed that {{BackupManifest#getTableNames}} returns only a single table 
instead of the complete list of tables that were requested for the backup in 
case of a full backup; in case of an incremental backup the manifest table 
list seems complete.

Checking the {{TableBackupClient#addManifest}} method shows why:
 * While looping over the included tables the manifest is stored per table and 
the comment mentions something about storing the manifest with the table 
directory:

{code:java}
// Since we have each table's backup in its own directory structure,
// we'll store its manifest with the table directory.
for (TableName table : backupInfo.getTables()) {
  manifest = new BackupManifest(backupInfo, table);
  ArrayList<BackupImage> ancestors = backupManager.getAncestors(backupInfo, table);
  for (BackupImage image : ancestors) {
    manifest.addDependentImage(image);
  }

  if (type == BackupType.INCREMENTAL) {
    // We'll store the log timestamps for this table only in its manifest.
    Map<TableName, Map<String, Long>> tableTimestampMap = new HashMap<>();
    tableTimestampMap.put(table, backupInfo.getIncrTimestampMap().get(table));
    manifest.setIncrTimestampMap(tableTimestampMap);
    ArrayList<BackupImage> ancestorss = backupManager.getAncestors(backupInfo);
    for (BackupImage image : ancestorss) {
      manifest.addDependentImage(image);
    }
  }
  manifest.store(conf);
}{code}
 * but the manifest path is based on the backup root dir and backup id, so it 
is not on a table directory level: 
{code:java}
Path manifestFilePath = new 
Path(HBackupFileSystem.getBackupPath(backupImage.getRootDir(), 
backupImage.getBackupId()), MANIFEST_FILE_NAME);{code}

 * so each call to {{manifest.store(conf)}} is just overwriting the same 
manifest file
 * for incremental backups the "complete" manifest is stored as well with 
{{manifest.store(conf)}} and using the exact same path, so that explains why it 
is correct for incremental backups:

{code:java}
// For incremental backup, we store a overall manifest in
// /WALs/
// This is used when created the next incremental backup
if (type == BackupType.INCREMENTAL) {
manifest = new BackupManifest(backupInfo);
// set the table region server start and end timestamps for incremental backup
manifest.setIncrTimestampMap(backupInfo.getIncrTimestampMap());
ArrayList<BackupImage> ancestors = backupManager.getAncestors(backupInfo);
for (BackupImage image : ancestors) {
manifest.addDependentImage(image);
}
manifest.store(conf);
}{code}
 * the comment related to the manifest path being 
{{/WALs/}} is incorrect

 
I created a simple test that verifies this issue: [^TestBackupManifest.java]. I have no idea how to fix this, though. Perhaps only the overall manifest file should be stored at the \{{/ }}level, but that goes against the comments here, so I'm not sure.

  was:
Noticed that {{BackupManifest#getTableNames}} returns only a single table instead of the complete list of tables that were requested for the backup in the case of a full backup; in the case of an incremental backup the manifest table list seems complete.

Checking the {{TableBackupClient#addManifest}} method shows why:
 * While looping over the included tables the manifest is stored per table and 
the comment mentions something about storing the manifest with the table 
directory:

{code:java}
// Since we have each table's backup in its own directory structure,
// we'll store its manifest with the table directory.
for (TableName table : backupInfo.getTables()) {
  manifest = new BackupManifest(backupInfo, table);
  ArrayList<BackupImage> ancestors = backupManager.getAncestors(backupInfo, table);
  for (BackupImage image : ancestors) {
    manifest.addDependentImage(image);
  }

  if (type == BackupType.INCREMENTAL) {
    // We'll store the log timestamps for this table only in its manifest.
    Map<TableName, Map<String, Long>> tableTimestampMap = new HashMap<>();
    tableTimestampMap.put(table, backupInfo.getIncrTimestampMap().get(table));
    manifest.setIncrTimestampMap(tableTimestampMap);
    ArrayList<BackupImage> ancestorss = backupManager.getAncestors(backupInfo);
    for (BackupImage image : ancestorss) {
      manifest.addDependentImage(image);
    }
  }
  manifest.store(conf);
}{code}
 
 * but the manifest path is based on the backup root dir and backup id, so it 
is not on a table directory level: 
{code:java}
Path manifestFilePath = new 
Path(HBackupFileSystem.getBackupPath(backupImage.getRootDir(), 
backupImage.getBackupId()), MANIFEST_FILE_NAME);{code}

 * so each call to {{manifest.store(conf)}} is just overwriting the same 
manifest file
 * for incremental backups the "complete" manifest is stored as well with 
{{manifest.store(conf)}} and using the exact same path, so that explains why it 
is correct for incremental backups:

{code:java}
// For incremental backup, we store a overall manifest in
// /WALs/

Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


kabhishek4 commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1555630761


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 40000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 40000);
+
+    // restore
+    conf.setInt(PREFETCH_DELAY, 30000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 30000);
+
+    conf.setInt(PREFETCH_DELAY, 1000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+    // Configure custom delay
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 25000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+
+    HFileContext context = new HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+      .withBlockSize(DATA_BLOCK_SIZE).build();
+    Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+    HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
+    long startTime = System.currentTimeMillis();
+
+    // Wait for 20 seconds, no thread should start prefetch
+    Thread.sleep(20000);
+    assertFalse("Prefetch threads should not be running at this point", reader.prefetchStarted());
+    while (!reader.prefetchStarted()) {
+      assertTrue("Prefetch delay has not been expired yet",
+        getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+    }
+
+    // Prefetch threads started working but not completed yet
+    assertFalse(reader.prefetchComplete());
+
+    // In prefetch executor, we further compute the passed-in delay using variation and a random
+    // multiplier to get an 'effective delay'. Hence, in the test, for a delay of 25000 milli-secs
+    // check that prefetch is started after 20000 milli-secs and prefetch started after that.
+    // However, prefetch should not start after the configured delay.
+    if (reader.prefetchStarted()) {
+      LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+        PrefetchExecutor.getPrefetchDelay());
+      assertTrue("Prefetch should start post configured delay",
+        getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+    }

Review Comment:
   I agree, but this property is not externalised to the user; it can probably be provided as a workaround in such cases. As mentioned above, I am trying out the change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-28502) Backup manifest of full backup contains incomplete table list

2024-04-08 Thread thomassarens (Jira)
thomassarens created HBASE-28502:


 Summary: Backup manifest of full backup contains incomplete table 
list
 Key: HBASE-28502
 URL: https://issues.apache.org/jira/browse/HBASE-28502
 Project: HBase
  Issue Type: Bug
  Components: backuprestore
Affects Versions: 2.6.0, 4.0.0-alpha-1
Reporter: thomassarens
 Attachments: TestBackupManifest.java

Noticed that {{BackupManifest#getTableNames}} returns only a single table instead of the complete list of tables that were requested for the backup in the case of a full backup; in the case of an incremental backup the manifest table list seems complete.

Checking the {{TableBackupClient#addManifest}} method shows why:
 * While looping over the included tables the manifest is stored per table and 
the comment mentions something about storing the manifest with the table 
directory:

{code:java}
// Since we have each table's backup in its own directory structure,
// we'll store its manifest with the table directory.
for (TableName table : backupInfo.getTables()) {
  manifest = new BackupManifest(backupInfo, table);
  ArrayList<BackupImage> ancestors = backupManager.getAncestors(backupInfo, table);
  for (BackupImage image : ancestors) {
    manifest.addDependentImage(image);
  }

  if (type == BackupType.INCREMENTAL) {
    // We'll store the log timestamps for this table only in its manifest.
    Map<TableName, Map<String, Long>> tableTimestampMap = new HashMap<>();
    tableTimestampMap.put(table, backupInfo.getIncrTimestampMap().get(table));
    manifest.setIncrTimestampMap(tableTimestampMap);
    ArrayList<BackupImage> ancestorss = backupManager.getAncestors(backupInfo);
    for (BackupImage image : ancestorss) {
      manifest.addDependentImage(image);
    }
  }
  manifest.store(conf);
}{code}
 
 * but the manifest path is based on the backup root dir and backup id, so it 
is not on a table directory level: 
{code:java}
Path manifestFilePath = new 
Path(HBackupFileSystem.getBackupPath(backupImage.getRootDir(), 
backupImage.getBackupId()), MANIFEST_FILE_NAME);{code}

 * so each call to {{manifest.store(conf)}} is just overwriting the same 
manifest file
 * for incremental backups the "complete" manifest is stored as well with 
{{manifest.store(conf)}} and using the exact same path, so that explains why it 
is correct for incremental backups:

{code:java}
// For incremental backup, we store a overall manifest in
// /WALs/
// This is used when created the next incremental backup
if (type == BackupType.INCREMENTAL) {
manifest = new BackupManifest(backupInfo);
// set the table region server start and end timestamps for incremental backup
manifest.setIncrTimestampMap(backupInfo.getIncrTimestampMap());
ArrayList<BackupImage> ancestors = backupManager.getAncestors(backupInfo);
for (BackupImage image : ancestors) {
manifest.addDependentImage(image);
}
manifest.store(conf);
}{code}

 * the comment related to the manifest path being 
{{/WALs/}} is incorrect

 
I created a simple test that verifies this issue: [^TestBackupManifest.java]. I have no idea how to fix this, though. Perhaps only the overall manifest file should be stored at the {{/ }}level, but that goes against the comments here, so I'm not sure.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


ragarkar commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1555623367


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 40000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 40000);
+
+    // restore
+    conf.setInt(PREFETCH_DELAY, 30000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 30000);
+
+    conf.setInt(PREFETCH_DELAY, 1000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+    // Configure custom delay
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 25000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+
+    HFileContext context = new HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+      .withBlockSize(DATA_BLOCK_SIZE).build();
+    Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+    HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
+    long startTime = System.currentTimeMillis();
+
+    // Wait for 20 seconds, no thread should start prefetch
+    Thread.sleep(20000);
+    assertFalse("Prefetch threads should not be running at this point", reader.prefetchStarted());
+    while (!reader.prefetchStarted()) {
+      assertTrue("Prefetch delay has not been expired yet",
+        getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+    }
+
+    // Prefetch threads started working but not completed yet
+    assertFalse(reader.prefetchComplete());
+
+    // In prefetch executor, we further compute the passed-in delay using variation and a random
+    // multiplier to get an 'effective delay'. Hence, in the test, for a delay of 25000 milli-secs
+    // check that prefetch is started after 20000 milli-secs and prefetch started after that.
+    // However, prefetch should not start after the configured delay.
+    if (reader.prefetchStarted()) {
+      LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+        PrefetchExecutor.getPrefetchDelay());
+      assertTrue("Prefetch should start post configured delay",
+        getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+    }

Review Comment:
   I think this is needed in certain cases where the user expects the exact delay before prefetch starts. With the delay variation, this delay is no longer fixed, which means prefetch can trigger earlier or later depending on the calculated variance. Hence, IMO, this is not a test-only change; it has its own merit. My 2 cents.
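
For illustration, here is a hypothetical model of such an "effective delay": the configured delay d is spread over [d*(1 - v/2), d*(1 + v/2)] by a random factor in [0, 1). The exact formula in PrefetchExecutor may differ; this sketch only shows why the observed delay can land on either side of the configured value, which is what the discussion above is about.

```java
// Hypothetical model of the 'effective delay' computed from a configured
// delay and a variation fraction v. With rand01 drawn uniformly from [0, 1),
// the result ranges over [d*(1 - v/2), d*(1 + v/2)].
public class EffectiveDelaySketch {
  static long effectiveDelay(long delayMillis, float variation, float rand01) {
    return Math.round(delayMillis * (1 - variation / 2.0) + delayMillis * variation * rand01);
  }
}
```

With d = 25000 ms and v = 0.2, the effective delay spans 22500 ms to 27500 ms, so a fixed "exact delay" expectation cannot hold unless the variation is also made configurable (or zero).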



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


kabhishek4 commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1555616907


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 40000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 40000);
+
+    // restore
+    conf.setInt(PREFETCH_DELAY, 30000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 30000);
+
+    conf.setInt(PREFETCH_DELAY, 1000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+    // Configure custom delay
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 25000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+
+    HFileContext context = new HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+      .withBlockSize(DATA_BLOCK_SIZE).build();
+    Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+    HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
+    long startTime = System.currentTimeMillis();
+
+    // Wait for 20 seconds, no thread should start prefetch
+    Thread.sleep(20000);
+    assertFalse("Prefetch threads should not be running at this point", reader.prefetchStarted());
+    while (!reader.prefetchStarted()) {
+      assertTrue("Prefetch delay has not been expired yet",
+        getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+    }
+
+    // Prefetch threads started working but not completed yet
+    assertFalse(reader.prefetchComplete());
+
+    // In prefetch executor, we further compute the passed-in delay using variation and a random
+    // multiplier to get an 'effective delay'. Hence, in the test, for a delay of 25000 milli-secs
+    // check that prefetch is started after 20000 milli-secs and prefetch started after that.
+    // However, prefetch should not start after the configured delay.
+    if (reader.prefetchStarted()) {
+      LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+        PrefetchExecutor.getPrefetchDelay());
+      assertTrue("Prefetch should start post configured delay",
+        getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+    }

Review Comment:
   For our initial requirement, making prefetch delay alone dynamic is sufficient.
   
   prefetchDelayVariation can be updated in loadConfiguration, but it will be useful for testing only, at least for now. Trying it out.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


kabhishek4 commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1555616907


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 40000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 40000);
+
+    // restore
+    conf.setInt(PREFETCH_DELAY, 30000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 30000);
+
+    conf.setInt(PREFETCH_DELAY, 1000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+    // Configure custom delay
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 25000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+
+    HFileContext context = new HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+      .withBlockSize(DATA_BLOCK_SIZE).build();
+    Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+    HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
+    long startTime = System.currentTimeMillis();
+
+    // Wait for 20 seconds, no thread should start prefetch
+    Thread.sleep(20000);
+    assertFalse("Prefetch threads should not be running at this point", reader.prefetchStarted());
+    while (!reader.prefetchStarted()) {
+      assertTrue("Prefetch delay has not been expired yet",
+        getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+    }
+
+    // Prefetch threads started working but not completed yet
+    assertFalse(reader.prefetchComplete());
+
+    // In prefetch executor, we further compute the passed-in delay using variation and a random
+    // multiplier to get an 'effective delay'. Hence, in the test, for a delay of 25000 milli-secs
+    // check that prefetch is started after 20000 milli-secs and prefetch started after that.
+    // However, prefetch should not start after the configured delay.
+    if (reader.prefetchStarted()) {
+      LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+        PrefetchExecutor.getPrefetchDelay());
+      assertTrue("Prefetch should start post configured delay",
+        getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+    }

Review Comment:
   For our initial requirement, making prefetch delay alone dynamic is sufficient.
   
   prefetchDelayVariation can be updated in loadConfiguration, but it will be useful for testing only, at least for now.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


ragarkar commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1555615766


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 40000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 40000);
+
+    // restore
+    conf.setInt(PREFETCH_DELAY, 30000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 30000);
+
+    conf.setInt(PREFETCH_DELAY, 1000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+    // Configure custom delay
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 25000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+
+    HFileContext context = new HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+      .withBlockSize(DATA_BLOCK_SIZE).build();
+    Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+    HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
+    long startTime = System.currentTimeMillis();
+
+    // Wait for 20 seconds, no thread should start prefetch
+    Thread.sleep(20000);
+    assertFalse("Prefetch threads should not be running at this point", reader.prefetchStarted());
+    while (!reader.prefetchStarted()) {
+      assertTrue("Prefetch delay has not been expired yet",
+        getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+    }
+
+    // Prefetch threads started working but not completed yet
+    assertFalse(reader.prefetchComplete());
+
+    // In prefetch executor, we further compute the passed-in delay using variation and a random
+    // multiplier to get an 'effective delay'. Hence, in the test, for a delay of 25000 milli-secs
+    // check that prefetch is started after 20000 milli-secs and prefetch started after that.
+    // However, prefetch should not start after the configured delay.
+    if (reader.prefetchStarted()) {
+      LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+        PrefetchExecutor.getPrefetchDelay());
+      assertTrue("Prefetch should start post configured delay",
+        getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+    }

Review Comment:
   This looks like a good idea.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2042385473

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   2m 38s |  HBASE-28463 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  HBASE-28463 passed  |
   | +1 :green_heart: |  spotless  |   0m 45s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 37s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 39s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 39s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 37s |  hbase-server: The patch 
generated 8 new + 4 unchanged - 0 fixed = 12 total (was 4)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   5m 40s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 43s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  31m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux e517e1c5bde6 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 81 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/4/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-08 Thread via GitHub


wchevreuil commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r167693


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 40000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 40000);
+
+    // restore
+    conf.setInt(PREFETCH_DELAY, 30000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+    assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 30000);
+
+    conf.setInt(PREFETCH_DELAY, 1000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+    // Configure custom delay
+    PrefetchExecutorNotifier prefetchExecutorNotifier = new PrefetchExecutorNotifier(conf);
+    conf.setInt(PREFETCH_DELAY, 25000);
+    prefetchExecutorNotifier.onConfigurationChange(conf);
+
+    HFileContext context = new HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+      .withBlockSize(DATA_BLOCK_SIZE).build();
+    Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+    HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
+    long startTime = System.currentTimeMillis();
+
+    // Wait for 20 seconds, no thread should start prefetch
+    Thread.sleep(20000);
+    assertFalse("Prefetch threads should not be running at this point", reader.prefetchStarted());
+    while (!reader.prefetchStarted()) {
+      assertTrue("Prefetch delay has not been expired yet",
+        getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+    }
+
+    // Prefetch threads started working but not completed yet
+    assertFalse(reader.prefetchComplete());
+
+    // In prefetch executor, we further compute the passed-in delay using variation and a random
+    // multiplier to get an 'effective delay'. Hence, in the test, for a delay of 25000 milli-secs
+    // check that prefetch is started after 20000 milli-secs and prefetch started after that.
+    // However, prefetch should not start after the configured delay.
+    if (reader.prefetchStarted()) {
+      LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+        PrefetchExecutor.getPrefetchDelay());
+      assertTrue("Prefetch should start post configured delay",
+        getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+    }

Review Comment:
   why not also make it dynamic? Couldn't you just also set it in the 
PrefetchExecutor.loadConfiguration?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


vinayakphegde commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2042266090

   > LGTM, can we just address the latest spotless failure?
   
   That's because of the Javadoc in the `TestDataTieringManager` class, where I 
included the structure of the `TestDataTieringManager#hStoreFiles` for better 
code comprehension. What do you think we should do instead?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-08 Thread via GitHub


wchevreuil commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2042252076

   LGTM, can we just address the latest spotless failure?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-28501) Support non-SPNEGO authentication methods in REST java client library

2024-04-08 Thread Istvan Toth (Jira)
Istvan Toth created HBASE-28501:
---

 Summary: Support non-SPNEGO authentication methods in REST java 
client library
 Key: HBASE-28501
 URL: https://issues.apache.org/jira/browse/HBASE-28501
 Project: HBase
  Issue Type: Improvement
  Components: REST
Reporter: Istvan Toth


The current Java client only supports the SPNEGO authentication method.

This does not support the case when an application proxy like Apache Knox 
performs AAA conversion from BASIC/DIGEST to Kerberos authentication.

Add support for BASIC username/password auth to the client.

Generally, the authentication code in the client looks quite backwards: it 
seems that most of the Kerberos / auth cookie code duplicates HttpClient 
functionality. AFAICT, setting HttpClient up (or letting the user set it up), 
and letting it handle authentication by itself, would be a better and more 
generic solution.
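A minimal sketch of that direction, assuming a Java 11+ client (the class and method names here are illustrative, not the actual HBase REST client API): BASIC auth can be delegated entirely to the JDK's own `java.net.http.HttpClient` via an `Authenticator`, instead of hand-rolling header handling.

```java
import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.http.HttpClient;

public class BasicAuthSketch {
  // Builds a client that answers HTTP 401 challenges with BASIC credentials.
  // The user name and password here are placeholders, not a real HBase API.
  static HttpClient newBasicAuthClient(String user, char[] password) {
    return HttpClient.newBuilder()
        .authenticator(new Authenticator() {
          @Override
          protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(user, password);
          }
        })
        .build();
  }

  public static void main(String[] args) {
    HttpClient client = newBasicAuthClient("hbase", "secret".toCharArray());
    // The JDK client now transparently retries challenged requests with
    // an Authorization header; no custom auth code needed in the caller.
    System.out.println(client.authenticator().isPresent());
  }
}
```

The same builder pattern would leave room for plugging in other schemes (or a user-supplied client) without touching the REST wrapper itself.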




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28447) New configuration to override the hfile specific blocksize

2024-04-08 Thread Gourab Taparia (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834845#comment-17834845
 ] 

Gourab Taparia commented on HBASE-28447:


[~apurtell] It will be a while before I can start on this - please feel free to 
pick it up

> New configuration to override the hfile specific blocksize
> --
>
> Key: HBASE-28447
> URL: https://issues.apache.org/jira/browse/HBASE-28447
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gourab Taparia
>Assignee: Gourab Taparia
>Priority: Minor
>
> Right now there is no config attached to the HFile block size by which we can 
> override the default. The default is set to 64 KB in 
> HConstants.DEFAULT_BLOCKSIZE. We need a global config property on 
> hbase-site.xml which can control this value.
> Since the BLOCKSIZE is tracked at the column family level - we will need to 
> respect the CFD value first. Also, configuration settings can be set in the 
> schema, at the column or table level, and will override the relevant values 
> from the site file. Below is the precedence order we can use to get the final 
> blocksize value:
> {code:java}
> ColumnFamilyDescriptor.BLOCKSIZE > schema level site configuration overrides 
> > site configuration > HConstants.DEFAULT_BLOCKSIZE{code}
> PS: There is one related config “hbase.mapreduce.hfileoutputformat.blocksize” 
> however that is specific to map-reduce jobs.
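The precedence chain above boils down to a simple resolver; a sketch follows (names are illustrative, not the actual HBase API, and `null` stands for "unset at that level"):

```java
public class BlockSizeResolver {
  // Mirrors HConstants.DEFAULT_BLOCKSIZE (64 KB).
  static final int DEFAULT_BLOCKSIZE = 64 * 1024;

  // Resolves the effective block size following the proposed precedence:
  // ColumnFamilyDescriptor.BLOCKSIZE > schema-level override > site config > default.
  static int resolve(Integer cfBlockSize, Integer schemaOverride, Integer siteConfig) {
    if (cfBlockSize != null) return cfBlockSize;       // CFD wins if set
    if (schemaOverride != null) return schemaOverride; // then schema-level override
    if (siteConfig != null) return siteConfig;         // then hbase-site.xml
    return DEFAULT_BLOCKSIZE;                          // finally the hard-coded default
  }
}
```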





[jira] [Created] (HBASE-28500) Rest Java client library assumes stateless servers

2024-04-08 Thread Istvan Toth (Jira)
Istvan Toth created HBASE-28500:
---

 Summary: Rest Java client library assumes stateless servers
 Key: HBASE-28500
 URL: https://issues.apache.org/jira/browse/HBASE-28500
 Project: HBase
  Issue Type: Bug
  Components: REST
Reporter: Istvan Toth


The REST Java client library accepts a list of REST servers, and does random 
load balancing between them for each request.
This does not work for scans, which do have state on the REST server instance.





[jira] [Comment Edited] (HBASE-28489) Implement HTTP session support in REST server and client for default auth

2024-04-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834814#comment-17834814
 ] 

Istvan Toth edited comment on HBASE-28489 at 4/8/24 7:37 AM:
-

Which means that the default/BASIC auth cannot be used with HA/LB now. 


was (Author: stoty):
Which means that default/BASIC auth cannot be used with HA/LB now. 

> Implement HTTP session support in REST server and client for default auth
> -
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Commented] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834814#comment-17834814
 ] 

Istvan Toth commented on HBASE-28489:
-

Which means that default/BASIC auth cannot be used with HA/LB now. 

> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Updated] (HBASE-28489) Implement HTTP session support in REST server and client for default auth

2024-04-08 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated HBASE-28489:

Summary: Implement HTTP session support in REST server and client for 
default auth  (was: Implement HTTP session support in REST server and client)

> Implement HTTP session support in REST server and client for default auth
> -
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Reopened] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-08 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reopened HBASE-28489:
-

My assumption that the REST interface is stateless was incorrect.
Scan objects are maintained on the REST server, so sticky sessions are a must 
for any kind of HA/LB solution.

> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible (though it's not 
> really needed for REST)
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Updated] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-08 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated HBASE-28489:

Description: 
The REST server (and java client) currently does not implement sessions.

While it is not necessary for the REST API to work, implementing sessions would 
be a big improvement in throughput and resource usage.

* It would make load balancing with sticky sessions possible
* It would save the overhead of performing authentication for each request

 The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.

  was:
The REST server (and java client) currently does not implement sessions.

While it is not necessary for the REST API to work, implementing sessions would 
be a big improvement in throughput and resource usage.

* It would make load balancing with sticky sessions possible (though it's not 
really needed for REST)
* It would save the overhead of performing authentication for each request

 The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.


> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Created] (HBASE-28499) Use the latest Httpclient/Httpcore 5.x in HBase

2024-04-08 Thread Istvan Toth (Jira)
Istvan Toth created HBASE-28499:
---

 Summary: Use the latest Httpclient/Httpcore 5.x  in HBase
 Key: HBASE-28499
 URL: https://issues.apache.org/jira/browse/HBASE-28499
 Project: HBase
  Issue Type: Improvement
  Components: REST
Reporter: Istvan Toth


HttpClient 4.x is not actively maintained.

We use HttpClient directly in the REST client code, and in the tests for 
several modules.

HttpClient 4.5 is a transitive dependency at least from Hadoop and Thrift, but 
HttpClient 5.x uses a separate Java package, so 4.5 and 5.x should be able to 
co-exist fine.

As of now, HttpClient 4.5 is in maintenance mode:
https://hc.apache.org/status.html






[jira] [Commented] (HBASE-28183) It's impossible to re-enable the quota table if it gets disabled

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834797#comment-17834797
 ] 

Hudson commented on HBASE-28183:


Results for branch branch-2
[build #1027 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> It's impossible to re-enable the quota table if it gets disabled
> 
>
> Key: HBASE-28183
> URL: https://issues.apache.org/jira/browse/HBASE-28183
> Project: HBase
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Assignee: Chandra Sekhar K
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9
>
>
> HMaster.enableTable tries to read the quota table. If you disable the quota 
> table, this fails. So then it's impossible to re-enable it. The only solution 
> I can find is to delete the table at this point, so that it gets recreated at 
> startup, but this results in losing any quotas you had defined.  We should 
> fix enableTable to not check quotas if the table in question is hbase:quota.
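The suggested fix amounts to a guard like the following sketch (class, method, and constant names are illustrative, not the actual HMaster code):

```java
public class EnableTableGuard {
  // HBase's quota system table name.
  static final String QUOTA_TABLE = "hbase:quota";

  // The quota check must be skipped when the table being enabled is the
  // quota table itself; otherwise enableTable reads a disabled table and
  // can never succeed for it.
  static boolean shouldCheckQuota(String tableName) {
    return !QUOTA_TABLE.equals(tableName);
  }
}
```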





[jira] [Commented] (HBASE-28481) Prompting table already exists after failing to create table with many region replications

2024-04-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834796#comment-17834796
 ] 

Hudson commented on HBASE-28481:


Results for branch branch-2
[build #1027 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1027/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Prompting table already exists after failing to create table with many region 
> replications
> --
>
> Key: HBASE-28481
> URL: https://issues.apache.org/jira/browse/HBASE-28481
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.13
> Environment: Centos
>Reporter: guluo
>Assignee: guluo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> Reproduction steps:
> {code:java}
> # Create table with 65537 region replications 
> # we would get the following error; this step is not the problem 
> hbase:005:0> create 't01', 'info', {REGION_REPLICATION => 65537} 
> ERROR: java.lang.IllegalArgumentException: ReplicaId cannot be greater 
> than65535 
> For usage try 'help "create"' 
> Took 0.7590 seconds{code}
> {code:java}
> # list, and find that the table does not exist, as follows 
> hbase:006:0> list TABLE 
> 0 row(s) Took 0.0100 seconds 
> => []{code}
> {code:java}
> # we create this table again in the correct way 
> # we would get a message that this table already exists 
> hbase:007:0> create 't01', 'info' 
> ERROR: Table already exists: t01! 
> For usage try 'help "create"' 
> Took 0.1210 seconds {code}
>  
> Reason:
> In the CreateTableProcedure, we write the table descriptor into the HBase 
> cluster at stage CREATE_TABLE_WRITE_FS_LAYOUT
>  
> {code:java}
> env.getMasterServices().getTableDescriptors().update(tableDescriptor, true); 
> {code}
>  
> and then, we check if the Region Replication Count is legal at stage 
> CREATE_TABLE_ADD_TO_META.
>  
>  
> {code:java}
> newRegions = addTableToMeta(env, tableDescriptor, newRegions);
> // MutableRegionInfo.checkReplicaId 
> private static int checkReplicaId(int regionId) {
>   if (regionId > MAX_REPLICA_ID) {
>     throw new IllegalArgumentException("ReplicaId cannot be greater than" + 
>       MAX_REPLICA_ID);
>   }
>   return regionId;
> }{code}
>  
>  
> So, we cannot create a table with the same name in the correct way after 
> failing to create a table with too many region replications (exceeding 65536), 
> because the table descriptor has already been written to the cluster and there 
> is no rollback.
> So I think we can check the region replication count at stage 
> CREATE_TABLE_PRE_OPERATION to avoid this problem
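The proposed early check could look roughly like this sketch, assuming replica ids range from 0 to 65535 (so at most 65536 replicas); class and method names are illustrative, not the actual procedure code:

```java
public class RegionReplicationCheck {
  // Mirrors MutableRegionInfo.MAX_REPLICA_ID: replica ids are 0..65535.
  static final int MAX_REPLICA_ID = 0xFFFF;

  // Validates REGION_REPLICATION before any filesystem layout is written,
  // so a bad value cannot leave a half-created table descriptor behind.
  static void checkRegionReplication(int regionReplication) {
    if (regionReplication < 1 || regionReplication > MAX_REPLICA_ID + 1) {
      throw new IllegalArgumentException(
          "REGION_REPLICATION must be between 1 and " + (MAX_REPLICA_ID + 1)
              + ", got " + regionReplication);
    }
  }
}
```

Running this at CREATE_TABLE_PRE_OPERATION would fail the procedure before CREATE_TABLE_WRITE_FS_LAYOUT updates the table descriptor, so `create 't01', 'info'` can still succeed afterwards.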





[jira] [Resolved] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-08 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved HBASE-28489.
-
Resolution: Invalid

Nothing to do, all relevant cases work already.

> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible (though it's not 
> really needed for REST)
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Updated] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-08 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated HBASE-28489:

Description: 
The REST server (and java client) currently does not implement sessions.

While it is not necessary for the REST API to work, implementing sessions would 
be a big improvement in throughput and resource usage.

* It would make load balancing with sticky sessions possible (though it's not 
really needed for REST)
* It would save the overhead of performing authentication for each request

 The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.

  was:
The REST server (and java client) currently does not implement sessions.

While it is not necessary for the REST API to work, implementing sessions would 
be a big improvement in throughput and resource usage.

* It would make load balancing with sticky sessions possible
* It would save the overhead of performing authentication for each request

 The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.


> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible (though it's not 
> really needed for REST)
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.





[jira] [Commented] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834794#comment-17834794
 ] 

Istvan Toth commented on HBASE-28489:
-

This works out of the box for SPNEGO.
It doesn't work for BASIC/simple.

The Knox BASIC->Kerberos auth translation case should also be good, as Knox 
authenticates itself using SPNEGO, and is expected to forward the cookie to the 
client (the same works for Avatica).

The only case where a cookie is not sent is when the authentication type is 
undefined.

We COULD define a handler for that, and set the cookie, but I cannot think of a 
use case where that would be needed.

> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and java client) currently does not implement sessions.
> While it is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.


