Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]

2024-04-05 Thread via GitHub


Apache9 commented on PR #5770:
URL: https://github.com/apache/hbase/pull/5770#issuecomment-2040980945

   Let me summarize a bit.
   
   In the end, the discussion comes down to the scheme part of the URI.
   
   @ndimiduk felt the second part of the scheme should name the communication 
protocol used when connecting to the hbase cluster, so using `zk` or `rpc` is a 
bit strange, since that only describes how we obtain the connection registry.
   Others thought it is fine for the second part to be the connection registry 
type, since the scheme part tells consumers how to parse the rest of the URI; 
specifying the connection registry through a query parameter therefore seems 
incorrect.
   
   I prefer `hbase+zk` and `hbase+rpc`, especially since Phoenix already uses 
something like this; aligning the two projects is good for our users.
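   
   For illustration only, the two URL styles being discussed could look roughly 
like this (host names and ports are made up; the scheme strings are the 
proposal above, not a released format):
   
       hbase+zk://zk1:2181,zk2:2181,zk3:2181      (connection registry backed by the ZooKeeper quorum)
       hbase+rpc://server1:16020,server2:16020    (connection registry bootstrapped via direct RPC to the listed nodes)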
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]

2024-04-05 Thread via GitHub


Apache9 commented on code in PR #5770:
URL: https://github.com/apache/hbase/pull/5770#discussion_r1554530171


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcConnectionRegistryCreator.java:
##
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.net.URI;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Connection registry creator implementation for creating {@link RpcConnectionRegistry}.
+ */
+@InterfaceAudience.Private
+public class RpcConnectionRegistryCreator implements ConnectionRegistryCreator {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RpcConnectionRegistryCreator.class);
+
+  @Override
+  public ConnectionRegistry create(URI uri, Configuration conf, User user) throws IOException {
+    assert protocol().equals(uri.getScheme());
+    LOG.debug("connect to hbase cluster with rpc bootstrap servers='{}'", uri.getAuthority());
+    Configuration c = new Configuration(conf);
+    c.set(RpcConnectionRegistry.BOOTSTRAP_NODES, uri.getAuthority());
+    return new RpcConnectionRegistry(c, user);
+  }
+
+  @Override
+  public String protocol() {
+    return "hbase+rpc";

Review Comment:
   > One other issue we run into with Phoenix quite a bit is that the cluster 
   > configurations still have to match in several aspects, like timeouts and 
   > TLS/SASL settings; otherwise the client either cannot connect at all, or 
   > experiences errors due to timeout / buffer size mismatches.
   > 
   > I think that some of that may also be a problem when configuring 
   > replication.
   
   For replication in hbase, there is a configuration map in the peer 
configuration, so we can add configurations specific to connecting to the 
peer cluster.
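   
   As a rough sketch (not code from this PR), the per-peer configuration map 
can carry overrides that only apply when connecting to that peer; the peer id, 
cluster key, and override keys below are just examples:
   
       // Illustrative: attach connection overrides to a replication peer via its config map.
       ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
         .setClusterKey("zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase")
         // Overrides applied only when connecting to this peer cluster (example keys).
         .putConfiguration("hbase.rpc.timeout", "30000")
         .putConfiguration("hbase.client.operation.timeout", "60000")
         .build();
       admin.addReplicationPeer("peer_b", peerConfig);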



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28366) Mis-order of SCP and regionServerReport results into region inconsistencies

2024-04-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834487#comment-17834487
 ] 

Hudson commented on HBASE-28366:


Results for branch branch-2.4
[build #715 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/715/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/715/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/715/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/715/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/715/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mis-order of SCP and regionServerReport results into region inconsistencies
> ---
>
> Key: HBASE-28366
> URL: https://issues.apache.org/jira/browse/HBASE-28366
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> If the regionserver is online but its rs ephemeral node gets deleted in 
> zookeeper due to a network issue, the active master schedules an SCP. However, 
> since the regionserver is still alive, it can still send regionServerReport to 
> the active master. When the SCP assigns regions that were previously hosted on 
> the old regionserver (which is still alive) to other regionservers, the old rs 
> can continue to send regionServerReport to the active master.
> Eventually this results in region inconsistencies because a region is alive 
> on two regionservers at the same time (though this is a temporary state, since 
> the rs will be aborted soon). While the old regionserver may have zookeeper 
> connectivity issues, it can still make rpc calls to the active master.
> Logs:
> SCP:
> {code:java}
> 2024-01-29 16:50:33,956 INFO [RegionServerTracker-0] 
> assignment.AssignmentManager - Scheduled ServerCrashProcedure pid=9812440 for 
> server1-114.xyz,61020,1706541866103 (carryingMeta=false) 
> server1-114.xyz,61020,1706541866103/CRASHED/regionCount=364/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5d5fc31[Write
>  locks = 1, Read locks = 0], oldState=ONLINE.
> 2024-01-29 16:50:33,956 DEBUG [RegionServerTracker-0] 
> procedure2.ProcedureExecutor - Stored pid=9812440, 
> state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure 
> server1-114.xyz,61020,1706541866103, splitWal=true, meta=false
> 2024-01-29 16:50:33,973 INFO [PEWorker-36] procedure.ServerCrashProcedure - 
> Splitting WALs pid=9812440, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, 
> locked=true; ServerCrashProcedure server1-114.xyz,61020,1706541866103, 
> splitWal=true, meta=false, isMeta: false
>  {code}
> As part of SCP, d743ace9f70d55f55ba1ecc6dc49a5cb was assigned to another 
> server:
>  
> {code:java}
> 2024-01-29 16:50:42,656 INFO [PEWorker-24] procedure.MasterProcedureScheduler 
> - Took xlock for pid=9818494, ppid=9812440, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
> TransitRegionStateProcedure 
> table=PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA, 
> region=d743ace9f70d55f55ba1ecc6dc49a5cb, ASSIGN
> 2024-01-29 16:50:43,106 INFO [PEWorker-23] assignment.RegionStateStore - 
> pid=9818494 updating hbase:meta row=d743ace9f70d55f55ba1ecc6dc49a5cb, 
> regionState=OPEN, repBarrier=12867482, openSeqNum=12867482, 
> regionLocation=server1-65.xyz,61020,1706165574050
>  {code}
>  
> rs abort, after ~5 min:
> {code:java}
> 2024-01-29 16:54:27,235 ERROR [regionserver/server1-114:61020] 
> regionserver.HRegionServer - * ABORTING region server 
> server1-114.xyz,61020,1706541866103: Unexpected exception handling getData 
> *
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/master
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>     at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1229)
>     at 
> 

[jira] [Commented] (HBASE-28458) BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully cached

2024-04-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834486#comment-17834486
 ] 

Hudson commented on HBASE-28458:


Results for branch branch-2.6
[build #89 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully 
> cached
> ---
>
> Key: HBASE-28458
> URL: https://issues.apache.org/jira/browse/HBASE-28458
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0, 4.0.0-alpha-1, 2.7.0
>
>
> Noticed that 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning was 
> flaky, failing whenever the block eviction happened while prefetch was still 
> ongoing.
> In the test, we pass an instance of BucketCache directly to the cache config, 
> so the test is actually placing both data and meta blocks in the bucket 
> cache. So sometimes, the test calls BucketCache.notifyFileCachingCompleted 
> after it has already evicted two blocks.
> Inside BucketCache.notifyFileCachingCompleted, we iterate through the 
> backingMap entry set, counting the number of blocks for the given file. Then, 
> to decide whether the file is fully cached or not, we do the following 
> validation:
> {noformat}
> if (dataBlockCount == count.getValue() || totalBlockCount == 
> count.getValue()) {
>   LOG.debug("File {} has now been fully cached.", fileName);
>   fileCacheCompleted(fileName, size);
> }  {noformat}
> But the test generates 57 total blocks, 55 data and 2 meta blocks. It evicts 
> two blocks and asserts that the file hasn't been considered fully cached. 
> When these evictions happen while prefetch is still running, we'll pass that 
> check, as the number of blocks for the file in the backingMap would still 
> be 55, which is what we pass as dataBlockCount.
> As BucketCache is intended for storing data blocks only, I believe we should 
> make sure BucketCache.notifyFileCachingCompleted only accounts for data 
> blocks. Also, 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning should 
> be updated to consistently reproduce the eviction concurrent with the prefetch.
>  
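
For reference, a rough sketch of counting only DATA blocks when deciding 
whether a file is fully cached (this is not the actual BucketCache code; it 
assumes BlockCacheKey exposes its block type, and reuses the variable names 
from the snippet above):

{code:java}
int dataBlocksInCache = 0;
for (BlockCacheKey key : backingMap.keySet()) {
  // Only count DATA blocks for this file; meta/index blocks are ignored.
  if (key.getHfileName().equals(fileName) && key.getBlockType().isData()) {
    dataBlocksInCache++;
  }
}
if (dataBlocksInCache == dataBlockCount) {
  LOG.debug("File {} has now been fully cached.", fileName);
  fileCacheCompleted(fileName, size);
}
{code}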



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28366) Mis-order of SCP and regionServerReport results into region inconsistencies

2024-04-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834485#comment-17834485
 ] 

Hudson commented on HBASE-28366:


Results for branch branch-2.6
[build #89 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/89/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mis-order of SCP and regionServerReport results into region inconsistencies
> ---
>
> Key: HBASE-28366
> URL: https://issues.apache.org/jira/browse/HBASE-28366
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> If the regionserver is online but its rs ephemeral node gets deleted in 
> zookeeper due to a network issue, the active master schedules an SCP. However, 
> since the regionserver is still alive, it can still send regionServerReport to 
> the active master. When the SCP assigns regions that were previously hosted on 
> the old regionserver (which is still alive) to other regionservers, the old rs 
> can continue to send regionServerReport to the active master.
> Eventually this results in region inconsistencies because a region is alive 
> on two regionservers at the same time (though this is a temporary state, since 
> the rs will be aborted soon). While the old regionserver may have zookeeper 
> connectivity issues, it can still make rpc calls to the active master.
> Logs:
> SCP:
> {code:java}
> 2024-01-29 16:50:33,956 INFO [RegionServerTracker-0] 
> assignment.AssignmentManager - Scheduled ServerCrashProcedure pid=9812440 for 
> server1-114.xyz,61020,1706541866103 (carryingMeta=false) 
> server1-114.xyz,61020,1706541866103/CRASHED/regionCount=364/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5d5fc31[Write
>  locks = 1, Read locks = 0], oldState=ONLINE.
> 2024-01-29 16:50:33,956 DEBUG [RegionServerTracker-0] 
> procedure2.ProcedureExecutor - Stored pid=9812440, 
> state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure 
> server1-114.xyz,61020,1706541866103, splitWal=true, meta=false
> 2024-01-29 16:50:33,973 INFO [PEWorker-36] procedure.ServerCrashProcedure - 
> Splitting WALs pid=9812440, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, 
> locked=true; ServerCrashProcedure server1-114.xyz,61020,1706541866103, 
> splitWal=true, meta=false, isMeta: false
>  {code}
> As part of SCP, d743ace9f70d55f55ba1ecc6dc49a5cb was assigned to another 
> server:
>  
> {code:java}
> 2024-01-29 16:50:42,656 INFO [PEWorker-24] procedure.MasterProcedureScheduler 
> - Took xlock for pid=9818494, ppid=9812440, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
> TransitRegionStateProcedure 
> table=PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA, 
> region=d743ace9f70d55f55ba1ecc6dc49a5cb, ASSIGN
> 2024-01-29 16:50:43,106 INFO [PEWorker-23] assignment.RegionStateStore - 
> pid=9818494 updating hbase:meta row=d743ace9f70d55f55ba1ecc6dc49a5cb, 
> regionState=OPEN, repBarrier=12867482, openSeqNum=12867482, 
> regionLocation=server1-65.xyz,61020,1706165574050
>  {code}
>  
> rs abort, after ~5 min:
> {code:java}
> 2024-01-29 16:54:27,235 ERROR [regionserver/server1-114:61020] 
> regionserver.HRegionServer - * ABORTING region server 
> server1-114.xyz,61020,1706541866103: Unexpected exception handling getData 
> *
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/master
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>     at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1229)
>     at 
> 

Re: [PR] HBASE-28492 [hbase-thirdparty] Bump dependency versions before releasing [hbase-thirdparty]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #115:
URL: https://github.com/apache/hbase-thirdparty/pull/115#issuecomment-2040901500

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 17s |  root in master failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in master failed.  |
   | -1 :x: |  javadoc  |   0m 17s |  root in master failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 17s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 17s |  root in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  0s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 17s |  root in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 17s |  root in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 17s |  ASF License check generated no 
output?  |
   |  |   |   3m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/115 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile |
   | uname | Linux a5767681b302 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | master / 726f60d |
   | Default Java | Oracle Corporation-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/branch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/branch-javadoc-root.txt
 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/patch-javadoc-root.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/artifact/yetus-precommit-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/testReport/
 |
   | Max. process+thread count | 9 (vs. ulimit of 1000) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/2/console 
|
   | versions | git=2.20.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28491 Bump netty to 4.1.108.Final for addressing CVE-2024-29025 [hbase-thirdparty]

2024-04-05 Thread via GitHub


Apache9 commented on PR #114:
URL: https://github.com/apache/hbase-thirdparty/pull/114#issuecomment-2040900449

   Seems something is wrong with jenkins...
   
   All mvn-related commands failed without producing any output...
   
   Will dig into it later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28492 [hbase-thirdparty] Bump dependency versions before releasing [hbase-thirdparty]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #115:
URL: https://github.com/apache/hbase-thirdparty/pull/115#issuecomment-2040896396

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 17s |  root in master failed.  |
   | -1 :x: |  compile  |   0m 18s |  root in master failed.  |
   | -1 :x: |  javadoc  |   0m 17s |  root in master failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 18s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 18s |  root in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 18s |  root in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 18s |  root in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 17s |  ASF License check generated no 
output?  |
   |  |   |   4m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/115 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile |
   | uname | Linux 07fe80665c68 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | master / 726f60d |
   | Default Java | Oracle Corporation-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/branch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/branch-javadoc-root.txt
 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/patch-javadoc-root.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/artifact/yetus-precommit-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/testReport/
 |
   | Max. process+thread count | 27 (vs. ulimit of 1000) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-115/1/console 
|
   | versions | git=2.20.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28491 Bump netty to 4.1.108.Final for addressing CVE-2024-29025 [hbase-thirdparty]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #114:
URL: https://github.com/apache/hbase-thirdparty/pull/114#issuecomment-2040892694

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 18s |  root in master failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in master failed.  |
   | -1 :x: |  javadoc  |   0m 17s |  root in master failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 17s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 17s |  root in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  0s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 18s |  root in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 17s |  root in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 17s |  ASF License check generated no 
output?  |
   |  |   |   3m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/114 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile |
   | uname | Linux 142ae647d488 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | master / 726f60d |
   | Default Java | Oracle Corporation-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/branch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/branch-javadoc-root.txt
 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/patch-javadoc-root.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/artifact/yetus-precommit-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/testReport/
 |
   | Max. process+thread count | 9 (vs. ulimit of 1000) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/2/console 
|
   | versions | git=2.20.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5798:
URL: https://github.com/apache/hbase/pull/5798#issuecomment-2040825283

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 48s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.6 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  branch-2.6 passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  branch-2.6 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 47s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  branch-2.6 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 45s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 329m 57s |  root in the patch failed.  |
   |  |   | 357m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5798 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux df413ff29e1e 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.6 / a56126c276 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/testReport/
 |
   | Max. process+thread count | 4874 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2040824093

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 45s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  HBASE-28463 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 248m 38s |  hbase-server in the patch passed.  
|
   |  |   | 271m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ef1ff8a4a597 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 
13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/testReport/
 |
   | Max. process+thread count | 5053 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2040820860

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  HBASE-28463 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 11s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 240m 27s |  hbase-server in the patch passed.  
|
   |  |   | 263m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a8b35c1b787d 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/testReport/
 |
   | Max. process+thread count | 4671 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5798:
URL: https://github.com/apache/hbase/pull/5798#issuecomment-2040733859

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.6 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  1s |  branch-2.6 passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  branch-2.6 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 41s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  branch-2.6 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 44s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 228m  6s |  root in the patch failed.  |
   |  |   | 258m 35s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5798 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux aa3bda206899 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.6 / a56126c276 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/testReport/
 |
   | Max. process+thread count | 5025 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28447) New configuration to override the hfile specific blocksize

2024-04-05 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834450#comment-17834450
 ] 

Andrew Kyle Purtell commented on HBASE-28447:
-

[~gourab.taparia] Are you planning to open a PR for this? 

> New configuration to override the hfile specific blocksize
> --
>
> Key: HBASE-28447
> URL: https://issues.apache.org/jira/browse/HBASE-28447
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gourab Taparia
>Assignee: Gourab Taparia
>Priority: Minor
>
> Right now there is no config by which we can override the default HFile block 
> size. The default is set to 64 KB in HConstants.DEFAULT_BLOCKSIZE. We need a 
> global config property in hbase-site.xml which can control this value.
> Since BLOCKSIZE is tracked at the column family level, we will need to 
> respect the CFD value first. Also, configuration settings are something that 
> can be set in schema, at the column or table level, and will override the 
> relevant values from the site file. Below is the precedence order we can 
> use to get the final blocksize value:
> {code:java}
> ColumnFamilyDescriptor.BLOCKSIZE > schema level site configuration overrides 
> > site configuration > HConstants.DEFAULT_BLOCKSIZE{code}
> PS: There is one related config “hbase.mapreduce.hfileoutputformat.blocksize” 
> however that is specific to map-reduce jobs.
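
For context, the block size can already be overridden per column family in the 
table schema; a minimal sketch (table and family names are illustrative, and 
the proposed site-wide property does not exist yet):

{code:java}
TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
  .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
    // Overrides HConstants.DEFAULT_BLOCKSIZE (64 KB) for this family only.
    .setBlocksize(16 * 1024)
    .build())
  .build();
admin.createTable(td);
{code}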



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28184) Tailing the WAL is very slow if there are multiple peers.

2024-04-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HBASE-28184:
---
Labels: pull-request-available  (was: )

> Tailing the WAL is very slow if there are multiple peers.
> -
>
> Key: HBASE-28184
> URL: https://issues.apache.org/jira/browse/HBASE-28184
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7
>
>
> Noticed this in one of our production clusters, which has 4 peers.
> Due to a sudden ingestion of data, the size of the log queue increased to a 
> peak of 506. We have configured the log roll size to 256 MB. Most of the edits 
> in the WAL were from a table for which replication is disabled. 
> So all the ReplicationSourceWALReader threads had to do was replay the WAL and 
> NOT replicate the edits. Still it took 12 hours to drain the queue.
> Took a few jstacks and found that ReplicationSourceWALReader was waiting to 
> acquire rollWriterLock 
> [here|https://github.com/apache/hbase/blob/branch-2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java#L1231]
> {noformat}
> "regionserver/,1" #1036 daemon prio=5 os_prio=0 tid=0x7f44b374e800 
> nid=0xbd7f waiting on condition [0x7f37b4d19000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x7f3897a3e150> (a 
> java.util.concurrent.locks.ReentrantLock$FairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:837)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:872)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1202)
> at 
> java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:228)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.getLogFileSizeIfBeingWritten(AbstractFSWAL.java:1102)
> at 
> org.apache.hadoop.hbase.wal.WALProvider.lambda$null$0(WALProvider.java:128)
> at 
> org.apache.hadoop.hbase.wal.WALProvider$$Lambda$177/1119730685.apply(Unknown 
> Source)
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at 
> java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1361)
> at 
> java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
> at 
> java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:499)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:486)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> at 
> java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.findAny(ReferencePipeline.java:536)
> at 
> org.apache.hadoop.hbase.wal.WALProvider.lambda$getWALFileLengthProvider$2(WALProvider.java:129)
> at 
> org.apache.hadoop.hbase.wal.WALProvider$$Lambda$140/1246380717.getLogFileSizeIfBeingWritten(Unknown
>  Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:260)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:172)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:101)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:222)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157)
> {noformat}
>  All the peers will contend for this lock during every batch read.
> Look at the code snippet below. We guard this section with rollWriterLock 
> when we are replicating the active WAL file. But in our case we are NOT 
> replicating the active WAL file, yet we still acquire this lock only to 
> return OptionalLong.empty();
> {noformat}
>   /**
>* if the given {@code path} is being written currently, then return its 
> length.
>* 
>* This is used by replication to prevent replicating 

Re: [PR] HBASE-28184 Addendum PR [hbase]

2024-04-05 Thread via GitHub


shahrs87 commented on code in PR #5521:
URL: https://github.com/apache/hbase/pull/5521#discussion_r1554314183


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java:
##
@@ -259,10 +259,11 @@ private boolean readNextEntryAndRecordReaderPosition() 
throws IOException {
 Entry readEntry = reader.next();
 long readerPos = reader.getPosition();
 OptionalLong fileLength;
-if (logQueue.getQueueSize(walGroupId) > 1) {
+if (logQueue.getQueueSize(walGroupId) > 2) {

Review Comment:
   @Apache9  Can you please take a look at my previous comment? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28366) Mis-order of SCP and regionServerReport results into region inconsistencies

2024-04-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834440#comment-17834440
 ] 

Hudson commented on HBASE-28366:


Results for branch branch-3
[build #178 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/178/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/178/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/178/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/178/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mis-order of SCP and regionServerReport results into region inconsistencies
> ---
>
> Key: HBASE-28366
> URL: https://issues.apache.org/jira/browse/HBASE-28366
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> If the regionserver is online but its rs ephemeral node gets deleted in 
> zookeeper due to a network issue, the active master schedules an SCP. However, 
> since the regionserver is still alive, it can still send regionServerReport to 
> the active master. When the SCP assigns regions that were previously hosted on 
> the old regionserver (which is still alive) to other regionservers, the old rs 
> can continue to send regionServerReport to the active master.
> Eventually this results in region inconsistencies because a region is alive 
> on two regionservers at the same time (though this is a temporary state, since 
> the rs will be aborted soon). While the old regionserver may have zookeeper 
> connectivity issues, it can still make rpc calls to the active master.
> Logs:
> SCP:
> {code:java}
> 2024-01-29 16:50:33,956 INFO [RegionServerTracker-0] 
> assignment.AssignmentManager - Scheduled ServerCrashProcedure pid=9812440 for 
> server1-114.xyz,61020,1706541866103 (carryingMeta=false) 
> server1-114.xyz,61020,1706541866103/CRASHED/regionCount=364/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5d5fc31[Write
>  locks = 1, Read locks = 0], oldState=ONLINE.
> 2024-01-29 16:50:33,956 DEBUG [RegionServerTracker-0] 
> procedure2.ProcedureExecutor - Stored pid=9812440, 
> state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure 
> server1-114.xyz,61020,1706541866103, splitWal=true, meta=false
> 2024-01-29 16:50:33,973 INFO [PEWorker-36] procedure.ServerCrashProcedure - 
> Splitting WALs pid=9812440, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, 
> locked=true; ServerCrashProcedure server1-114.xyz,61020,1706541866103, 
> splitWal=true, meta=false, isMeta: false
>  {code}
> As part of SCP, d743ace9f70d55f55ba1ecc6dc49a5cb was assigned to another 
> server:
>  
> {code:java}
> 2024-01-29 16:50:42,656 INFO [PEWorker-24] procedure.MasterProcedureScheduler 
> - Took xlock for pid=9818494, ppid=9812440, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
> TransitRegionStateProcedure 
> table=PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA, 
> region=d743ace9f70d55f55ba1ecc6dc49a5cb, ASSIGN
> 2024-01-29 16:50:43,106 INFO [PEWorker-23] assignment.RegionStateStore - 
> pid=9818494 updating hbase:meta row=d743ace9f70d55f55ba1ecc6dc49a5cb, 
> regionState=OPEN, repBarrier=12867482, openSeqNum=12867482, 
> regionLocation=server1-65.xyz,61020,1706165574050
>  {code}
>  
> rs abort, after ~5 min:
> {code:java}
> 2024-01-29 16:54:27,235 ERROR [regionserver/server1-114:61020] 
> regionserver.HRegionServer - * ABORTING region server 
> server1-114.xyz,61020,1706541866103: Unexpected exception handling getData 
> *
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/master
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>     at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1229)
>     at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:414)
>     at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:403)
>     at 
> 

Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2040590074

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   2m 48s |  HBASE-28463 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  HBASE-28463 passed  |
   | +1 :green_heart: |  spotless  |   0m 49s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 44s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 37s |  hbase-server: The patch 
generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 48s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | -1 :x: |  spotless  |   0m 50s |  patch has 46 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   2m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  35m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 7f8b257c1b65 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 81 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28366) Mis-order of SCP and regionServerReport results into region inconsistencies

2024-04-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834429#comment-17834429
 ] 

Hudson commented on HBASE-28366:


Results for branch branch-2.5
[build #505 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/505/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/505/General_20Nightly_20Build_20Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/505/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/505/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/505/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mis-order of SCP and regionServerReport results into region inconsistencies
> ---
>
> Key: HBASE-28366
> URL: https://issues.apache.org/jira/browse/HBASE-28366
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> If the regionserver is online but its rs ephemeral node gets deleted in
> zookeeper due to a network issue, the active master schedules the SCP. However,
> if the regionserver is alive, it can still send regionServerReport to the
> active master. In the case where the SCP assigns regions previously hosted on
> the old regionserver (which is still alive) to other regionservers, the old rs
> can continue to send regionServerReport to the active master.
> Eventually this results in region inconsistencies because the region is alive
> on two regionservers at the same time (though it's a temporary state, because
> the rs will be aborted soon). While the old regionserver can have zookeeper
> connectivity issues, it can still make rpc calls to the active master.
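
A hypothetical sketch (not the actual HBASE-28366 patch) of the kind of master-side guard that closes this race: once a server has been expired and an SCP scheduled for it, further regionServerReport calls from that server are rejected instead of being applied to region state. Class and method names below are made up for illustration.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only -- not HBase code. A report from an already-expired server
// must not re-register its regions, otherwise the same region looks alive on
// two servers until the stale regionserver finally aborts.
final class ExpiredServerReportGuard {
  private final Set<String> expiredServers = ConcurrentHashMap.newKeySet();

  /** Invoked when the master expires a server and schedules its ServerCrashProcedure. */
  void onServerExpired(String serverName) {
    expiredServers.add(serverName);
  }

  /** Invoked at the start of regionServerReport handling. */
  void checkReportAllowed(String serverName) {
    if (expiredServers.contains(serverName)) {
      // The real master would answer with something like YouAreDeadException.
      throw new IllegalStateException(serverName + " has been expired; report rejected");
    }
  }
}
{code}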
> Logs:
> SCP:
> {code:java}
> 2024-01-29 16:50:33,956 INFO [RegionServerTracker-0] 
> assignment.AssignmentManager - Scheduled ServerCrashProcedure pid=9812440 for 
> server1-114.xyz,61020,1706541866103 (carryingMeta=false) 
> server1-114.xyz,61020,1706541866103/CRASHED/regionCount=364/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5d5fc31[Write
>  locks = 1, Read locks = 0], oldState=ONLINE.
> 2024-01-29 16:50:33,956 DEBUG [RegionServerTracker-0] 
> procedure2.ProcedureExecutor - Stored pid=9812440, 
> state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure 
> server1-114.xyz,61020,1706541866103, splitWal=true, meta=false
> 2024-01-29 16:50:33,973 INFO [PEWorker-36] procedure.ServerCrashProcedure - 
> Splitting WALs pid=9812440, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, 
> locked=true; ServerCrashProcedure server1-114.xyz,61020,1706541866103, 
> splitWal=true, meta=false, isMeta: false
>  {code}
> As part of SCP, d743ace9f70d55f55ba1ecc6dc49a5cb was assigned to another 
> server:
>  
> {code:java}
> 2024-01-29 16:50:42,656 INFO [PEWorker-24] procedure.MasterProcedureScheduler 
> - Took xlock for pid=9818494, ppid=9812440, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
> TransitRegionStateProcedure 
> table=PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA, 
> region=d743ace9f70d55f55ba1ecc6dc49a5cb, ASSIGN
> 2024-01-29 16:50:43,106 INFO [PEWorker-23] assignment.RegionStateStore - 
> pid=9818494 updating hbase:meta row=d743ace9f70d55f55ba1ecc6dc49a5cb, 
> regionState=OPEN, repBarrier=12867482, openSeqNum=12867482, 
> regionLocation=server1-65.xyz,61020,1706165574050
>  {code}
>  
> rs abort, after ~5 min:
> {code:java}
> 2024-01-29 16:54:27,235 ERROR [regionserver/server1-114:61020] 
> regionserver.HRegionServer - * ABORTING region server 
> server1-114.xyz,61020,1706541866103: Unexpected exception handling getData 
> *
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/master
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>     at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1229)
>     at 
> 

Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2040561463

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 49s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 20s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 53s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 417m  1s |  root in the patch failed.  |
   |  |   | 452m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 27eb633b5de6 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/testReport/
 |
   | Max. process+thread count | 5548 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2040537932

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  HBASE-28463 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 27s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 44s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 292m 22s |  hbase-server in the patch failed.  |
   |  |   | 321m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0331e5823b27 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 
13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/testReport/
 |
   | Max. process+thread count | 4988 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2040481765

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  HBASE-28463 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 11s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 52s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 21s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 251m 27s |  hbase-server in the patch failed.  |
   |  |   | 275m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e0f2e8772541 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 
14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/testReport/
 |
   | Max. process+thread count | 4710 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5798:
URL: https://github.com/apache/hbase/pull/5798#issuecomment-2040481039

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 56s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2.6 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 50s |  branch-2.6 passed  |
   | +1 :green_heart: |  compile  |   6m  7s |  branch-2.6 passed  |
   | -1 :x: |  spotless  |   0m 12s |  branch has 60 errors when running 
spotless:check, run spotless:apply to fix.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 58s |  the patch passed  |
   | -0 :warning: |  javac  |   5m 58s |  root generated 1 new + 1072 unchanged 
- 1 fixed = 1073 total (was 1073)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 42s |  Patch does not cause any 
errors with Hadoop 2.10.2 or 3.3.6.  |
   | -1 :x: |  spotless  |   0m 10s |  patch has 60 errors when running 
spotless:check, run spotless:apply to fix.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  34m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5798 |
   | Optional Tests | dupname asflicense javac hadoopcheck spotless xml compile 
|
   | uname | Linux 9a619c8455c9 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.6 / a56126c276 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-general-check/output/branch-spotless.txt
 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-general-check/output/diff-compile-javac-root.txt
 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 78 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5798/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28366) Mis-order of SCP and regionServerReport results into region inconsistencies

2024-04-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834418#comment-17834418
 ] 

Hudson commented on HBASE-28366:


Results for branch master
[build #1044 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1044/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1044/General_20Nightly_20Build_20Report/]




(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1044/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1044/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mis-order of SCP and regionServerReport results into region inconsistencies
> ---
>
> Key: HBASE-28366
> URL: https://issues.apache.org/jira/browse/HBASE-28366
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.17, 3.0.0-beta-1, 2.5.7
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9
>
>
> If the regionserver is online but its rs ephemeral node gets deleted in
> zookeeper due to a network issue, the active master schedules the SCP. However,
> if the regionserver is alive, it can still send regionServerReport to the
> active master. In the case where the SCP assigns regions previously hosted on
> the old regionserver (which is still alive) to other regionservers, the old rs
> can continue to send regionServerReport to the active master.
> Eventually this results in region inconsistencies because the region is alive
> on two regionservers at the same time (though it's a temporary state, because
> the rs will be aborted soon). While the old regionserver can have zookeeper
> connectivity issues, it can still make rpc calls to the active master.
> Logs:
> SCP:
> {code:java}
> 2024-01-29 16:50:33,956 INFO [RegionServerTracker-0] 
> assignment.AssignmentManager - Scheduled ServerCrashProcedure pid=9812440 for 
> server1-114.xyz,61020,1706541866103 (carryingMeta=false) 
> server1-114.xyz,61020,1706541866103/CRASHED/regionCount=364/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5d5fc31[Write
>  locks = 1, Read locks = 0], oldState=ONLINE.
> 2024-01-29 16:50:33,956 DEBUG [RegionServerTracker-0] 
> procedure2.ProcedureExecutor - Stored pid=9812440, 
> state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure 
> server1-114.xyz,61020,1706541866103, splitWal=true, meta=false
> 2024-01-29 16:50:33,973 INFO [PEWorker-36] procedure.ServerCrashProcedure - 
> Splitting WALs pid=9812440, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, 
> locked=true; ServerCrashProcedure server1-114.xyz,61020,1706541866103, 
> splitWal=true, meta=false, isMeta: false
>  {code}
> As part of SCP, d743ace9f70d55f55ba1ecc6dc49a5cb was assigned to another 
> server:
>  
> {code:java}
> 2024-01-29 16:50:42,656 INFO [PEWorker-24] procedure.MasterProcedureScheduler 
> - Took xlock for pid=9818494, ppid=9812440, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
> TransitRegionStateProcedure 
> table=PLATFORM_ENTITY.PLATFORM_IMMUTABLE_ENTITY_DATA, 
> region=d743ace9f70d55f55ba1ecc6dc49a5cb, ASSIGN
> 2024-01-29 16:50:43,106 INFO [PEWorker-23] assignment.RegionStateStore - 
> pid=9818494 updating hbase:meta row=d743ace9f70d55f55ba1ecc6dc49a5cb, 
> regionState=OPEN, repBarrier=12867482, openSeqNum=12867482, 
> regionLocation=server1-65.xyz,61020,1706165574050
>  {code}
>  
> rs abort, after ~5 min:
> {code:java}
> 2024-01-29 16:54:27,235 ERROR [regionserver/server1-114:61020] 
> regionserver.HRegionServer - * ABORTING region server 
> server1-114.xyz,61020,1706541866103: Unexpected exception handling getData 
> *
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /hbase/master
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>     at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1229)
>     at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:414)
>     at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:403)
>     at 
> 

[jira] [Created] (HBASE-28494) "WAL system stuck?" due to deadlock at org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete

2024-04-05 Thread Athish Babu (Jira)
Athish Babu created HBASE-28494:
---

 Summary: "WAL system stuck?" due to deadlock at 
org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete
 Key: HBASE-28494
 URL: https://issues.apache.org/jira/browse/HBASE-28494
 Project: HBase
  Issue Type: Bug
  Components: regionserver, wal
Affects Versions: 2.5.5
 Environment: hbase-2.5.5

hadoop-3.3.6

kerberos authentication enabled.

OS: debian 11
Reporter: Athish Babu
 Attachments: RS_thread_dump.txt

We have come across an issue in the write handler threads of a regionserver
during the AsyncFSWAL append operation. We could see the regionserver's write
handler threads going into WAITING state while acquiring the lock for the WAL
append operation at MultiVersionConcurrencyControl.begin:

 
{code:java}
"RpcServer.default.FPRWQ.Fifo.write.handler=7,queue=3,port=16020" #133 daemon 
prio=5 os_prio=0 tid=0x7f9301fe7800 nid=0x329a02 runnable 
[0x7f8a6489a000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    at 
com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    at 
com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    at com.lmax.disruptor.RingBuffer.next(RingBuffer.java:263)
    at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$10(AbstractFSWAL.java:1202)
    at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$631/875615795.run(Unknown
 Source)
    at 
org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:144)
    - locked <0x7f8afa4d1a80> (a java.util.LinkedList)
    at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1201)
    at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:647)
    at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$appendData$14(AbstractFSWAL.java:1255)
    at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$699/1762709833.call(Unknown
 Source)
    at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:216)
    at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1255)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:7800)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4522)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4446)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4368)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
    at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45008)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
    at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) {code}
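
For illustration, here is a minimal, self-contained sketch (not HBase code; class and thread names are made up) of the contention pattern in the dump above: a thread that parks while still holding a shared monitor keeps every other thread that needs the same monitor in BLOCKED state.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

// Illustration only: the "holder" parks (like RingBuffer.next() inside mvcc.begin())
// while owning the monitor, so the second thread stays BLOCKED on the same monitor
// (like MultiVersionConcurrencyControl.complete() waiting on the writeQueue lock).
public class MonitorHeldWhileParked {
  private static final Object SHARED_LOCK = new Object();

  public static void main(String[] args) throws InterruptedException {
    Thread holder = new Thread(() -> {
      synchronized (SHARED_LOCK) {
        LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(5)); // parked, monitor still held
      }
    }, "write.handler=7");

    Thread waiter = new Thread(() -> {
      synchronized (SHARED_LOCK) { // stays BLOCKED until the holder unparks and exits
        System.out.println("monitor acquired");
      }
    }, "write.handler=38");

    holder.start();
    Thread.sleep(100); // let the holder take the monitor first
    waiter.start();
    Thread.sleep(500);
    System.out.println(waiter.getName() + " state = " + waiter.getState()); // BLOCKED
    holder.join();
    waiter.join();
  }
}
{code}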
 

Other write handler threads are in BLOCKED state while waiting for the above
lock to be released.

 
{code:java}
"RpcServer.default.FPRWQ.Fifo.write.handler=38,queue=2,port=16020" #164 daemon 
prio=5 os_prio=0 tid=0x7f9303147800 nid=0x329a21 waiting for monitor entry 
[0x7f8a61586000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at 
org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:179)
    - waiting to lock <0x7f8afa4d1a80> (a java.util.LinkedList)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:7808)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4522)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4446)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4368)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
    at 

Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040435656

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m  9s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 46s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 26s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 56s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 50s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  25m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 19cd7173c7c1 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 77 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040434584

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 57s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 54s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   4m 20s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  24m 55s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 96ef34edf858 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/testReport/
 |
   | Max. process+thread count | 477 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040433873

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  8s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 29s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  24m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 69a320996f50 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/testReport/
 |
   | Max. process+thread count | 464 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040427103

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  5s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 31s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 28s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 52s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  22m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 81cbf5ce3f5f 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/testReport/
 |
   | Max. process+thread count | 594 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/3/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2040357049

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  5s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 290m  1s |  root in the patch passed.  |
   |  |   | 322m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 21c4bab1a74f 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/testReport/
 |
   | Max. process+thread count | 8857 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-05 Thread Charles Connell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834406#comment-17834406
 ] 

Charles Connell commented on HBASE-28485:
-

From the [zstd manual|https://facebook.github.io/zstd/zstd_manual.html#Chapter4]:

> When compressing many times, it is recommended to allocate a context just 
> once, and re-use it for each successive compression operation. This will make 
> workload friendlier for system's memory. Note : re-using context is just a 
> speed / resource optimization. It doesn't change the compression ratio, which 
> remains identical. Note 2 : In multi-threaded environments, use one different 
> context per thread for parallel execution.

So this should be a safe change, and not change semantics.
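
As a small illustration of the pattern, a sketch of per-thread context reuse with zstd-jni follows (not the actual HBASE-28485 patch; the ZstdCompressCtx/ZstdDecompressCtx method names are taken from the zstd-jni API as I understand it and should be treated as illustrative):

{code:java}
import com.github.luben.zstd.ZstdCompressCtx;
import com.github.luben.zstd.ZstdDecompressCtx;

// Sketch only: one context per thread, re-used for every call, instead of
// constructing a fresh (and relatively expensive) context on each operation.
public class ZstdContextReuseSketch {
  private static final ThreadLocal<ZstdCompressCtx> COMPRESS_CTX =
    ThreadLocal.withInitial(() -> {
      ZstdCompressCtx ctx = new ZstdCompressCtx();
      ctx.setLevel(3); // compression level chosen arbitrarily for the sketch
      return ctx;
    });

  private static final ThreadLocal<ZstdDecompressCtx> DECOMPRESS_CTX =
    ThreadLocal.withInitial(ZstdDecompressCtx::new);

  static byte[] compress(byte[] src) {
    return COMPRESS_CTX.get().compress(src);
  }

  static byte[] decompress(byte[] src, int originalSize) {
    return DECOMPRESS_CTX.get().decompress(src, originalSize);
  }
}
{code}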

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation recommends re-using context objects when possible, 
> because their creation has some expense. They can be more cheaply reset than 
> re-created. In {{ZstdDecompressor}} and {{ZstdCompressor}}, we create a new 
> context object for every call to {{decompress()}} and {{compress()}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-05 Thread Charles Connell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834406#comment-17834406
 ] 

Charles Connell edited comment on HBASE-28485 at 4/5/24 5:59 PM:
-

From the [zstd manual|https://facebook.github.io/zstd/zstd_manual.html#Chapter4]:
{quote}When compressing many times, it is recommended to allocate a context 
just once, and re-use it for each successive compression operation. This will 
make workload friendlier for system's memory. Note : re-using context is just a 
speed / resource optimization. It doesn't change the compression ratio, which 
remains identical. Note 2 : In multi-threaded environments, use one different 
context per thread for parallel execution.
{quote}
So this should be a safe change, and not change semantics.


was (Author: charlesconnell):
From the [zstd manual|https://facebook.github.io/zstd/zstd_manual.html#Chapter4]:

> When compressing many times, it is recommended to allocate a context just 
> once, and re-use it for each successive compression operation. This will make 
> workload friendlier for system's memory. Note : re-using context is just a 
> speed / resource optimization. It doesn't change the compression ratio, which 
> remains identical. Note 2 : In multi-threaded environments, use one different 
> context per thread for parallel execution.

So this should be a safe change, and not change semantics.

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation recommends re-using context objects when possible, 
> because their creation has some expense. They can be more cheaply reset than 
> re-created. In {{ZstdDecompressor}} and {{ZstdCompressor}}, we create a new 
> context object for every call to {{decompress()}} and {{compress()}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2040317353

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  8s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  3s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 32s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 29s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 264m  0s |  root in the patch passed.  |
   |  |   | 295m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2f4fead1a528 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/testReport/
 |
   | Max. process+thread count | 8436 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


wchevreuil commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1553980810


##
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDataTieringManager.java:
##
@@ -0,0 +1,388 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.fs.HFileSystem;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheFactory;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.io.hfile.BlockType;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
+import org.apache.hadoop.hbase.testclassification.RegionServerTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+/**
+ * This class is used to test the functionality of the DataTieringManager.
+ *
+ * The mock online regions are stored in {@link TestDataTieringManager#testOnlineRegions}.
+ * For all tests, the setup of {@link TestDataTieringManager#testOnlineRegions} occurs only once.
+ * Please refer to {@link TestDataTieringManager#setupOnlineRegions()} for the structure.
+ * Additionally, a list of all store files is maintained in {@link TestDataTieringManager#hStoreFiles}.
+ * The characteristics of these store files are listed below:
+ * ## HStoreFile Information
+ *
+ * | HStoreFile  | Region  | Store    | DataTiering | isHot |
+ * |-------------|---------|----------|-------------|-------|
+ * | hStoreFile0 | region1 | hStore11 | TIME_RANGE  | true  |
+ * | hStoreFile1 | region1 | hStore12 | NONE        | true  |
+ * | hStoreFile2 | region2 | hStore21 | TIME_RANGE  | true  |
+ * | hStoreFile3 | region2 | hStore22 | TIME_RANGE  | false |
+ */
+
+@Category({ RegionServerTests.class, SmallTests.class })
+public class TestDataTieringManager {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+HBaseClassTestRule.forClass(TestDataTieringManager.class);
+
+  private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+  private static Configuration defaultConf;
+  private static FileSystem fs;
+  private static CacheConfig cacheConf;
+  private static Path testDir;
+  private static Map testOnlineRegions;
+
+  private static DataTieringManager dataTieringManager;
+  private static List hStoreFiles;
+
+  @BeforeClass
+  public static void setupBeforeClass() throws Exception {
+testDir = 
TEST_UTIL.getDataTestDir(TestDataTieringManager.class.getSimpleName());
+defaultConf = TEST_UTIL.getConfiguration();
+fs = HFileSystem.get(defaultConf);
+BlockCache blockCache = BlockCacheFactory.createBlockCache(defaultConf);
+cacheConf = new CacheConfig(defaultConf, blockCache);
+

Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


wchevreuil commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1553979182


##
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDataTieringManager.java:
##
@@ -0,0 +1,388 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.fs.HFileSystem;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheFactory;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.io.hfile.BlockType;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
+import org.apache.hadoop.hbase.testclassification.RegionServerTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+/**
+ * This class is used to test the functionality of the DataTieringManager.
+ *
+ * The mock online regions are stored in {@link TestDataTieringManager#testOnlineRegions}.
+ * For all tests, the setup of {@link TestDataTieringManager#testOnlineRegions} occurs only once.
+ * Please refer to {@link TestDataTieringManager#setupOnlineRegions()} for the structure.
+ * Additionally, a list of all store files is maintained in {@link TestDataTieringManager#hStoreFiles}.
+ * The characteristics of these store files are listed below:
+ * ## HStoreFile Information
+ *
+ * | HStoreFile  | Region  | Store    | DataTiering | isHot |
+ * |-------------|---------|----------|-------------|-------|
+ * | hStoreFile0 | region1 | hStore11 | TIME_RANGE  | true  |
+ * | hStoreFile1 | region1 | hStore12 | NONE        | true  |
+ * | hStoreFile2 | region2 | hStore21 | TIME_RANGE  | true  |
+ * | hStoreFile3 | region2 | hStore22 | TIME_RANGE  | false |
+ */
+
+@Category({ RegionServerTests.class, SmallTests.class })
+public class TestDataTieringManager {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+HBaseClassTestRule.forClass(TestDataTieringManager.class);
+
+  private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+  private static Configuration defaultConf;
+  private static FileSystem fs;
+  private static CacheConfig cacheConf;
+  private static Path testDir;
+  private static Map testOnlineRegions;
+
+  private static DataTieringManager dataTieringManager;
+  private static List hStoreFiles;
+
+  @BeforeClass
+  public static void setupBeforeClass() throws Exception {
+testDir = 
TEST_UTIL.getDataTestDir(TestDataTieringManager.class.getSimpleName());
+defaultConf = TEST_UTIL.getConfiguration();
+fs = HFileSystem.get(defaultConf);
+BlockCache blockCache = BlockCacheFactory.createBlockCache(defaultConf);
+cacheConf = new CacheConfig(defaultConf, blockCache);
+

Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-05 Thread via GitHub


kabhishek4 commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1553937551


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+PrefetchExecutorNotifier prefetchExecutorNotifier = new 
PrefetchExecutorNotifier(conf);
+conf.setInt(PREFETCH_DELAY, 4);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 4);
+
+// restore
+conf.setInt(PREFETCH_DELAY, 3);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 3);
+
+conf.setInt(PREFETCH_DELAY, 1000);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+// Configure custom delay
+PrefetchExecutorNotifier prefetchExecutorNotifier = new 
PrefetchExecutorNotifier(conf);
+conf.setInt(PREFETCH_DELAY, 25000);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+
+HFileContext context = new 
HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+  .withBlockSize(DATA_BLOCK_SIZE).build();
+Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, 
conf);
+long startTime = System.currentTimeMillis();
+
+// Wait for 20 seconds, no thread should start prefetch
+Thread.sleep(2);
+assertFalse("Prefetch threads should not be running at this point", 
reader.prefetchStarted());
+while (!reader.prefetchStarted()) {
+  assertTrue("Prefetch delay has not been expired yet",
+getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+}
+
+// Prefech threads started working but not completed yet
+assertFalse(reader.prefetchComplete());
+
+// In prefetch executor, we further compute passed in delay using 
variation and a random
+// multiplier to get 'effective delay'. Hence, in the test, for delay of 
25000 milli-secs
+// check that prefetch is started after 2 milli-sec and prefetch 
started after that.
+// However, prefetch should not start after configured delay.
+if (reader.prefetchStarted()) {
+  LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+PrefetchExecutor.getPrefetchDelay());
+  assertTrue("Prefetch should start post configured delay",
+getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+}

Review Comment:
   I agree that setting the variation to 0 would remove the random component and make
the delay deterministic. However, the reason the variation value does not take effect
seems to be that the test and the static initializer block of the prefetch executor
use different instances of conf:
   
   static {
     // Consider doing this on demand with a configuration passed in rather
     // than in a static initializer.
     Configuration conf = HBaseConfiguration.create();
   
   With the new changes, a prefetch delay set by the test does get reflected, because
the conf is reloaded on the configuration change; that is not the case for the
variation.
   
   I am considering the changes below to address this (a rough sketch follows the
list). Please let me know if you agree with the approach.
   
   1. Add a new test-only method in the PrefetchExecutor class to set the
prefetchDelayVariation.
   2. Call it from the test to set the variation to 0 and to restore it afterwards.
   3. As a consequence, prefetchDelayVariation can no longer be declared final in the
prefetch executor class.
   4. Also, since I am starting the timer a bit late, i.e. after
      HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, conf);
      long startTime = System.currentTimeMillis();
      I will wait a small amount of time, say 500 milliseconds, before testing
the assertions.
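
   A minimal, self-contained sketch of points 1-3 (class, field, and method names here
are illustrative assumptions, not the actual patch):

```java
// Standalone sketch of a test-only override for the delay variation; the names and
// default values are assumptions, not the real PrefetchExecutor code.
import java.util.concurrent.ThreadLocalRandom;

public class PrefetchDelaySketch {
  private static long prefetchDelayMillis = 25000L;    // normally read from Configuration
  private static float prefetchDelayVariation = 0.2f;  // can no longer be 'final'

  /** Test-only hook: force a deterministic effective delay. */
  static void setPrefetchDelayVariationForTest(float variation) {
    prefetchDelayVariation = variation;
  }

  /** Same shape as the effective-delay computation discussed in this thread. */
  static long effectiveDelay() {
    return (long) ((prefetchDelayMillis * (1.0f - (prefetchDelayVariation / 2)))
      + (prefetchDelayMillis * (prefetchDelayVariation / 2)
        * ThreadLocalRandom.current().nextFloat()));
  }

  public static void main(String[] args) {
    float original = prefetchDelayVariation;
    setPrefetchDelayVariationForTest(0.0f);
    // With variation 0 the effective delay equals the configured delay exactly.
    System.out.println(effectiveDelay() == prefetchDelayMillis);  // prints true
    setPrefetchDelayVariationForTest(original);                   // restore for other tests
  }
}
```

   With the variation forced to 0, the effective delay equals the configured
hbase.hfile.prefetch.delay exactly, so assertions on elapsed time become deterministic.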
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040160407

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 24s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m  8s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 54s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   5m 59s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 51s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  25m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux c60b2519d3b1 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040158699

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 24s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 41s |  hbase-compression-zstd in the patch failed.  
|
   |  |   |  23m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 01a20ca86362 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-jdk17-hadoop3-check/output/patch-unit-hbase-compression_hbase-compression-zstd.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/testReport/
 |
   | Max. process+thread count | 161 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040148858

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 57s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 53s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 52s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 34s |  hbase-compression-zstd in the patch failed.  
|
   |  |   |  20m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5035c679db10 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-compression_hbase-compression-zstd.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/testReport/
 |
   | Max. process+thread count | 152 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28292 Make Delay prefetch property to be dynamically configured [hbase]

2024-04-05 Thread via GitHub


wchevreuil commented on code in PR #5605:
URL: https://github.com/apache/hbase/pull/5605#discussion_r1553921169


##
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java:
##
@@ -336,6 +337,62 @@ public void testPrefetchDoesntSkipRefs() throws Exception {
 });
   }
 
+  @Test
+  public void testOnConfigurationChange() {
+PrefetchExecutorNotifier prefetchExecutorNotifier = new 
PrefetchExecutorNotifier(conf);
+conf.setInt(PREFETCH_DELAY, 4);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 4);
+
+// restore
+conf.setInt(PREFETCH_DELAY, 3);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+assertEquals(prefetchExecutorNotifier.getPrefetchDelay(), 3);
+
+conf.setInt(PREFETCH_DELAY, 1000);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+  }
+
+  @Test
+  public void testPrefetchWithDelay() throws Exception {
+// Configure custom delay
+PrefetchExecutorNotifier prefetchExecutorNotifier = new 
PrefetchExecutorNotifier(conf);
+conf.setInt(PREFETCH_DELAY, 25000);
+prefetchExecutorNotifier.onConfigurationChange(conf);
+
+HFileContext context = new 
HFileContextBuilder().withCompression(Compression.Algorithm.GZ)
+  .withBlockSize(DATA_BLOCK_SIZE).build();
+Path storeFile = writeStoreFile("TestPrefetchWithDelay", context);
+
+HFile.Reader reader = HFile.createReader(fs, storeFile, cacheConf, true, 
conf);
+long startTime = System.currentTimeMillis();
+
+// Wait for 20 seconds, no thread should start prefetch
+Thread.sleep(2);
+assertFalse("Prefetch threads should not be running at this point", 
reader.prefetchStarted());
+while (!reader.prefetchStarted()) {
+  assertTrue("Prefetch delay has not been expired yet",
+getElapsedTime(startTime) < PrefetchExecutor.getPrefetchDelay());
+}
+
+// Prefech threads started working but not completed yet
+assertFalse(reader.prefetchComplete());
+
+// In prefetch executor, we further compute passed in delay using 
variation and a random
+// multiplier to get 'effective delay'. Hence, in the test, for delay of 
25000 milli-secs
+// check that prefetch is started after 2 milli-sec and prefetch 
started after that.
+// However, prefetch should not start after configured delay.
+if (reader.prefetchStarted()) {
+  LOG.info("elapsed time {}, Delay {}", getElapsedTime(startTime),
+PrefetchExecutor.getPrefetchDelay());
+  assertTrue("Prefetch should start post configured delay",
+getElapsedTime(startTime) <= PrefetchExecutor.getPrefetchDelay());
+}

Review Comment:
   Per the formula below, if you set it to 0, it should pick exactly the value 
of `hbase.hfile.prefetch.delay`. So either your test is not setting the config 
properly, or the dynamic config logic isn't working properly.
   
   `delay = (long) ((prefetchDelayMillis * (1.0f - (prefetchDelayVariation / 2)))
     + (prefetchDelayMillis * (prefetchDelayVariation / 2)
       * ThreadLocalRandom.current().nextFloat()));`
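
   To make the bounds concrete (a rough worked example with illustrative values): with
`prefetchDelayMillis = 25000` and `prefetchDelayVariation = 0.2`, the formula yields
`22500 + 2500 * r` for a random `r` in `[0, 1)`, i.e. an effective delay somewhere in
`[22500, 25000)`; with the variation set to 0 it collapses to exactly the configured
25000.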



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040142461

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 25s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  5s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 29s |  hbase-compression-zstd in the patch failed.  
|
   |  |   |  17m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 993966d02f74 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-compression_hbase-compression-zstd.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/testReport/
 |
   | Max. process+thread count | 168 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040097536

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 53s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 20s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 51s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  25m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux b3c84d656c69 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040092852

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 22s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 28s |  hbase-compression-zstd in the patch failed.  
|
   |  |   |  23m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 12ad0f696985 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-jdk17-hadoop3-check/output/patch-unit-hbase-compression_hbase-compression-zstd.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/testReport/
 |
   | Max. process+thread count | 158 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040087900

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 53s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 55s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 26s |  hbase-compression-zstd in the patch failed.  
|
   |  |   |  20m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 61ef5ab3d97a 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-compression_hbase-compression-zstd.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/testReport/
 |
   | Max. process+thread count | 162 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5797:
URL: https://github.com/apache/hbase/pull/5797#issuecomment-2040080835

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 15s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 24s |  hbase-compression-zstd in the patch failed.  
|
   |  |   |  17m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5797 |
   | JIRA Issue | HBASE-28485 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c1e1abddcf14 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-compression_hbase-compression-zstd.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/testReport/
 |
   | Max. process+thread count | 168 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5797/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5793:
URL: https://github.com/apache/hbase/pull/5793#issuecomment-2040058881

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-28463 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  HBASE-28463 passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  HBASE-28463 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  HBASE-28463 passed  |
   | +1 :green_heart: |  spotless  |   0m 58s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m  5s |  HBASE-28463 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 58s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 45s |  hbase-server: The patch 
generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4)  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 32s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | -1 :x: |  spotless  |   0m 46s |  patch has 46 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   2m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  37m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5793 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 7c12c5b4ff03 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-28463 / 28c1e3b2a6 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5793/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28485) Re-use ZstdDecompressCtx/ZstdCompressCtx for performance

2024-04-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HBASE-28485:
---
Labels: pull-request-available  (was: )

> Re-use ZstdDecompressCtx/ZstdCompressCtx for performance
> 
>
> Key: HBASE-28485
> URL: https://issues.apache.org/jira/browse/HBASE-28485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Charles Connell
>Assignee: Charles Connell
>Priority: Major
>  Labels: pull-request-available
> Attachments: async-prof-flamegraph-cpu_event-1712150670836-cpu.html, 
> async-prof-pid-1324144-cpu-1.html
>
>
> The zstd documentation recommends re-using context objects when possible, 
> because their creation has some expense. They can be more cheaply reset than 
> re-created. In {{ZstdDecompressor}} and {{ZstdCompressor}}, we create a new 
> context object for every call to {{decompress()}} and {{compress()}}. In CPU 
> profiles I've taken at my company, the constructor of {{ZstdDecompressCtx}} 
> can sometimes represent 10-25% of the time spent in zstd decompression, which 
> itself is 5-10% of a RegionServer's total CPU time. Avoiding this performance 
> penalty won't lead to any massive performance boost, but is a nice little win.
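
A rough, standalone way to see the constructor cost described above, assuming the
zstd-jni API (com.github.luben.zstd); the class name and numbers are illustrative only,
and a real measurement would use a proper harness such as JMH:

```java
// Compares creating a ZstdCompressCtx per call vs. reusing one context.
// Assumes the zstd-jni ZstdCompressCtx API; illustrative sketch, not HBase code.
import com.github.luben.zstd.ZstdCompressCtx;
import java.util.Arrays;

public class ZstdCtxCostSketch {
  public static void main(String[] args) {
    byte[] input = new byte[64 * 1024];
    Arrays.fill(input, (byte) 'a');   // trivially compressible payload
    final int iterations = 10_000;

    long t0 = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
      ZstdCompressCtx ctx = new ZstdCompressCtx();  // new context on every call
      ctx.setLevel(3);
      ctx.compress(input);
      ctx.close();
    }
    long perCallNanos = System.nanoTime() - t0;

    long t1 = System.nanoTime();
    ZstdCompressCtx ctx = new ZstdCompressCtx();    // one context, reused
    ctx.setLevel(3);
    for (int i = 0; i < iterations; i++) {
      ctx.compress(input);
    }
    ctx.close();
    long reusedNanos = System.nanoTime() - t1;

    System.out.printf("new-per-call: %d ms, reused: %d ms%n",
      perCallNanos / 1_000_000, reusedNanos / 1_000_000);
  }
}
```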



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] HBASE-28485: Re-use ZstdDecompressCtx/ZstdCompressCtx for performance [hbase]

2024-04-05 Thread via GitHub


charlesconnell opened a new pull request, #5797:
URL: https://github.com/apache/hbase/pull/5797

   The zstd documentation recommends re-using context objects when possible, 
because their creation has some expense. They can be more cheaply reset than 
re-created. In this PR, create one `Zstd(De)compressCtx` for the lifetime of a 
`Compressor` or `Decompressor` object.
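
   A minimal sketch of that reuse pattern, assuming the zstd-jni API
(`com.github.luben.zstd`); this is an illustrative shape only, not the actual HBase
`ZstdCompressor`/`ZstdDecompressor` implementation:

```java
// Illustrative sketch: hold one context per (de)compressor instance instead of
// constructing a new one on every call. Assumes zstd-jni's ZstdCompressCtx and
// ZstdDecompressCtx; not the actual HBase code.
import com.github.luben.zstd.ZstdCompressCtx;
import com.github.luben.zstd.ZstdDecompressCtx;

public class ZstdCtxReuseSketch implements AutoCloseable {
  // Created once for the lifetime of this object and reused for every call.
  private final ZstdCompressCtx cctx = new ZstdCompressCtx();
  private final ZstdDecompressCtx dctx = new ZstdDecompressCtx();

  public ZstdCtxReuseSketch(int level) {
    cctx.setLevel(level);
  }

  public byte[] compress(byte[] data) {
    return cctx.compress(data);                  // reuses the long-lived compression context
  }

  public byte[] decompress(byte[] data, int originalSize) {
    return dctx.decompress(data, originalSize);  // reuses the long-lived decompression context
  }

  @Override
  public void close() {
    cctx.close();
    dctx.close();
  }

  public static void main(String[] args) {
    try (ZstdCtxReuseSketch codec = new ZstdCtxReuseSketch(3)) {
      byte[] original = "hello hello hello hello".getBytes();
      byte[] compressed = codec.compress(original);
      byte[] restored = codec.decompress(compressed, original.length);
      System.out.println(new String(restored));
    }
  }
}
```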


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


gvprathyusha6 commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039800055

   > Could you please provide a UT or add an example in hbase-examples to show 
how to make use of this feature?
   > 
   > Thanks.
   
   @Apache9 I have added a UT to parse and validate the new options for the custom
test class. Could you please review it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28491 Bump netty to 4.1.108.Final for addressing CVE-2024-29025 [hbase-thirdparty]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #114:
URL: https://github.com/apache/hbase-thirdparty/pull/114#issuecomment-2039786578

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 17s |  root in master failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in master failed.  |
   | -1 :x: |  javadoc  |   0m 17s |  root in master failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 18s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 17s |  root in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 17s |  root in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 18s |  root in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 17s |  ASF License check generated no 
output?  |
   |  |   |   4m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/114 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile |
   | uname | Linux 6f3a22dcb618 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 GNU/Linux |
   | Build tool | maven |
   | git revision | master / 726f60d |
   | Default Java | Oracle Corporation-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/branch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/branch-javadoc-root.txt
 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/patch-compile-root.txt
 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/patch-javadoc-root.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/artifact/yetus-precommit-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/testReport/
 |
   | Max. process+thread count | 9 (vs. ulimit of 1000) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-Thirdparty-PreCommit/job/PR-114/1/console 
|
   | versions | git=2.20.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2039775766

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  master passed  |
   | +1 :green_heart: |  compile  |   5m 10s |  master passed  |
   | +1 :green_heart: |  spotless  |   1m 12s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m 45s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 52s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 42s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  31m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | dupname asflicense javac hadoopcheck spotless xml compile 
|
   | uname | Linux 427025b817ae 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/2/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] HBASE-28491 Bump netty to 4.1.108.Final for addressing CVE-2024-29025 [hbase-thirdparty]

2024-04-05 Thread via GitHub


nikita15p opened a new pull request, #114:
URL: https://github.com/apache/hbase-thirdparty/pull/114

Bump netty to 4.1.108.Final for addressing CVE-2024-29025


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (HBASE-28491) [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Nikita Pande (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834292#comment-17834292
 ] 

Nikita Pande edited comment on HBASE-28491 at 4/5/24 1:13 PM:
--

Thanks [~zhangduo] . I just assigned to me and raised a PR


was (Author: JIRAUSER298527):
Thanks [~zhangduo] . I just assigned to me.

> [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025
> 
>
> Key: HBASE-28491
> URL: https://issues.apache.org/jira/browse/HBASE-28491
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Assignee: Nikita Pande
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039760269

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 24s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 58s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 14s |  hbase-mapreduce in the patch 
passed.  |
   |  |   |  44m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ed4204b74e7c 5.4.0-163-generic #180-Ubuntu SMP Tue Sep 5 
13:21:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/testReport/
 |
   | Max. process+thread count | 2807 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039747415

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  2s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 53s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 32s |  hbase-mapreduce in the patch 
passed.  |
   |  |   |  37m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a276dffe20cf 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/testReport/
 |
   | Max. process+thread count | 2905 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28493) [hbase-thirdparty] Bump protobuf version

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28493:
--
Component/s: dependencies
 thirdparty

> [hbase-thirdparty] Bump protobuf version
> 
>
> Key: HBASE-28493
> URL: https://issues.apache.org/jira/browse/HBASE-28493
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, Protobufs, thirdparty
>Reporter: Duo Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28493) [hbase-thirdparty] Bump protobuf version

2024-04-05 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-28493:
-

 Summary: [hbase-thirdparty] Bump protobuf version
 Key: HBASE-28493
 URL: https://issues.apache.org/jira/browse/HBASE-28493
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28492) [hbase-thirdparty] Bump dependency versions before releasing

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28492:
--
Description: Like jetty, guava, etc.  (was: Like protobuf, jetty, guava, 
etc.)

> [hbase-thirdparty] Bump dependency versions before releasing
> 
>
> Key: HBASE-28492
> URL: https://issues.apache.org/jira/browse/HBASE-28492
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Like jetty, guava, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28493) [hbase-thirdparty] Bump protobuf version

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28493:
--
Component/s: Protobufs

> [hbase-thirdparty] Bump protobuf version
> 
>
> Key: HBASE-28493
> URL: https://issues.apache.org/jira/browse/HBASE-28493
> Project: HBase
>  Issue Type: Sub-task
>  Components: Protobufs
>Reporter: Duo Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039731786

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  13m 50s |  hbase-mapreduce in the patch 
passed.  |
   |  |   |  31m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 90606a87ab78 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/testReport/
 |
   | Max. process+thread count | 3213 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28488) Avoid expensive allocation in createRegionSpan

2024-04-05 Thread Thibault Deutsch (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibault Deutsch updated HBASE-28488:
-
Environment: 
Multiple clusters with:
 * OpenJDK 11.0.22+7 
 * HBase 2.5.7
 * 90-95% write requests

  was:
Multiple clusters with:
 * OpenJDK 11.0.22+7 
 * HBase 2.5.7
 * 10-150 RegionServers
 * 90-95% writes requests


> Avoid expensive allocation in createRegionSpan
> --
>
> Key: HBASE-28488
> URL: https://issues.apache.org/jira/browse/HBASE-28488
> Project: HBase
>  Issue Type: Improvement
>  Components: tracing
>Affects Versions: 2.5.0
> Environment: Multiple clusters with:
>  * OpenJDK 11.0.22+7 
>  * HBase 2.5.7
>  * 90-95% write requests
>Reporter: Thibault Deutsch
>Priority: Minor
> Attachments: 
> 0001-HBASE-28488-Use-encoded-name-in-region-span-attribut.patch, Screenshot 
> 2024-04-05 at 00.27.11.png
>
>
> On our busy clusters, the alloc profile shows that createRegionSpan() is 
> responsible for 15-20% of all the allocations. These allocations come from 
> getRegionNameAsString().
> getRegionNameAsString() takes the region name and encodes invisible characters 
> in their hex representation. This requires the use of a StringBuilder and 
> thus generates a new string every time.
> This becomes really expensive on a cluster with a high number of requests. We 
> have a patch that replaces the call with getEncodedName() instead. It seems 
> better to just take the encoded region name (the md5 part) and use that in 
> trace attributes, because:
> - it's fixed in size (the full region name can be much longer depending on 
> the rowkey size),
> - it's enough information to link a trace to a region,
> - it doesn't require any new allocation.
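
A minimal, illustrative sketch of the shape of such a change (assuming an
OpenTelemetry-style span builder; the helper and the attribute key below are
made up for this example and are not the actual patch):

{noformat}
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import org.apache.hadoop.hbase.client.RegionInfo;

final class RegionSpanSketch {

  private RegionSpanSketch() {
  }

  // Hypothetical helper: attach a region identifier to a span without
  // re-encoding the full region name on every call.
  static Span createRegionSpan(Tracer tracer, String spanName, RegionInfo region) {
    return tracer.spanBuilder(spanName)
      // Per the description above, getEncodedName() avoids the per-call
      // StringBuilder work that getRegionNameAsString() performs.
      .setAttribute("db.hbase.region.encoded_name", region.getEncodedName())
      .startSpan();
  }
}
{noformat}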



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039723307

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 40s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 50s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 36s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 36s |  hbase-mapreduce generated 3 new + 194 
unchanged - 3 fixed = 197 total (was 197)  |
   | -0 :warning: |  checkstyle  |   0m 14s |  hbase-mapreduce: The patch 
generated 2 new + 27 unchanged - 0 fixed = 29 total (was 27)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 18s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 55s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  27m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux b1ff0b506d6d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/artifact/yetus-general-check/output/diff-compile-javac-hbase-mapreduce.txt
 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/artifact/yetus-general-check/output/diff-checkstyle-hbase-mapreduce.txt
 |
   | Max. process+thread count | 79 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/8/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28491) [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Nikita Pande (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834292#comment-17834292
 ] 

Nikita Pande commented on HBASE-28491:
--

Thanks [~zhangduo]. I just assigned it to myself.

> [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025
> 
>
> Key: HBASE-28491
> URL: https://issues.apache.org/jira/browse/HBASE-28491
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Assignee: Nikita Pande
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28491) [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Nikita Pande (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Pande reassigned HBASE-28491:


Assignee: Nikita Pande

> [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025
> 
>
> Key: HBASE-28491
> URL: https://issues.apache.org/jira/browse/HBASE-28491
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Assignee: Nikita Pande
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28483 backup merge fails on bulkloaded hfiles [hbase]

2024-04-05 Thread via GitHub


bbeaudreault commented on code in PR #5795:
URL: https://github.com/apache/hbase/pull/5795#discussion_r1553565029


##
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileInputFormat.java:
##
@@ -155,16 +154,23 @@ protected List listStatus(JobContext job) 
throws IOException {
 // since HFiles are written to directories where the
 // directory name is the column name

Review Comment:
   The comment here is out of date relative to the implementation now. It would 
be good to update it to cover the case where it sometimes needs to recurse 
further, such as when pointed at a table dir.
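
   For context, a rough sketch (not the actual implementation) of what recursing
further can look like with the Hadoop FileSystem API, for example when the input
path is a table dir rather than a single column-family dir:

   ```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class RecursiveHFileListing {

  private RecursiveHFileListing() {
  }

  // Collect all regular files under root, descending into region and
  // column-family directories as needed. Filtering out non-HFiles
  // (e.g. reference files) is left out of this sketch.
  static List<FileStatus> listFilesRecursively(FileSystem fs, Path root) throws IOException {
    List<FileStatus> result = new ArrayList<>();
    for (FileStatus status : fs.listStatus(root)) {
      if (status.isDirectory()) {
        result.addAll(listFilesRecursively(fs, status.getPath()));
      } else {
        result.add(status);
      }
    }
    return result;
  }
}
   ```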



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28483 backup merge fails on bulkloaded hfiles [hbase]

2024-04-05 Thread via GitHub


bbeaudreault commented on PR #5795:
URL: https://github.com/apache/hbase/pull/5795#issuecomment-2039713845

   Also please run `mvn spotless:apply -pl hbase-backups`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2039707363

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 47s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 40s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 47s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 385m 10s |  root in the patch failed.  |
   |  |   | 425m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux d026ea7e2b0b 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 6101bad5a3 |
   | Default Java | Temurin-1.8.0_352-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/testReport/
 |
   | Max. process+thread count | 5340 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-28492) [hbase-thirdparty] Bump dependency versions before releasing

2024-04-05 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834288#comment-17834288
 ] 

Duo Zhang commented on HBASE-28492:
---

Ah, protobuf has changed its versioning scheme...

https://protobuf.dev/support/version-support/

> [hbase-thirdparty] Bump dependency versions before releasing
> 
>
> Key: HBASE-28492
> URL: https://issues.apache.org/jira/browse/HBASE-28492
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Like protobuf, jetty, guava, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work started] (HBASE-28492) [hbase-thirdparty] Bump dependency versions before releasing

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-28492 started by Duo Zhang.
-
> [hbase-thirdparty] Bump dependency versions before releasing
> 
>
> Key: HBASE-28492
> URL: https://issues.apache.org/jira/browse/HBASE-28492
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Like protobuf, jetty, guava, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28491) [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834283#comment-17834283
 ] 

Duo Zhang commented on HBASE-28491:
---

[~nikitapande] FYI.

> [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025
> 
>
> Key: HBASE-28491
> URL: https://issues.apache.org/jira/browse/HBASE-28491
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28492) [hbase-thirdparty] Bump dependency versions before releasing

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-28492:
-

Assignee: Duo Zhang

> [hbase-thirdparty] Bump dependency versions before releasing
> 
>
> Key: HBASE-28492
> URL: https://issues.apache.org/jira/browse/HBASE-28492
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> Like protobuf, jetty, guava, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28491) [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28491:
--
Component/s: dependencies
 security
 thirdparty

> [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025
> 
>
> Key: HBASE-28491
> URL: https://issues.apache.org/jira/browse/HBASE-28491
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28492) [hbase-thirdparty] Bump dependency versions before releasing

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28492:
--
Component/s: dependencies
 thirdparty

> [hbase-thirdparty] Bump dependency versions before releasing
> 
>
> Key: HBASE-28492
> URL: https://issues.apache.org/jira/browse/HBASE-28492
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, thirdparty
>Reporter: Duo Zhang
>Priority: Major
>
> Like protobuf, jetty, guava, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28492) [hbase-thirdparty] Bump dependency versions before releasing

2024-04-05 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-28492:
-

 Summary: [hbase-thirdparty] Bump dependency versions before 
releasing
 Key: HBASE-28492
 URL: https://issues.apache.org/jira/browse/HBASE-28492
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


Like protobuf, jetty, guava, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28491) Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-28491:
-

 Summary: Bump netty to 4.1.108.Final for addressing CVE-2024-29025
 Key: HBASE-28491
 URL: https://issues.apache.org/jira/browse/HBASE-28491
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28491) [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025

2024-04-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-28491:
--
Summary: [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing 
CVE-2024-29025  (was: Bump netty to 4.1.108.Final for addressing CVE-2024-29025)

> [hbase-thirdparty] Bump netty to 4.1.108.Final for addressing CVE-2024-29025
> 
>
> Key: HBASE-28491
> URL: https://issues.apache.org/jira/browse/HBASE-28491
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28490) [hbase-thirdparty] Release hbase-thirdparty 4.1.7

2024-04-05 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-28490:
-

 Summary: [hbase-thirdparty] Release hbase-thirdparty 4.1.7
 Key: HBASE-28490
 URL: https://issues.apache.org/jira/browse/HBASE-28490
 Project: HBase
  Issue Type: Umbrella
  Components: community, thirdparty
Reporter: Duo Zhang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28463) Time Based Priority for BucketCache

2024-04-05 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834276#comment-17834276
 ] 

Duo Zhang commented on HBASE-28463:
---

Better to send an email with the design doc to the dev list, so others will know 
about this new feature and can help review the design doc?

Thanks.

> Time Based Priority for BucketCache
> ---
>
> Key: HBASE-28463
> URL: https://issues.apache.org/jira/browse/HBASE-28463
> Project: HBase
>  Issue Type: New Feature
>  Components: BucketCache
>Reporter: Janardhan Hungund
>Assignee: Rahul Agarkar
>Priority: Major
>
> This Jira introduces the feature of time-based data tiering in HBase to 
> optimize storage efficiency and access performance by segregating data based 
> on its recency. By keeping recent data in the bucket cache (backed by faster 
> storage types like SSDs) and evicting older data, the system aims to provide 
> a more flexible control over the cache allocation and eviction logic via 
> configuration, allowing for defining time priorities for cached data. 
> The need for a more extensive cache allocation mechanism becomes even more 
> critical on HBase deployments where cache access translates into significant 
> performance gains, such as when using cloud storage as the underlying file 
> system.
> The data is segregated into hot or cold categories based on its age. The 
> recent data within a specific time range (configured as hot-data-age) is 
> treated as hot and is stored in the cache, while the older data is stored and 
> accessed from the file system.
> This feature intends to provide TCO gains by optimizing the utilization of the 
> high-cost bucket cache. It is a perfect fit for use cases with date-based data 
> writes where scans focus on the recently written data.
> Please find the detailed design document of the feature attached with the 
> Jira.
> Thanks,
> Janardhan
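
A rough illustration of the segregation rule described above (purely a sketch;
the configuration key, default, and method names are made up for this example
and are not the feature's actual API):

{noformat}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

final class TimeBasedPrioritySketch {

  // Hypothetical configuration key for the hot-data age window.
  static final String HOT_DATA_AGE_KEY = "hbase.bucketcache.hot.data.age.hours";
  static final long DEFAULT_HOT_DATA_AGE_HOURS = 24;

  private TimeBasedPrioritySketch() {
  }

  // Data younger than the configured hot-data age is treated as hot and kept
  // in the bucket cache; older data is read from the underlying file system.
  static boolean isHot(Configuration conf, long dataTimestampMs, long nowMs) {
    long hotAgeMs = TimeUnit.HOURS
      .toMillis(conf.getLong(HOT_DATA_AGE_KEY, DEFAULT_HOT_DATA_AGE_HOURS));
    return nowMs - dataTimestampMs <= hotAgeMs;
  }
}
{noformat}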



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache9 commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2039644392

   Actually, this fix is not enough.
   
   This only covers the netty dependency introduced transitively by other 
dependencies; in hbase we use the relocated netty from hbase-thirdparty. We need 
to make a new hbase-thirdparty release first.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28488) Avoid expensive allocation in createRegionSpan

2024-04-05 Thread Thibault Deutsch (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibault Deutsch updated HBASE-28488:
-
Attachment: 0001-HBASE-28488-Use-encoded-name-in-region-span-attribut.patch
Status: Patch Available  (was: Open)

> Avoid expensive allocation in createRegionSpan
> --
>
> Key: HBASE-28488
> URL: https://issues.apache.org/jira/browse/HBASE-28488
> Project: HBase
>  Issue Type: Improvement
>  Components: tracing
>Affects Versions: 2.5.0
> Environment: Multiple clusters with:
>  * OpenJDK 11.0.22+7 
>  * HBase 2.5.7
>  * 10-150 RegionServers
>  * 90-95% write requests
>Reporter: Thibault Deutsch
>Priority: Minor
> Attachments: 
> 0001-HBASE-28488-Use-encoded-name-in-region-span-attribut.patch, Screenshot 
> 2024-04-05 at 00.27.11.png
>
>
> On our busy clusters, the alloc profile shows that createRegionSpan() is 
> responsible for 15-20% of all the allocations. These allocations come from 
> getRegionNameAsString().
> getRegionNameAsString() takes the region name and encodes invisible characters 
> in their hex representation. This requires the use of a StringBuilder and 
> thus generates a new string every time.
> This becomes really expensive on a cluster with a high number of requests. We 
> have a patch that replaces the call with getEncodedName() instead. It seems 
> better to just take the encoded region name (the md5 part) and use that in 
> trace attributes, because:
> - it's fixed in size (the full region name can be much longer depending on 
> the rowkey size),
> - it's enough information to link a trace to a region,
> - it doesn't require any new allocation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-28458) BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully cached

2024-04-05 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-28458.
--
Resolution: Fixed

Merged into master, branch-3, branch-2 and branch-2.6. Thanks for reviewing it 
[~zhangduo] [~psomogyi] !

> BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully 
> cached
> ---
>
> Key: HBASE-28458
> URL: https://issues.apache.org/jira/browse/HBASE-28458
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0, 4.0.0-alpha-1, 2.7.0
>
>
> Noticed that 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning was 
> flakey, failing whenever the block eviction happened while prefetch was still 
> ongoing.
> In the test, we pass an instance of BucketCache directly to the cache config, 
> so the test is actually placing both data and meta blocks in the bucket 
> cache. So sometimes, the test calls BucketCache.notifyFileCachingCompleted 
> after it has already evicted two blocks.
> Inside BucketCache.notifyFileCachingCompleted, we iterate through the 
> backingMap entry set, counting the number of blocks for the given file. Then, to 
> consider whether the file is fully cached or not, we do the following 
> validation:
> {noformat}
> if (dataBlockCount == count.getValue() || totalBlockCount == 
> count.getValue()) {
>   LOG.debug("File {} has now been fully cached.", fileName);
>   fileCacheCompleted(fileName, size);
> }  {noformat}
> But the test generates 57 total blocks, 55 data and 2 meta blocks. It evicts 
> two blocks and asserts that the file hasn't been considered fully cached. 
> When these evictions happen while prefetch is still going, we'll pass that 
> check, as the number of blocks for the file in the backingMap would still 
> be 55, which is what we pass as dataBlockCount.
> As BucketCache is intended for storing data blocks only, I believe we should 
> make sure BucketCache.notifyFileCachingCompleted only accounts for data 
> blocks. Also, the 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning should 
> be updated to consistently reproduce the eviction concurrent to the prefetch. 
>  
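
A minimal sketch of the proposed direction (counting only data blocks when
deciding whether a file is fully cached); the key type below is a simplified
stand-in, not the actual BucketCache internals:

{noformat}
import java.util.Map;

final class FullyCachedCheckSketch {

  // Simplified stand-in for a backingMap key: just the file name and whether
  // the cached block is a data block.
  static final class CachedBlockKey {
    final String hfileName;
    final boolean dataBlock;

    CachedBlockKey(String hfileName, boolean dataBlock) {
      this.hfileName = hfileName;
      this.dataBlock = dataBlock;
    }
  }

  private FullyCachedCheckSketch() {
  }

  // Returns true only when every data block of the file is present in the
  // cache, ignoring any meta/index blocks that may also have been cached.
  static boolean isFullyCached(Map<CachedBlockKey, ?> backingMap, String fileName,
    long expectedDataBlockCount) {
    long cachedDataBlocks = backingMap.keySet().stream()
      .filter(key -> key.hfileName.equals(fileName) && key.dataBlock)
      .count();
    return cachedDataBlocks == expectedDataBlockCount;
  }
}
{noformat}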



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28458) BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully cached

2024-04-05 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-28458:
-
Affects Version/s: (was: 2.6.1)

> BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully 
> cached
> ---
>
> Key: HBASE-28458
> URL: https://issues.apache.org/jira/browse/HBASE-28458
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0, 4.0.0-alpha-1, 2.7.0
>
>
> Noticed that 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning was 
> flakey, failing whenever the block eviction happened while prefetch was still 
> ongoing.
> In the test, we pass an instance of BucketCache directly to the cache config, 
> so the test is actually placing both data and meta blocks in the bucket 
> cache. So sometimes, the test calls BucketCache.notifyFileCachingCompleted 
> after it has already evicted two blocks.
> Inside BucketCache.notifyFileCachingCompleted, we iterate through the 
> backingMap entry set, counting the number of blocks for the given file. Then, to 
> consider whether the file is fully cached or not, we do the following 
> validation:
> {noformat}
> if (dataBlockCount == count.getValue() || totalBlockCount == 
> count.getValue()) {
>   LOG.debug("File {} has now been fully cached.", fileName);
>   fileCacheCompleted(fileName, size);
> }  {noformat}
> But the test generates 57 total blocks, 55 data and 2 meta blocks. It evicts 
> two blocks and asserts that the file hasn't been considered fully cached. 
> When these evictions happen while prefetch is still going, we'll pass that 
> check, as the number of blocks for the file in the backingMap would still 
> be 55, which is what we pass as dataBlockCount.
> As BucketCache is intended for storing data blocks only, I believe we should 
> make sure BucketCache.notifyFileCachingCompleted only accounts for data 
> blocks. Also, the 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning should 
> be updated to consistently reproduce the eviction concurrent to the prefetch. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28458) BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully cached

2024-04-05 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-28458:
-
Fix Version/s: 2.6.0
   3.0.0
   2.7.0

> BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully 
> cached
> ---
>
> Key: HBASE-28458
> URL: https://issues.apache.org/jira/browse/HBASE-28458
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0, 2.6.1
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.6.0, 3.0.0, 4.0.0-alpha-1, 2.7.0
>
>
> Noticed that 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning was 
> flakey, failing whenever the block eviction happened while prefetch was still 
> ongoing.
> In the test, we pass an instance of BucketCache directly to the cache config, 
> so the test is actually placing both data and meta blocks in the bucket 
> cache. So sometimes, the test calls BucketCache.notifyFileCachingCompleted 
> after it has already evicted two blocks.
> Inside BucketCache.notifyFileCachingCompleted, we iterate through the 
> backingMap entry set, counting the number of blocks for the given file. Then, to 
> consider whether the file is fully cached or not, we do the following 
> validation:
> {noformat}
> if (dataBlockCount == count.getValue() || totalBlockCount == 
> count.getValue()) {
>   LOG.debug("File {} has now been fully cached.", fileName);
>   fileCacheCompleted(fileName, size);
> }  {noformat}
> But the test generates 57 total blocks, 55 data and 2 meta blocks. It evicts 
> two blocks and asserts that the file hasn't been considered fully cached. 
> When these evictions happen while prefetch is still going, we'll pass that 
> check, as the number of blocks for the file in the backingMap would still 
> be 55, which is what we pass as dataBlockCount.
> As BucketCache is intended for storing data blocks only, I believe we should 
> make sure BucketCache.notifyFileCachingCompleted only accounts for data 
> blocks. Also, the 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning should 
> be updated to consistently reproduce the eviction concurrent to the prefetch. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-28458) BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully cached

2024-04-05 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-28458:
-
Affects Version/s: 2.6.1

> BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully 
> cached
> ---
>
> Key: HBASE-28458
> URL: https://issues.apache.org/jira/browse/HBASE-28458
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 3.0.0-beta-1, 4.0.0-alpha-1, 2.7.0, 2.6.1
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>
> Noticed that 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning was 
> flakey, failing whenever the block eviction happened while prefetch was still 
> ongoing.
> In the test, we pass an instance of BucketCache directly to the cache config, 
> so the test is actually placing both data and meta blocks in the bucket 
> cache. So sometimes, the test calls BucketCache.notifyFileCachingCompleted 
> after it has already evicted two blocks.
> Inside BucketCache.notifyFileCachingCompleted, we iterate through the 
> backingMap entry set, counting the number of blocks for the given file. Then, to 
> consider whether the file is fully cached or not, we do the following 
> validation:
> {noformat}
> if (dataBlockCount == count.getValue() || totalBlockCount == 
> count.getValue()) {
>   LOG.debug("File {} has now been fully cached.", fileName);
>   fileCacheCompleted(fileName, size);
> }  {noformat}
> But the test generates 57 total blocks, 55 data and 2 meta blocks. It evicts 
> two blocks and asserts that the file hasn't been considered fully cached. 
> When these evictions happen while prefetch is still going, we'll pass that 
> check, as the number of blocks for the file in the backingMap would still 
> be 55, which is what we pass as dataBlockCount.
> As BucketCache is intended for storing data blocks only, I believe we should 
> make sure BucketCache.notifyFileCachingCompleted only accounts for data 
> blocks. Also, the 
> TestBucketCachePersister.testPrefetchBlockEvictionWhilePrefetchRunning should 
> be updated to consistently reproduce the eviction concurrent to the prefetch. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28488) Avoid expensive allocation in createRegionSpan

2024-04-05 Thread Thibault Deutsch (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834266#comment-17834266
 ] 

Thibault Deutsch commented on HBASE-28488:
--

I would like to upstream the patch that we have, if people are okay with the 
change.

I couldn't find any contributing guidelines that explain the process followed 
by HBase, so I'm just trying to follow what I have seen done in other Jira 
tickets. Let me know if there is anything I need to do!

> Avoid expensive allocation in createRegionSpan
> --
>
> Key: HBASE-28488
> URL: https://issues.apache.org/jira/browse/HBASE-28488
> Project: HBase
>  Issue Type: Improvement
>  Components: tracing
>Affects Versions: 2.5.0
> Environment: Multiple clusters with:
>  * OpenJDK 11.0.22+7 
>  * HBase 2.5.7
>  * 10-150 RegionServers
>  * 90-95% write requests
>Reporter: Thibault Deutsch
>Priority: Minor
> Attachments: Screenshot 2024-04-05 at 00.27.11.png
>
>
> On our busy clusters, the alloc profile shows that createRegionSpan() is 
> responsible for 15-20% of all the allocations. These allocations come from 
> getRegionNameAsString().
> getRegionNameAsString() takes the region name and encodes invisible characters 
> in their hex representation. This requires the use of a StringBuilder and 
> thus generates a new string every time.
> This becomes really expensive on a cluster with a high number of requests. We 
> have a patch that replaces the call with getEncodedName() instead. It seems 
> better to just take the encoded region name (the md5 part) and use that in 
> trace attributes, because:
> - it's fixed in size (the full region name can be much longer depending on 
> the rowkey size),
> - it's enough information to link a trace to a region,
> - it doesn't require any new allocation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039588020

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  6s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  14m  4s |  hbase-mapreduce in the patch 
passed.  |
   |  |   |  33m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 6801538a4bf3 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/testReport/
 |
   | Max. process+thread count | 2932 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039586613

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 16s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  13m  0s |  hbase-mapreduce in the patch 
passed.  |
   |  |   |  32m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b3f3eefd7889 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/testReport/
 |
   | Max. process+thread count | 2965 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039584697

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 16s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 11s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  13m 55s |  hbase-mapreduce in the patch 
passed.  |
   |  |   |  32m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4a6480b46a98 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Temurin-1.8.0_352-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/testReport/
 |
   | Max. process+thread count | 3168 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-05 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated HBASE-28489:

Description: 
The REST server (and java client) currently does not implement sessions.

While it is not necessary for the REST API to work, implementing sessions would 
be a big improvement in throughput and resource usage.

* It would make load balancing with sticky sessions possible
* It would save the overhead of performing authentication for each request

 The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.

  was:
The REST server (and Java client) currently does not implement sessions.

While it does not seem necessary for the REST API to work, implementing sessions 
would be a big improvement in throughput and resource usage.

* It would make balancing with sticky sessions possible
* It would save the overhead of performing authentication for each call

The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.


> Implement HTTP session support in REST server and client
> 
>
> Key: HBASE-28489
> URL: https://issues.apache.org/jira/browse/HBASE-28489
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The REST server (and Java client) currently does not implement sessions.
> While this is not necessary for the REST API to work, implementing sessions would 
> be a big improvement in throughput and resource usage.
> * It would make load balancing with sticky sessions possible
> * It would save the overhead of performing authentication for each request
>  The gains are particularly big when using SPNEGO:
> * The full SPNEGO handshake can be skipped for subsequent requests
> * When Knox performs SPNEGO authentication for the proxied client, it accesses 
> the identity store each time. When the session is set, this step is only 
> performed on the initial request.
> The same change has resulted in spectacular performance improvements for 
> Phoenix Query Server when implemented in Avatica.
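
As a rough illustration of the idea (a sketch, not the proposed patch): with
cookie-based sessions, the client only needs to hold on to the session cookie
returned by the first authenticated response and replay it on later requests.
The snippet below uses the standard JDK 11 java.net.http.HttpClient; the
endpoint URL is hypothetical and the real HBase REST client wiring may differ.

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestSessionSketch {
  public static void main(String[] args) throws Exception {
    // The CookieManager stores the session cookie (e.g. JSESSIONID) from the
    // first response and sends it with every later request, so the server can
    // skip the per-request SPNEGO handshake.
    HttpClient client = HttpClient.newBuilder()
      .cookieHandler(new CookieManager())
      .build();

    HttpRequest request = HttpRequest.newBuilder(
      URI.create("http://rest-host:8080/version/cluster")) // hypothetical endpoint
      .GET()
      .build();

    // First call authenticates and receives the cookie; the second reuses it.
    client.send(request, HttpResponse.BodyHandlers.ofString());
    client.send(request, HttpResponse.BodyHandlers.ofString());
  }
}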



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27938) Enable PE to load any custom implementation of tests at runtime

2024-04-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HBASE-27938:
---
Labels: pull-request-available  (was: )

> Enable PE to load any custom implementation of tests at runtime
> ---
>
> Key: HBASE-27938
> URL: https://issues.apache.org/jira/browse/HBASE-27938
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Prathyusha
>Assignee: Prathyusha
>Priority: Minor
>  Labels: pull-request-available
>
> Right now, adding any custom PE.Test implementation requires a compile-time 
> dependency on those new test classes in PE. This change enables PE to load 
> any custom implementation of tests at runtime and lets such implementations 
> reuse the PE framework, as sketched below.
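
A minimal, self-contained sketch of the runtime-loading idea (class and method
names here are hypothetical stand-ins, not the actual PerformanceEvaluation
API): resolve the test class by its fully qualified name, verify it implements
the expected contract, and instantiate it reflectively instead of referencing
it at compile time.

public class RuntimeTestLoaderSketch {

  /** Stand-in for the PE test contract. */
  public interface PeTest {
    void run();
  }

  public static PeTest load(String className) throws Exception {
    // Look the class up on the classpath at runtime.
    Class<?> clazz = Class.forName(className);
    if (!PeTest.class.isAssignableFrom(clazz)) {
      throw new IllegalArgumentException(className + " does not implement PeTest");
    }
    // No compile-time reference to the concrete test class is needed.
    return (PeTest) clazz.getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    // e.g. java RuntimeTestLoaderSketch com.example.MyCustomScanTest
    load(args[0]).run();
  }
}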



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-27938 - PE load any custom implementation of tests at runtime [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5307:
URL: https://github.com/apache/hbase/pull/5307#issuecomment-2039577198

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 57s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 59s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 40s |  hbase-mapreduce generated 2 new + 195 
unchanged - 2 fixed = 197 total (was 197)  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hbase-mapreduce: The patch 
generated 2 new + 27 unchanged - 0 fixed = 29 total (was 27)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 21s |  Patch does not cause any 
errors with Hadoop 3.3.6.  |
   | +1 :green_heart: |  spotless  |   0m 48s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   0m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  27m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5307 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 56f080f831f5 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / aea7e7c85c |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/artifact/yetus-general-check/output/diff-compile-javac-hbase-mapreduce.txt
 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/artifact/yetus-general-check/output/diff-checkstyle-hbase-mapreduce.txt
 |
   | Max. process+thread count | 77 (vs. ulimit of 3) |
   | modules | C: hbase-mapreduce U: hbase-mapreduce |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5307/7/console 
|
   | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-28489) Implement HTTP session support in REST server and client

2024-04-05 Thread Istvan Toth (Jira)
Istvan Toth created HBASE-28489:
---

 Summary: Implement HTTP session support in REST server and client
 Key: HBASE-28489
 URL: https://issues.apache.org/jira/browse/HBASE-28489
 Project: HBase
  Issue Type: Improvement
  Components: REST
Reporter: Istvan Toth
Assignee: Istvan Toth


The REST server (and Java client) currently does not implement sessions.

While it does not seem necessary for the REST API to work, implementing sessions 
would be a big improvement in throughput and resource usage.

* It would make load balancing with sticky sessions possible
* It would save the overhead of performing authentication for each call

The gains are particularly big when using SPNEGO:

* The full SPNEGO handshake can be skipped for subsequent requests
* When Knox performs SPNEGO authentication for the proxied client, it accesses 
the identity store each time. When the session is set, this step is only 
performed on the initial request.

The same change has resulted in spectacular performance improvements for 
Phoenix Query Server when implemented in Avatica.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28488) Avoid expensive allocation in createRegionSpan

2024-04-05 Thread Thibault Deutsch (Jira)
Thibault Deutsch created HBASE-28488:


 Summary: Avoid expensive allocation in createRegionSpan
 Key: HBASE-28488
 URL: https://issues.apache.org/jira/browse/HBASE-28488
 Project: HBase
  Issue Type: Improvement
  Components: tracing
Affects Versions: 2.5.0
 Environment: Multiple clusters with:
 * OpenJDK 11.0.22+7 
 * HBase 2.5.7
 * 10-150 RegionServers
 * 90-95% write requests
Reporter: Thibault Deutsch
 Attachments: Screenshot 2024-04-05 at 00.27.11.png

On our busy clusters, the allocation profile shows that createRegionSpan() is 
responsible for 15-20% of all allocations. These allocations come from 
getRegionNameAsString().

getRegionNameAsString() takes the region name and encodes invisible characters 
as their hex representation. This requires a StringBuilder and thus generates 
new strings every time.

This becomes really expensive on a cluster with a high number of requests. We 
have a patch that replaces the call with getEncodedName() instead. It seems 
better to just take the encoded region name (the MD5 part) and use that in 
trace attributes (a minimal sketch follows the list below), because:
- it's fixed in size (the full region name can be much longer depending on the 
rowkey size),
- it's enough information to link a trace to a region,
- it doesn't require any new allocation.
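
A minimal sketch of the substitution being described (the attribute key and the
surrounding tracing code are not shown; the helper class below is only
illustrative, not the actual patch):

import org.apache.hadoop.hbase.client.RegionInfo;

final class RegionSpanAttributeSketch {
  static String regionAttribute(RegionInfo region) {
    // Before (allocation-heavy): region.getRegionNameAsString() re-escapes the
    // full region name into a fresh String on every call.
    // After (cheap): the encoded name is a short, fixed-length identifier that
    // still uniquely links the trace to the region.
    return region.getEncodedName();
  }
}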



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2039547667

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 56s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 311m 35s |  root in the patch passed.  |
   |  |   | 343m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1ad50ed4cbdc 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 
23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 6101bad5a3 |
   | Default Java | Eclipse Adoptium-11.0.17+8 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/testReport/
 |
   | Max. process+thread count | 8651 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


vinayakphegde commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1553348860


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java:
##
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.OptionalLong;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class DataTieringManager {
+  private static final Logger LOG = LoggerFactory.getLogger(DataTieringManager.class);
+  public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type";
+  public static final String DATATIERING_HOT_DATA_AGE_KEY =
+    "hbase.hstore.datatiering.hot.age.millis";
+  public static final DataTieringType DEFAULT_DATATIERING = DataTieringType.NONE;
+  public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 * 1000; // 7 Days
+  private static DataTieringManager instance;
+  private final Map<String, HRegion> onlineRegions;
+
+  private DataTieringManager(Map<String, HRegion> onlineRegions) {
+    this.onlineRegions = onlineRegions;
+  }
+
+  public static synchronized void instantiate(Map<String, HRegion> onlineRegions) {
+    if (instance == null) {
+      instance = new DataTieringManager(onlineRegions);
+      LOG.info("DataTieringManager instantiated successfully.");
+    } else {
+      LOG.warn("DataTieringManager is already instantiated.");
+    }
+  }
+
+  public static synchronized DataTieringManager getInstance() {
+    if (instance == null) {
+      throw new IllegalStateException(
+        "DataTieringManager has not been instantiated. Call instantiate() first.");
+    }
+    return instance;
+  }
+
+  public boolean isDataTieringEnabled(BlockCacheKey key) throws DataTieringException {
+    Path hFilePath = key.getFilePath();
+    if (hFilePath == null) {
+      throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path");
+    }
+    return isDataTieringEnabled(hFilePath);
+  }
+
+  public boolean isDataTieringEnabled(Path hFilePath) throws DataTieringException {
+    Configuration configuration = getConfiguration(hFilePath);
+    DataTieringType dataTieringType = getDataTieringType(configuration);
+    return !dataTieringType.equals(DataTieringType.NONE);
+  }
+
+  public boolean isHotData(BlockCacheKey key) throws DataTieringException {
+    Path hFilePath = key.getFilePath();
+    if (hFilePath == null) {
+      throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path");
+    }
+    return isHotData(hFilePath);
+  }
+
+  public boolean isHotData(Path hFilePath) throws DataTieringException {
+    Configuration configuration = getConfiguration(hFilePath);
+    DataTieringType dataTieringType = getDataTieringType(configuration);
+
+    if (dataTieringType.equals(DataTieringType.TIME_RANGE)) {
+      long hotDataAge = getDataTieringHotDataAge(configuration);
+
+      HStoreFile hStoreFile = getHStoreFile(hFilePath);
+      if (hStoreFile == null) {
+        throw new DataTieringException(
+          "HStoreFile corresponding to " + hFilePath + " doesn't exist");
+      }
+      OptionalLong maxTimestamp = hStoreFile.getMaximumTimestamp();
+      if (!maxTimestamp.isPresent()) {
+        throw new DataTieringException("Maximum timestamp not present for " + hFilePath);
+      }
+
+      long currentTimestamp = EnvironmentEdgeManager.getDelegate().currentTime();
+      long diff = currentTimestamp - maxTimestamp.getAsLong();
+      return diff <= hotDataAge;
+    }
+    return false;

Review Comment:
   Okay, I get your point. Then we can eliminate the isDataTieringEnabled methods, 
right? Since we are considering everything as hot by default.
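   
   To make that concrete, a standalone sketch (not the PR code, with simplified
   names) of the consolidated check: when no tiering is configured the file is
   simply reported as hot, so callers only ever need isHotData().
   
   enum DataTieringType { NONE, TIME_RANGE }
   
   final class HotDataCheckSketch {
     static boolean isHotData(DataTieringType type, long maxTimestampMs, long nowMs,
         long hotAgeMs) {
       if (type != DataTieringType.TIME_RANGE) {
         return true; // tiering disabled: treat every file as hot
       }
       // Tiering enabled: hot only if the newest cell is younger than the threshold.
       return (nowMs - maxTimestampMs) <= hotAgeMs;
     }
   }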



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the

[jira] [Work started] (HBASE-28468) Integration of time-based priority caching logic into cache evictions.

2024-04-05 Thread Janardhan Hungund (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-28468 started by Janardhan Hungund.
-
> Integration of time-based priority caching logic into cache evictions.
> --
>
> Key: HBASE-28468
> URL: https://issues.apache.org/jira/browse/HBASE-28468
> Project: HBase
>  Issue Type: Task
>Reporter: Janardhan Hungund
>Assignee: Janardhan Hungund
>Priority: Major
>
> When time-based priority caching is enabled, the block evictions triggered 
> when the cache is full should use the time-based priority caching framework 
> APIs to detect cold files and evict the blocks of those files first (see the 
> sketch below). This ensures that hot data remains in the cache while cold 
> data is evicted.
> Thanks,
> Janardhan
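
A rough sketch of the eviction ordering described above (helper and type names 
are hypothetical, not the BucketCache internals): given the per-file hot/cold 
answer from the tiering check, cold candidates are freed before hot ones.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

final class ColdFirstEvictionSketch {
  /** Returns the candidates reordered so that cold entries are evicted first. */
  static <T> List<T> coldFirst(List<T> candidates, Predicate<T> isHot) {
    List<T> ordered = new ArrayList<>();
    for (T candidate : candidates) {
      if (!isHot.test(candidate)) {
        ordered.add(candidate); // cold blocks go first
      }
    }
    for (T candidate : candidates) {
      if (isHot.test(candidate)) {
        ordered.add(candidate); // hot blocks only if more space is still needed
      }
    }
    return ordered;
  }
}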



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28468) Integration of time-based priority caching logic into cache evictions.

2024-04-05 Thread Janardhan Hungund (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janardhan Hungund reassigned HBASE-28468:
-

Assignee: Janardhan Hungund

> Integration of time-based priority caching logic into cache evictions.
> --
>
> Key: HBASE-28468
> URL: https://issues.apache.org/jira/browse/HBASE-28468
> Project: HBase
>  Issue Type: Task
>Reporter: Janardhan Hungund
>Assignee: Janardhan Hungund
>Priority: Major
>
> When time-based priority caching is enabled, the block evictions triggered 
> when the cache is full should use the time-based priority caching framework 
> APIs to detect cold files and evict the blocks of those files first. This 
> ensures that hot data remains in the cache while cold data is evicted.
> Thanks,
> Janardhan



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HBASE-28486 fix CVE-2024-29025 in netty package [hbase]

2024-04-05 Thread via GitHub


Apache-HBase commented on PR #5794:
URL: https://github.com/apache/hbase/pull/5794#issuecomment-2039410442

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 13s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  4s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 14s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 11s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 243m 49s |  root in the patch failed.  |
   |  |   | 273m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/5794 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 91ecce2b27f8 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 
23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 6101bad5a3 |
   | Default Java | Eclipse Adoptium-17.0.10+7 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/artifact/yetus-jdk17-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/testReport/
 |
   | Max. process+thread count | 6398 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5794/1/console 
|
   | versions | git=2.34.1 maven=3.8.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HBASE-28465 Implementation of framework for time-based priority bucket-cache [hbase]

2024-04-05 Thread via GitHub


vinayakphegde commented on code in PR #5793:
URL: https://github.com/apache/hbase/pull/5793#discussion_r1553318202


##
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java:
##
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.OptionalLong;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class DataTieringManager {
+  private static final Logger LOG = LoggerFactory.getLogger(DataTieringManager.class);
+  public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type";
+  public static final String DATATIERING_HOT_DATA_AGE_KEY =
+    "hbase.hstore.datatiering.hot.age.millis";
+  public static final DataTieringType DEFAULT_DATATIERING = DataTieringType.NONE;
+  public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 * 1000; // 7 Days
+  private static DataTieringManager instance;
+  private final Map<String, HRegion> onlineRegions;
+
+  private DataTieringManager(Map<String, HRegion> onlineRegions) {
+    this.onlineRegions = onlineRegions;
+  }
+
+  public static synchronized void instantiate(Map<String, HRegion> onlineRegions) {
+    if (instance == null) {
+      instance = new DataTieringManager(onlineRegions);
+      LOG.info("DataTieringManager instantiated successfully.");
+    } else {
+      LOG.warn("DataTieringManager is already instantiated.");
+    }
+  }
+
+  public static synchronized DataTieringManager getInstance() {
+    if (instance == null) {
+      throw new IllegalStateException(
+        "DataTieringManager has not been instantiated. Call instantiate() first.");
+    }
+    return instance;
+  }
+
+  public boolean isDataTieringEnabled(BlockCacheKey key) throws DataTieringException {
+    Path hFilePath = key.getFilePath();
+    if (hFilePath == null) {
+      throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path");
+    }
+    return isDataTieringEnabled(hFilePath);
+  }
+
+  public boolean isDataTieringEnabled(Path hFilePath) throws DataTieringException {
+    Configuration configuration = getConfiguration(hFilePath);
+    DataTieringType dataTieringType = getDataTieringType(configuration);
+    return !dataTieringType.equals(DataTieringType.NONE);
+  }
+
+  public boolean isHotData(BlockCacheKey key) throws DataTieringException {
+    Path hFilePath = key.getFilePath();
+    if (hFilePath == null) {
+      throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path");
+    }
+    return isHotData(hFilePath);
+  }
+
+  public boolean isHotData(Path hFilePath) throws DataTieringException {
+    Configuration configuration = getConfiguration(hFilePath);
+    DataTieringType dataTieringType = getDataTieringType(configuration);
+
+    if (dataTieringType.equals(DataTieringType.TIME_RANGE)) {
+      long hotDataAge = getDataTieringHotDataAge(configuration);
+
+      HStoreFile hStoreFile = getHStoreFile(hFilePath);
+      if (hStoreFile == null) {
+        throw new DataTieringException(
+          "HStoreFile corresponding to " + hFilePath + " doesn't exist");
+      }
+      OptionalLong maxTimestamp = hStoreFile.getMaximumTimestamp();
+      if (!maxTimestamp.isPresent()) {
+        throw new DataTieringException("Maximum timestamp not present for " + hFilePath);
+      }
+
+      long currentTimestamp = EnvironmentEdgeManager.getDelegate().currentTime();
+      long diff = currentTimestamp - maxTimestamp.getAsLong();
+      return diff <= hotDataAge;
+    }
+    return false;

Review Comment:
   Even with this implementation, it will function the same. For example, in 
eviction, we'll prioritize evicting the cold data files first, which are all 
the files that satisfy the condition (isDataTieringEnabled(file) && 
!isHotData(file)). Once I identify these 

Re: [PR] [ADDENDUM] HBASE-28458 BucketCache.notifyFileCachingCompleted may incorrectly consider a file fully cached (#5777) [hbase]

2024-04-05 Thread via GitHub


wchevreuil merged PR #5791:
URL: https://github.com/apache/hbase/pull/5791


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


