[GitHub] [hbase] Apache-HBase commented on pull request #4042: HBASE-26660 delayed FlushRegionEntry should be removed when we need a non-delayed one

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4042:
URL: https://github.com/apache/hbase/pull/4042#issuecomment-1016153074


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m 10s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 49s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   3m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m  2s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 56s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 130m 53s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 42s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 174m 30s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4042 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 47bfaf26d6d8 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-agent/workspace/Base-PreCommit-GitHub-PR_PR-4042/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / a3e7d36f2e |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/3/testReport/
 |
   | Max. process+thread count | 4187 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/3/console
 |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #4031: HBASE-26661 remove deprecated methods in MasterObserver

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4031:
URL: https://github.com/apache/hbase/pull/4031#issuecomment-1016133182


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 21s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 138m 36s |  hbase-server in the patch passed.  
|
   |  |   | 170m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4031/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4031 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 3f98d8b0cb63 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4031/4/testReport/
 |
   | Max. process+thread count | 4424 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4031/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3969: HBASE-26614 Refactor code related to "dump"ing ZK nodes

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #3969:
URL: https://github.com/apache/hbase/pull/3969#issuecomment-1016128660


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 16s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 48s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 46s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 154m 26s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   7m 15s |  hbase-shell in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 12s |  hbase-it in the patch passed.  |
   |  |   | 200m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/9/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3969 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c36c9335984a 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/9/testReport/
 |
   | Max. process+thread count | 3564 (vs. ulimit of 3) |
   | modules | C: hbase-zookeeper hbase-server hbase-shell hbase-it U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/9/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Created] (HBASE-26682) Shakti Packers and Movers

2022-01-18 Thread shaktimovers (Jira)
shaktimovers created HBASE-26682:


 Summary: Shakti Packers and Movers 
 Key: HBASE-26682
 URL: https://issues.apache.org/jira/browse/HBASE-26682
 Project: HBase
  Issue Type: Bug
Reporter: shaktimovers





--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [hbase] Apache9 merged pull request #4040: HBASE-26674 Should modify filesCompacting under storeWriteLock

2022-01-18 Thread GitBox


Apache9 merged pull request #4040:
URL: https://github.com/apache/hbase/pull/4040


   






[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a new HFileBlock whenever a cached block is read. This rough allocation increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches a threshold (the threshold is dynamic, to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer with a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.

The setup was:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

The YCSB workload used the "latest" request distribution.

Client Side Metrics:
See the attachment ClientSideMetrics.png.

Server Side GC:
The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
With the RAMBuffer, the server side had 210 GCs, costing 2.56 minutes in total.

As master and branch-2 use ByteBufferAllocator to manage BucketCache memory 
allocation, the RAMBuffer may not bring as much GC improvement there as on 
branch-1. 
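The admission-and-expiry policy described above (promote a block on its second read, evict after an idle timeout, stop admitting once a size cap is hit rather than LRU-evicting) can be sketched roughly as follows. This is a hypothetical illustration, not the actual HBASE-26681 patch; the class and method names are invented:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the RAMBuffer policy -- not the HBASE-26681 patch.
class RamBufferSketch {
    private static final long TIMEOUT_MS = 60_000; // evict blocks idle for 60s

    private final Map<String, byte[]> hot = new ConcurrentHashMap<>();
    private final Map<String, Long> lastAccess = new ConcurrentHashMap<>();
    private final Map<String, Integer> readCounts = new ConcurrentHashMap<>();
    private final long maxBytes; // admission stops here instead of LRU eviction
    private long currentBytes;

    RamBufferSketch(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Record a read; promote the block into RAM on its second access. */
    synchronized byte[] onRead(String key, byte[] blockFromBucketCache, long nowMs) {
        byte[] cached = hot.get(key);
        if (cached != null) {
            lastAccess.put(key, nowMs);
            return cached; // served from RAM: no fresh on-heap allocation
        }
        int reads = readCounts.merge(key, 1, Integer::sum);
        // "Hot" means read at least twice; admit only while under the size cap.
        if (reads >= 2 && currentBytes + blockFromBucketCache.length <= maxBytes) {
            hot.put(key, blockFromBucketCache);
            lastAccess.put(key, nowMs);
            currentBytes += blockFromBucketCache.length;
        }
        return blockFromBucketCache;
    }

    /** Evict entries idle past the timeout (would run on a background chore). */
    synchronized void expire(long nowMs) {
        lastAccess.entrySet().removeIf(e -> {
            if (nowMs - e.getValue() > TIMEOUT_MS) {
                byte[] removed = hot.remove(e.getKey());
                if (removed != null) {
                    currentBytes -= removed.length;
                }
                readCounts.remove(e.getKey());
                return true;
            }
            return false;
        });
    }

    boolean isHot(String key) {
        return hot.containsKey(key);
    }
}
```

Admission-only capping (as opposed to LRU eviction) is what keeps a scan-heavy workload from churning the buffer: one-off blocks never displace established hot blocks.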


  was:
In branch-1, BucketCache just allocate new onheap bytebuffer to construct new 
HFileBlock when get cached blocks. This rough allocation increases the GC 
pressure for those "hot" blocks. 
Here introduce a RAMBuffer for those "hot" blocks in BucketCache. The thought 
is simple. The RAMBuffer is an timeout expiring cache. When a Multi-level block 
is read twice, we cache it in the RAMBuffer. When the block timeout in the 
cache (e.g. 60s), that means the block is not being accessed in 60s, we evict 
it. Not like LRU, we do not cache block when the whole RAMBuffer size reaches 
to a threshold (to fit different workload, the threshold is dynamic). This will 
prevent the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer with its hit ratio is 100%}
!Hit 100%.png|height=250|width=250!
{panel}

I also did a YCSB performance test. 

The circumstance is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

The operation distribution of YCSB workload is latest.

Client Side Metrics
See the attachment ClientSideMetrics.png

Server Side GC:
The current bucket cache triggered 217 GCs, which costs 2.74 minutes in total.
With RAMBuffer, the server side had 210 times GC and 2.56 minutes in total.

As the master & branch-2 using ByteBufferAllocator to manage the bucketcache 
memory allocation, the RAMBuffer may not have GC improvement as much as branch-1



> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache just allocate new onheap bytebuffer to construct new 
> HFileBlock when get cached blocks. This rough allocation increases the GC 
> pressure for those "hot" blocks. 
> Here introduce a RAMBuffer for those "hot" blocks in BucketCache. The thought 
> is simple. The RAMBuffer is an timeout expiring cache. When a Multi-level 
> block is read twice, we cache it in the RAMBuffer. When the block timeout in 
> the cache (e.g. 60s), that means the block is not being accessed in 60s, we 
> evict it. Not like LRU, we do not cache block when the whole RAMBuffer size 
> reaches to a threshold (to fit different workload, the threshold is dynamic). 
> This will prevent the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer with its hit ratio is 100%}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also did a YCSB performance test. 
> The circumstance is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> The operation distribution of YCSB workload is latest.
> Client Side Metrics
> See the attachment ClientSideMetrics.png
> Server Side GC:
> The current bucket cache triggered 217 GCs, which costs 2.74 minutes in total.
> With RAMBuffer, the server side had 210 times GC and 2.56 minutes in total.

[GitHub] [hbase] Apache-HBase commented on pull request #4024: HBASE-26521 Name RPC spans as `$package.$service/$method`

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4024:
URL: https://github.com/apache/hbase/pull/4024#issuecomment-1016086661


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 30s |  master passed  |
   | +1 :green_heart: |  compile  |   9m 24s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 19s |  master passed  |
   | +1 :green_heart: |  spotbugs  |  16m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |  10m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |  21m 44s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |  16m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 100m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4024 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile xml |
   | uname | Linux 06df152168ea 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 126 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4043: HBASE-26681 Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4043:
URL: https://github.com/apache/hbase/pull/4043#issuecomment-1016079825


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  12m 48s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 51s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 55s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 40s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   3m 22s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 18s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 51s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 44s |  hbase-server: The patch generated 1 
new + 16 unchanged - 0 fixed = 17 total (was 16)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 58s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 164m 13s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 215m 12s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
   |   | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
   |   | hadoop.hbase.TestPartialResultsFromClientSide |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4043/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4043 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 2e27082b906b 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 
11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-4043/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / a3e7d36f2e |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4043/2/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4043/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4043/2/testReport/
 |
   | Max. process+thread count | 4509 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console 

[GitHub] [hbase] Apache-HBase commented on pull request #4031: HBASE-26661 remove deprecated methods in MasterObserver

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4031:
URL: https://github.com/apache/hbase/pull/4031#issuecomment-1016079551


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 10s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 47s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 47s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  21m 59s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 17s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  57m 30s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4031/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4031 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux fd82bd3c35af 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4031/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3969: HBASE-26614 Refactor code related to "dump"ing ZK nodes

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #3969:
URL: https://github.com/apache/hbase/pull/3969#issuecomment-1016065088


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  master passed  |
   | +1 :green_heart: |  compile  |   5m  1s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  1s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 30s |  hbase-zookeeper generated 5 new + 87 
unchanged - 6 fixed = 92 total (was 93)  |
   | +1 :green_heart: |  checkstyle  |   1m 44s |  the patch passed  |
   | +1 :green_heart: |  rubocop  |   0m  9s |  There were no new rubocop 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 21s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |   4m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 49s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/9/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3969 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile rubocop |
   | uname | Linux b3ecddcb788d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/9/artifact/yetus-general-check/output/diff-compile-javac-hbase-zookeeper.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-zookeeper hbase-server hbase-shell hbase-it U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/9/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork commented on a change in pull request #4039: HBASE-26679 Wait on the future returned by FanOutOneBlockAsyncDFSOutp…

2022-01-18 Thread GitBox


comnetwork commented on a change in pull request #4039:
URL: https://github.com/apache/hbase/pull/4039#discussion_r787323310



##
File path: 
hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java
##
@@ -231,7 +231,11 @@ private void completed(Channel channel) {
   // so that the implementation will not burn up our brain as there are 
multiple state changes and
   // checks.
   private synchronized void failed(Channel channel, Supplier<Throwable> errorSupplier) {
-if (state == State.BROKEN || state == State.CLOSED) {
+if (state == State.CLOSED) {
+  return;
+}
+if (state == State.BROKEN) {
+  failWaitingAckQueue(channel, errorSupplier);

Review comment:
   @Apache9, thank you for the suggestion for the test. Assuming dn2 and dn3 are the slow DNs, the simplest way I can think of to simulate them is to discard the message when flushing to dn2 and dn3. It seems hard to simulate a genuinely slow response from dn2 and dn3: that appears to require hacking into the Netty implementation and is more complex, and because the Netty event loop is a single thread, I cannot block inside it to hold back other messages. What is your opinion?
   Also, because the `FanOutOneBlockAsyncDFSOutput` is created by `FanOutOneBlockAsyncDFSOutputHelper.createOutput`, it seems we can only mock `FanOutOneBlockAsyncDFSOutputHelper.createOutput`'s input parameters with Mockito.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26640) Reimplement master location region initialization to better work with SFT

2022-01-18 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17478322#comment-17478322
 ] 

Duo Zhang commented on HBASE-26640:
---

The problem is that we initialize FSTableDescriptors before starting the procedure store, so we have no chance to delete a half-done table descriptor file while loading FSTableDescriptors. FSTableDescriptors itself needs to deal with this problem.

So, if we want to write to the actual table directory directly, then FSTableDescriptors must handle the half-done table descriptor file by itself.

> Reimplement master location region initialization to better work with SFT
> -
>
> Key: HBASE-26640
> URL: https://issues.apache.org/jira/browse/HBASE-26640
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, RegionProcedureStore
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> The master local region is not like a normal region, where we have a TableDescriptor and can therefore store its own SFT implementation. In the current implementation, if we change the global SFT configuration, the SFT implementation of the master local region changes too, which can cause data loss.
> First, I think we could hard-code it to use DefaultSFT. The region is small, so this will not cause much performance impact. Later we can find a way to manage its SFT implementation.
> == Update ==
> The initialization of the master local region depends on renaming, which does not work well on OSS. So we should change that as well. The basic idea is to touch a '.initialized' file to indicate that the region has been initialized. We need to consider how to migrate an existing master local region that does not have this file.
> We could also store the TableDescriptor on the file system, so we can determine whether there has been an SFT change. If so, we should do the migration before actually opening the master local region.
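The '.initialized'-marker idea above can be sketched as follows, using java.nio.file in place of the Hadoop FileSystem API. The file and method names here are illustrative, not the eventual HBase implementation; the key design point is that the marker is touched last, so its presence is the commit point for initialization and no rename is needed.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of marker-file-based initialization for a region directory.
class MasterRegionInit {
  static final String INIT_MARKER = ".initialized";

  // Returns true if initialization work was performed, false if the
  // region directory was already marked as initialized.
  static boolean ensureInitialized(Path regionDir) throws IOException {
    Path marker = regionDir.resolve(INIT_MARKER);
    if (Files.exists(marker)) {
      return false; // already initialized, nothing to do
    }
    Files.createDirectories(regionDir);
    // ... write initial region data here ...
    Files.createFile(marker); // touch the marker last, as the commit point
    return true;
  }
}
```

If a crash happens before the marker is created, the next startup simply redoes the initialization, which is why the marker must be the final step.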



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [hbase] Apache-HBase commented on pull request #4024: HBASE-26521 Name RPC spans as `$package.$service/$method`

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4024:
URL: https://github.com/apache/hbase/pull/4024#issuecomment-1016043994


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 45s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m 36s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 27s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 385m 21s |  root in the patch failed.  |
   |  |   | 431m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4024 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fc8f2e9b439e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/testReport/
 |
   | Max. process+thread count | 4626 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26662) User.createUserForTesting should not reset UserProvider.groups every time if hbase.group.service.for.test.only is true

2022-01-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17478319#comment-17478319
 ] 

Hudson commented on HBASE-26662:


Results for branch branch-2.4
[build #274 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/274/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/274/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/274/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/274/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/274/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> User.createUserForTesting should not reset UserProvider.groups every time if 
> hbase.group.service.for.test.only is true
> --
>
> Key: HBASE-26662
> URL: https://issues.apache.org/jira/browse/HBASE-26662
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.4.9, 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.10
>
>
> When `hbase.group.service.for.test.only` is true, the _if check_ below unnecessarily resets the static var _UserProvider.groups_ to a newly created instance of TestingGroups every time `User.createUserForTesting` is called.
> {noformat}
> if (!(UserProvider.groups instanceof TestingGroups) ||
> conf.getBoolean(TestingGroups.TEST_CONF, false)) {
>   UserProvider.groups = new TestingGroups(UserProvider.groups);
> }
> {noformat}
> For tests creating multiple {_}test users{_}, this causes the latest created user to reset _groups_, so all previously created users would then have to be available on the {_}User.underlyingImplementation{_}, which will not always be the case.
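The reset can be reproduced with a toy, self-contained model of the check. The class and field names below are stand-ins, not the real HBase types, and the boolean `testConf` stands in for `conf.getBoolean(TestingGroups.TEST_CONF, false)`:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the group-reset bug: with testConf=true, the flawed
// condition is true on EVERY call, so `groups` is replaced each time and
// mappings registered on the previous TestingGroups instance are lost.
class UserProviderModel {
  static class Groups { }

  static class TestingGroups extends Groups {
    final Map<String, String> testUserToGroup = new HashMap<>();
  }

  static Groups groups = new Groups();

  static void createUserForTesting(String user, String group, boolean testConf) {
    // Flawed check: the right-hand side of the || makes this true even
    // when groups is already a TestingGroups.
    if (!(groups instanceof TestingGroups) || testConf) {
      groups = new TestingGroups();
    }
    ((TestingGroups) groups).testUserToGroup.put(user, group);
  }
}
```

Creating a second test user with the flag set drops the first user's mapping, which is the symptom the issue describes.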



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [hbase] Apache-HBase commented on pull request #3969: HBASE-26614 Refactor code related to "dump"ing ZK nodes

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #3969:
URL: https://github.com/apache/hbase/pull/3969#issuecomment-1016037283


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 50s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 16s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 45s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 150m 18s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   7m 21s |  hbase-shell in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 14s |  hbase-it in the patch passed.  |
   |  |   | 195m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/8/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3969 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2f7de8e58b45 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/8/testReport/
 |
   | Max. process+thread count | 3881 (vs. ulimit of 3) |
   | modules | C: hbase-zookeeper hbase-server hbase-shell hbase-it U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/8/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (HBASE-26679) Wait on the future returned by FanOutOneBlockAsyncDFSOutput.flush would stuck

2022-01-18 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17478309#comment-17478309
 ] 

Lijin Bin edited comment on HBASE-26679 at 1/19/22, 2:05 AM:
-

Looks like HBASE-26411 is exactly this problem. Nice finding.


was (Author: aoxiang):
Looks like the HBASE-26411 is just the problem. 

> Wait on the future returned by FanOutOneBlockAsyncDFSOutput.flush would stuck
> -
>
> Key: HBASE-26679
> URL: https://issues.apache.org/jira/browse/HBASE-26679
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 3.0.0-alpha-2, 2.4.9
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> Consider three DataNodes: dn1, dn2, and dn3. We write some data to {{FanOutOneBlockAsyncDFSOutput}} and then flush it, so there is one {{Callback}} in {{FanOutOneBlockAsyncDFSOutput.waitingAckQueue}}. If the ack from dn1 arrives first, Netty invokes {{FanOutOneBlockAsyncDFSOutput.completed}} with dn1's channel, and in {{FanOutOneBlockAsyncDFSOutput.completed}} dn1's channel is removed from {{Callback.unfinishedReplicas}}.
> But dn2 and dn3 respond slowly. If dn1 is shut down or hits an exception before dn2 and dn3 send their acks, Netty triggers {{FanOutOneBlockAsyncDFSOutput.failed}} with dn1's channel. Because {{Callback.unfinishedReplicas}} no longer contains dn1's channel, the {{Callback}} is skipped in the {{FanOutOneBlockAsyncDFSOutput.failed}} method (line 250 below), and at line 245 {{FanOutOneBlockAsyncDFSOutput.state}} is set to {{State.BROKEN}}.
> {code:java}
> 233  private synchronized void failed(Channel channel, Supplier<Throwable> errorSupplier) {
> 234    if (state == State.BROKEN || state == State.CLOSED) {
> 235      return;
> 236    }
>
> 244    // disable further write, and fail all pending ack.
> 245    state = State.BROKEN;
> 246    Throwable error = errorSupplier.get();
> 247    for (Iterator<Callback> iter = waitingAckQueue.iterator(); iter.hasNext();) {
> 248      Callback c = iter.next();
> 249      // find the first sync request which we have not acked yet and fail all the requests after it.
> 250      if (!c.unfinishedReplicas.contains(channel.id())) {
> 251        continue;
> 252      }
> 253      for (;;) {
> 254        c.future.completeExceptionally(error);
> 255        if (!iter.hasNext()) {
> 256          break;
> 257        }
> 258        c = iter.next();
> 259      }
> 260      break;
> 261    }
> 262    datanodeInfoMap.keySet().forEach(ChannelOutboundInvoker::close);
> 263  }
> {code}
> At the end of the method, at line 262, the channels to dn1, dn2, and dn3 are all closed, so {{FanOutOneBlockAsyncDFSOutput.failed}} is triggered again for dn2 and dn3. But at line 234, because {{FanOutOneBlockAsyncDFSOutput.state}} is already {{State.BROKEN}}, the whole {{FanOutOneBlockAsyncDFSOutput.failed}} method is skipped. So a wait on the future returned by {{FanOutOneBlockAsyncDFSOutput.flush}} would be stuck forever.
> When we roll the WAL, we create a new {{FanOutOneBlockAsyncDFSOutput}} and a new {{AsyncProtobufLogWriter}}; in {{AsyncProtobufLogWriter.init}} we write the WAL header to the {{FanOutOneBlockAsyncDFSOutput}} and wait for it to complete. If we run into this situation, the roll would be stuck forever.
> I have simulated this case in the PR, and my fix is that even though {{FanOutOneBlockAsyncDFSOutput.state}} is already {{State.BROKEN}}, we still try to complete {{Callback.future}}.
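The hang can be reproduced with a stripped-down model of the buggy `failed` quoted above. The class below is a hypothetical sketch, not the real `FanOutOneBlockAsyncDFSOutput`: once the state is BROKEN, later failures return immediately and the skipped callback's future is never completed, so anyone waiting on the flush future hangs.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Stripped-down model of the buggy failed(): the early return swallows the
// second failure, leaving the skipped callback's future forever pending.
class BuggyOutputModel {
  enum State { STREAMING, BROKEN, CLOSED }

  static final class Callback {
    final CompletableFuture<Long> future = new CompletableFuture<>();
    final Set<String> unfinishedReplicas;
    Callback(Set<String> replicas) {
      this.unfinishedReplicas = replicas;
    }
  }

  State state = State.STREAMING;
  final Deque<Callback> waitingAckQueue = new ArrayDeque<>();

  synchronized void failed(String channelId, Throwable error) {
    if (state == State.BROKEN || state == State.CLOSED) {
      return; // the second failure is swallowed here ("line 234")
    }
    state = State.BROKEN; // "line 245"
    boolean found = false;
    for (Callback c : waitingAckQueue) {
      // dn1 already acked, so the callback is skipped and stays pending
      if (!found && !c.unfinishedReplicas.contains(channelId)) {
        continue;
      }
      found = true;
      c.future.completeExceptionally(error);
    }
  }
}
```

Failing dn1 first (already acked) skips the callback; the subsequent failures for dn2 and dn3 then hit the early return, and the future stays incomplete.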



--
This message was sent by Atlassian Jira
(v8.20.1#820001)



[GitHub] [hbase] Apache-HBase commented on pull request #3969: HBASE-26614 Refactor code related to "dump"ing ZK nodes

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #3969:
URL: https://github.com/apache/hbase/pull/3969#issuecomment-1015973401


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 54s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 46s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 31s |  hbase-zookeeper generated 7 new + 87 
unchanged - 6 fixed = 94 total (was 93)  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  the patch passed  |
   | +1 :green_heart: |  rubocop  |   0m  7s |  There were no new rubocop 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 26s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  58m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/8/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3969 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile rubocop |
   | uname | Linux e9abfca6ea65 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/8/artifact/yetus-general-check/output/diff-compile-javac-hbase-zookeeper.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-zookeeper hbase-server hbase-shell hbase-it U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3969/8/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 rubocop=0.80.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #4024: HBASE-26521 Name RPC spans as `$package.$service/$method`

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4024:
URL: https://github.com/apache/hbase/pull/4024#issuecomment-1015972539


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 57s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m  2s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 35s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 12s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   4m 44s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 15s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   4m 44s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 229m 25s |  root in the patch failed.  |
   |  |   | 279m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4024 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1da755c8313a 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/testReport/ |
   | Max. process+thread count | 3340 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server . U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4046:
URL: https://github.com/apache/hbase/pull/4046#issuecomment-1015966391


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 28s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 34s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 40s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 20s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate ASF License warnings.  |
   |  |   |  54m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4046 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 8495aa38a975 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d4f2b66a43 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 86 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4046:
URL: https://github.com/apache/hbase/pull/4046#issuecomment-1015956110


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 19s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 20s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 51s |  hbase-client in the patch passed.  |
   |  |   |  34m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4046 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a3050a03b127 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d4f2b66a43 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/testReport/ |
   | Max. process+thread count | 362 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4046:
URL: https://github.com/apache/hbase/pull/4046#issuecomment-1015954184


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 32s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 37s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 48s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 27s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 49s |  hbase-client in the patch passed.  |
   |  |   |  31m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4046 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0770705d83ce 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d4f2b66a43 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/testReport/ |
   | Max. process+thread count | 353 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[jira] [Reopened] (HBASE-26474) Implement connection-level attributes

2022-01-18 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reopened HBASE-26474:
--

Reopening as there's still an addendum PR outstanding.

> Implement connection-level attributes
> -
>
> Key: HBASE-26474
> URL: https://issues.apache.org/jira/browse/HBASE-26474
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3
>
>
> Add support for `db.system`, `db.connection_string`, `db.user`.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26474) Implement connection-level attributes

2022-01-18 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-26474:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement connection-level attributes
> -
>
> Key: HBASE-26474
> URL: https://issues.apache.org/jira/browse/HBASE-26474
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3
>
>
> Add support for `db.system`, `db.connection_string`, `db.user`.





[jira] [Updated] (HBASE-26474) Implement connection-level attributes

2022-01-18 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-26474:
-
Fix Version/s: 2.5.0

> Implement connection-level attributes
> -
>
> Key: HBASE-26474
> URL: https://issues.apache.org/jira/browse/HBASE-26474
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3
>
>
> Add support for `db.system`, `db.connection_string`, `db.user`.





[GitHub] [hbase] ndimiduk merged pull request #4045: Backport "HBASE-26474 Implement connection-level attributes (#4014)" to branch-2.5

2022-01-18 Thread GitBox


ndimiduk merged pull request #4045:
URL: https://github.com/apache/hbase/pull/4045


   






[GitHub] [hbase] Apache-HBase commented on pull request #4044: HBASE-26649 Support meta replica LoadBalance mode for RegionLocator#g…

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4044:
URL: https://github.com/apache/hbase/pull/4044#issuecomment-1015943927


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 17s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 20s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 12s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  8s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 41s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 43s |  hbase-client in the patch passed.  |
   | -1 :x: |  unit  | 202m  3s |  hbase-server in the patch failed.  |
   |  |   | 247m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4044 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ec21e9f45497 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/testReport/ |
   | Max. process+thread count | 3206 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4045: Backport "HBASE-26474 Implement connection-level attributes (#4014)" to branch-2.5

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4045:
URL: https://github.com/apache/hbase/pull/4045#issuecomment-1015938875


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   1m 50s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 27s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 10s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 37s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 54s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 159m 14s |  hbase-server in the patch passed.  |
   |  |   | 194m 42s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4045 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a442544e93f0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 5d14589314 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/testReport/ |
   | Max. process+thread count | 3647 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4045: Backport "HBASE-26474 Implement connection-level attributes (#4014)" to branch-2.5

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4045:
URL: https://github.com/apache/hbase/pull/4045#issuecomment-1015935161


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   6m 47s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 31s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   2m  7s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 20s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 21s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 41s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 34s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 144m 10s |  hbase-server in the patch passed.  |
   |  |   | 188m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4045 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a2f113e727c0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 5d14589314 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/testReport/ |
   | Max. process+thread count | 3844 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4044: HBASE-26649 Support meta replica LoadBalance mode for RegionLocator#g…

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4044:
URL: https://github.com/apache/hbase/pull/4044#issuecomment-1015912204


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  1s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 24s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 14s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 32s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 150m 24s |  hbase-server in the patch passed.  |
   |  |   | 189m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4044 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5a1eb0ec4729 4.15.0-161-generic #169-Ubuntu SMP Fri Oct 15 13:41:54 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/testReport/ |
   | Max. process+thread count | 4622 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4024: HBASE-26521 Name RPC spans as `$package.$service/$method`

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4024:
URL: https://github.com/apache/hbase/pull/4024#issuecomment-1015861904


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  master passed  |
   | +1 :green_heart: |  compile  |   9m  0s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m  0s |  master passed  |
   | +1 :green_heart: |  spotbugs  |  13m 58s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 57s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 41s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |  14m 43s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 57s |  The patch does not generate ASF License warnings.  |
   |  |   |  89m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/4024 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile xml |
   | uname | Linux 33535812e2e3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 141 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server . U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4024/2/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #4045: Backport "HBASE-26474 Implement connection-level attributes (#4014)" to branch-2.5

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4045:
URL: https://github.com/apache/hbase/pull/4045#issuecomment-1015856971


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2.5 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   5m 11s |  branch-2.5 passed  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  branch-2.5 passed  |
   | +1 :green_heart: |  spotbugs  |   4m  5s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m 10s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  The patch passed checkstyle 
in hbase-common  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  hbase-client: The patch 
generated 0 new + 36 unchanged - 1 fixed = 36 total (was 37)  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  The patch passed checkstyle 
in hbase-server  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m 20s |  Patch does not cause any 
errors with Hadoop 2.10.0 or 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   4m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4045 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 8864f5be2cc3 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 5d14589314 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4045/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4046:
URL: https://github.com/apache/hbase/pull/4046#issuecomment-1015834370


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 20s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   1m 18s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 32s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javac  |   0m 32s |  hbase-client in the patch failed.  |
   | -1 :x: |  shadedjars  |   3m 38s |  patch has 16 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  |   0m 32s |  hbase-client in the patch failed.  |
   |  |   |  25m 55s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4046 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 33eeb8916f65 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d4f2b66a43 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-client.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-client.txt
 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-client.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/testReport/
 |
   | Max. process+thread count | 349 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4046:
URL: https://github.com/apache/hbase/pull/4046#issuecomment-1015832077


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 51s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   1m  6s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 29s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javac  |   0m 29s |  hbase-client in the patch failed.  |
   | -1 :x: |  shadedjars  |   3m 15s |  patch has 16 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 27s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  |   0m 29s |  hbase-client in the patch failed.  |
   |  |   |  22m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4046 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c87bdd91b0b0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d4f2b66a43 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk8-hadoop2-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk8-hadoop2-check/output/patch-compile-hbase-client.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk8-hadoop2-check/output/patch-compile-hbase-client.txt
 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk8-hadoop2-check/output/patch-shadedjars.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-client.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/testReport/
 |
   | Max. process+thread count | 409 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4046:
URL: https://github.com/apache/hbase/pull/4046#issuecomment-1015830980


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 52s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 58s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   1m  7s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 54s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javac  |   0m 54s |  hbase-client in the patch failed.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  hadoopcheck  |   1m 33s |  The patch causes 16 errors with 
Hadoop v3.1.2.  |
   | -1 :x: |  hadoopcheck  |   3m  7s |  The patch causes 16 errors with 
Hadoop v3.2.1.  |
   | -1 :x: |  spotbugs  |   0m 17s |  hbase-client in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 19s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  21m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4046 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 5625bea71b02 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / d4f2b66a43 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/patch-compile-hbase-client.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/patch-compile-hbase-client.txt
 |
   | hadoopcheck | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/patch-javac-3.1.2.txt
 |
   | hadoopcheck | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/patch-javac-3.2.1.txt
 |
   | spotbugs | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/artifact/yetus-general-check/output/patch-spotbugs-hbase-client.txt
 |
   | Max. process+thread count | 86 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4046/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] wchevreuil commented on a change in pull request #3347: HBASE-25955 Setting NAMESPACES when adding a replication peer doesn't…

2022-01-18 Thread GitBox


wchevreuil commented on a change in pull request #3347:
URL: https://github.com/apache/hbase/pull/3347#discussion_r787146467



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/BaseReplicationEndpoint.java
##
@@ -69,13 +68,14 @@ public void peerConfigUpdated(ReplicationPeerConfig rpc){
   @Override
   public WALEntryFilter getWALEntryfilter() {
  ArrayList<WALEntryFilter> filters = Lists.newArrayList();
-WALEntryFilter scopeFilter = getScopeWALEntryFilter();
-if (scopeFilter != null) {
-  filters.add(scopeFilter);
-}
 WALEntryFilter tableCfFilter = getNamespaceTableCfWALEntryFilter();
 if (tableCfFilter != null) {
   filters.add(tableCfFilter);
+} else {

Review comment:
   Maybe come up with an extra, namespace-only filter, then? 
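
   The filter chain the diff manipulates can be sketched as follows. This is an illustrative stand-in only: the `Filter` interface and the "ns:table" entry representation here are hypothetical simplifications, not HBase's actual `WALEntryFilter` API, and `namespaceOnlyFilter` is the kind of extra filter the reviewer suggests, not existing code.

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.function.Predicate;

   /**
    * Sketch of chaining WAL entry filters, with a namespace-only filter
    * as suggested in the review. Names and types are hypothetical.
    */
   public class WalFilterChainSketch {

     /** A filter accepts or rejects an entry, identified here as "ns:table". */
     interface Filter extends Predicate<String> {}

     /** Accepts entries whose table belongs to one of the given namespaces. */
     static Filter namespaceOnlyFilter(List<String> namespaces) {
       return entry -> namespaces.contains(entry.split(":", 2)[0]);
     }

     /** An entry passes the chain only if every filter accepts it. */
     static boolean accept(List<Filter> chain, String entry) {
       for (Filter f : chain) {
         if (!f.test(entry)) {
           return false;
         }
       }
       return true;
     }

     public static void main(String[] args) {
       List<Filter> chain = new ArrayList<>();
       chain.add(namespaceOnlyFilter(List.of("ns1")));
       System.out.println(accept(chain, "ns1:table1")); // true
       System.out.println(accept(chain, "ns2:table1")); // false
     }
   }
   ```

   The point of the suggestion is that namespace matching becomes its own composable link in the chain instead of being folded into the table-CF filter.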








[GitHub] [hbase] Apache-HBase commented on pull request #4044: HBASE-26649 Support meta replica LoadBalance mode for RegionLocator#g…

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4044:
URL: https://github.com/apache/hbase/pull/4044#issuecomment-1015829633


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 28s |  master passed  |
   | +1 :green_heart: |  compile  |   5m 11s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   4m  2s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m 11s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 23s |  hbase-common: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  21m 44s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |   4m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4044 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux bbb8354af552 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt
 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4044/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] ndimiduk opened a new pull request #4046: Backport "HBASE-26520 Remove use of `db.hbase.namespance` tracing attribute (#4015)" to branch-2

2022-01-18 Thread GitBox


ndimiduk opened a new pull request #4046:
URL: https://github.com/apache/hbase/pull/4046


   The HBase-specific attribute `db.hbase.namespace` has been deprecated in 
favor of the generic
   `db.name`. See also 
https://github.com/open-telemetry/opentelemetry-specification/issues/1760
   
   Signed-off-by: Duo Zhang 
   Signed-off-by: Tak Lon (Stephen) Wu 
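
   The attribute rename described above can be sketched like this. The `Map` merely stands in for a span's attribute set (it is not the OpenTelemetry API), and the value is hypothetical; only the key names come from the PR description.

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   /**
    * Sketch of HBASE-26520: the HBase-specific "db.hbase.namespace" span
    * attribute is replaced by the generic semantic-convention key "db.name".
    */
   public class DbNameAttributeSketch {

     static Map<String, String> spanAttributes(String namespace) {
       Map<String, String> attrs = new LinkedHashMap<>();
       // Before: attrs.put("db.hbase.namespace", namespace);
       attrs.put("db.name", namespace); // after: generic key
       return attrs;
     }

     public static void main(String[] args) {
       System.out.println(spanAttributes("default"));
     }
   }
   ```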






[GitHub] [hbase] ndimiduk commented on pull request #4025: HBASE-26474 Implement connection-level attributes (addendum)

2022-01-18 Thread GitBox


ndimiduk commented on pull request #4025:
URL: https://github.com/apache/hbase/pull/4025#issuecomment-1015812566


   Ping @joshelser was there anything else you wanted here? Thanks.






[GitHub] [hbase] ndimiduk opened a new pull request #4045: Backport "HBASE-26474 Implement connection-level attributes (#4014)" to branch-2.5

2022-01-18 Thread GitBox


ndimiduk opened a new pull request #4045:
URL: https://github.com/apache/hbase/pull/4045


   Add support for `db.system`, `db.connection_string`, `db.user`.
   
   Signed-off-by: Duo Zhang 
   Signed-off-by: Huaxiang Sun 
   Co-authored-by: Josh Elser 
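
   A minimal sketch of the connection-level attributes this backport adds. The keys follow OpenTelemetry database semantic conventions as named in the PR description; the `Map` stands in for a span's attribute set, and the sample values (a ZooKeeper quorum as the connection string) are assumptions, not taken from the patch.

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   /** Sketch of the connection-level span attributes; values are illustrative. */
   public class ConnectionAttributesSketch {

     static Map<String, String> connectionAttributes(String connectionString, String user) {
       Map<String, String> attrs = new LinkedHashMap<>();
       attrs.put("db.system", "hbase");
       attrs.put("db.connection_string", connectionString); // e.g. a ZK quorum
       attrs.put("db.user", user);
       return attrs;
     }

     public static void main(String[] args) {
       System.out.println(connectionAttributes("zk1:2181", "alice"));
     }
   }
   ```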






[jira] [Updated] (HBASE-26474) Implement connection-level attributes

2022-01-18 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-26474:
-
Fix Version/s: 2.6.0

> Implement connection-level attributes
> -
>
> Key: HBASE-26474
> URL: https://issues.apache.org/jira/browse/HBASE-26474
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Add support for `db.system`, `db.connection_string`, `db.user`.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [hbase] ndimiduk merged pull request #4014: Backport "HBASE-26474 Implement connection-level attributes" to branch-2

2022-01-18 Thread GitBox


ndimiduk merged pull request #4014:
URL: https://github.com/apache/hbase/pull/4014


   






[jira] [Commented] (HBASE-26640) Reimplement master location region initialization to better work with SFT

2022-01-18 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17478165#comment-17478165
 ] 

Wellington Chevreuil commented on HBASE-26640:
--

Can't we just change it to write straight into the actual table dir? We always 
delete any pre-existing table dir in CreateTableProcedure, so there wouldn't be 
a problem of leftovers after failed creations.

> Reimplement master location region initialization to better work with SFT
> -
>
> Key: HBASE-26640
> URL: https://issues.apache.org/jira/browse/HBASE-26640
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, RegionProcedureStore
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> It is not like a normal region where we have a TableDescriptor so it can 
> store the SFT implementation of its own. In the current implementation, if we 
> change the global SFT configuration, the SFT implementation of the master 
> local region will be changed and cause data loss.
> First I think we could hard-code it to use DefaultSFT. The region is small 
> and will not cause too much performance impact. Then we could find a way to 
> manage the SFT implementation of it.
> == Update ==
> The initialization of master local region depends on renaming, which cannot 
> work well on OSS. So we should also change it. The basic idea is to touch a 
> '.initialized' file to indicate it is initialized. Need to consider how to 
> migrate from the existing master local region where it does not have this 
> file.
> And we could also store the TableDescriptor on file system, so we can 
> determine whether this is a SFT change. If so, we should do the migration 
> before actually opening the master local region.
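
The '.initialized' marker-file idea from the update above can be sketched like this. It uses `java.nio.file` as a stand-in for Hadoop's `FileSystem` API, and all names are illustrative, not the actual HBase implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Sketch: instead of relying on a rename to signal that the master local
 * region is initialized (renames work poorly on object stores), touch a
 * marker file last and check for it on startup.
 */
public class InitMarkerSketch {

  static final String MARKER = ".initialized";

  static boolean isInitialized(Path regionDir) {
    return Files.exists(regionDir.resolve(MARKER));
  }

  static void markInitialized(Path regionDir) throws IOException {
    Files.createDirectories(regionDir);
    // Touching the marker is the final step, so a crash before this point
    // leaves the directory looking uninitialized and safe to redo.
    Files.createFile(regionDir.resolve(MARKER));
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("master-local-region");
    System.out.println(isInitialized(dir)); // false
    markInitialized(dir);
    System.out.println(isInitialized(dir)); // true
  }
}
```

Migration from an existing region without this file, as the update notes, would need a one-time check that treats a populated directory lacking the marker as already initialized.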





[GitHub] [hbase] huaxiangsun opened a new pull request #4044: HBASE-26649 Support meta replica LoadBalance mode for RegionLocator#g…

2022-01-18 Thread GitBox


huaxiangsun opened a new pull request #4044:
URL: https://github.com/apache/hbase/pull/4044


   …etAllRegionLocations()






[GitHub] [hbase] Apache-HBase commented on pull request #4042: HBASE-26660 delayed FlushRegionEntry should be removed when we need a non-delayed one

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4042:
URL: https://github.com/apache/hbase/pull/4042#issuecomment-1015779045


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m 17s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 48s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   3m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  5s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 35s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   5m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 130m 42s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 184m 39s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.replication.TestReplicationSource |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4042 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux c6136c1c91b4 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-agent/workspace/Base-PreCommit-GitHub-PR_PR-4042/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / a3e7d36f2e |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/2/testReport/
 |
   | Max. process+thread count | 4351 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/2/console
 |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[GitHub] [hbase] ndimiduk commented on a change in pull request #4024: HBASE-26521 Name RPC spans as `$package.$service/$method`

2022-01-18 Thread GitBox


ndimiduk commented on a change in pull request #4024:
URL: https://github.com/apache/hbase/pull/4024#discussion_r787076612



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/IpcClientSpanBuilder.java
##
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static 
org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NET_PEER_NAME;
+import static 
org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NET_PEER_PORT;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.RPC_METHOD;
+import static 
org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.RPC_SERVICE;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.RPC_SYSTEM;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.client.AsyncConnectionImpl;
+import org.apache.hadoop.hbase.net.Address;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.RpcSystem;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.hbase.thirdparty.com.google.protobuf.Descriptors;
+
+/**
+ * Construct {@link Span} instances originating from the client side of an IPC.
+ */
+@InterfaceAudience.Private
+public class IpcClientSpanBuilder implements Supplier<Span> {
+
+  private String name;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  public IpcClientSpanBuilder(
+    final Supplier<String> connectionStringSupplier,
+    final Supplier<User> userSupplier
+  ) {
+// TODO: this constructor is a hack used by AbstractRpcClient because it 
does not have access

Review comment:
   Hmm, yes, I think you're correct; we don't need this and can simplify.

[GitHub] [hbase] joshelser commented on pull request #4040: HBASE-26674 Should modify filesCompacting under storeWriteLock

2022-01-18 Thread GitBox


joshelser commented on pull request #4040:
URL: https://github.com/apache/hbase/pull/4040#issuecomment-1015726100


   > Let me run this locally too, but I suspect you've addressed the root of 
the problem
   
   My `hbase pe --nomapred --rows=100 --presplit=30 randomWrite 16` 
completed without any compaction-related exceptions in the log. Ship it  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #4043: HBASE-26681 Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4043:
URL: https://github.com/apache/hbase/pull/4043#issuecomment-1015646940


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   7m 59s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 28s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 51s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 57s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 38s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   3m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 17s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 52s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 44s |  hbase-server: The patch generated 1 
new + 16 unchanged - 0 fixed = 17 total (was 16)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 20s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   8m  2s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | -1 :x: |  findbugs  |   3m 16s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 157m 15s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 217m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Integral division result cast to double or float in new 
org.apache.hadoop.hbase.io.hfile.bucket.BufferedBucketCache(String, long, int, 
int[], int, int, String, int, Configuration)  At 
BufferedBucketCache.java:double or float in new 
org.apache.hadoop.hbase.io.hfile.bucket.BufferedBucketCache(String, long, int, 
int[], int, int, String, int, Configuration)  At BufferedBucketCache.java:[line 
68] |
   | Failed junit tests | 
hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
   |   | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4043/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4043 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 3c8b22b2c05c 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 
11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-4043/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / a3e7d36f2e |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | checkstyle | 

[GitHub] [hbase] Apache-HBase commented on pull request #4040: HBASE-26674 Should modify filesCompacting under storeWriteLock

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4040:
URL: https://github.com/apache/hbase/pull/4040#issuecomment-1015632812


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   7m 55s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 35s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 58s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 214m  9s |  hbase-server in the patch passed.  
|
   |  |   | 260m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4040/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4040 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8eae4f703278 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4040/1/testReport/
 |
   | Max. process+thread count | 3239 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4040/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] busbey commented on pull request #4041: Update pom.xml

2022-01-18 Thread GitBox


busbey commented on pull request #4041:
URL: https://github.com/apache/hbase/pull/4041#issuecomment-1015627124


   I agree with Duo's evaluation that this wasn't a typo. I also think the 
current phrasing is confusing.
   
   @acgoliyan would you mind rephrasing this as something like "we do not want 
invocations of 'assembly:single' to do anything in this module."?
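   For reference, the conventional way to make `assembly:single` a no-op in a
module is the plugin's `skipAssembly` parameter; a hedged sketch of what such a
pom fragment could look like (this is illustrative, not the actual patch):

```xml
<!-- Hypothetical pom.xml fragment: disable assembly:single for this module.
     skipAssembly is a standard maven-assembly-plugin parameter. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <skipAssembly>true</skipAssembly>
  </configuration>
</plugin>
```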


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] virajjasani commented on pull request #939: HBASE-23349 : Config based Scanner reset after compaction if low refCount is preventing archival of compacted away store files

2022-01-18 Thread GitBox


virajjasani commented on pull request #939:
URL: https://github.com/apache/hbase/pull/939#issuecomment-1015588954


   @Apache9 It's been a really long time; I doubt this work will get more 
attention at this point. We have also not seen this issue after fixing some 
coprocessors. I can close this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #4042: HBASE-26660 delayed FlushRegionEntry should be removed when we need a non-delayed one

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4042:
URL: https://github.com/apache/hbase/pull/4042#issuecomment-1015574344


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   5m  0s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m 14s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 48s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   3m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 49s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 37s |  hbase-server: The patch generated 1 
new + 316 unchanged - 0 fixed = 317 total (was 316)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   5m  5s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   2m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 130m 34s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 179m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4042 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux f9988ab2d375 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-agent/workspace/Base-PreCommit-GitHub-PR_PR-4042/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / a3e7d36f2e |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/1/testReport/
 |
   | Max. process+thread count | 4260 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4042/1/console
 |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about 

[jira] [Commented] (HBASE-26579) Set storage policy of recovered edits when wal storage type is configured

2022-01-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477995#comment-17477995
 ] 

Hudson commented on HBASE-26579:


Results for branch branch-1
[build #196 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Set storage policy of recovered edits  when wal storage type is configured
> --
>
> Key: HBASE-26579
> URL: https://issues.apache.org/jira/browse/HBASE-26579
> Project: HBase
>  Issue Type: Improvement
>  Components: Recovery
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.10
>
>
> In our cluster, we have many SSDs and only a few HDDs. (Most tables are 
> configured with the ONE_SSD storage policy, and all WALs are configured ALL_SSD.)
> When the whole cluster goes down, it is difficult to recover, because HDD disk 
> I/O becomes the bottleneck (almost all disk I/O utilization sits at 100%).
> Most of the HDFS work during recovery is splitting WALs into the recovered 
> edits dir, and then reading them back.
> Recovery goes much better when I stop HBase and set all recovered.edits to ALL_SSD.
> So we can shorten recovery time if we place the recovered.edits dir on 
> better storage, like the WAL.
> For now I reuse the config item hbase.wal.storage.policy to set the 
> recovered.edits storage type, because I did not find a scenario where they 
> would use different storage policies.
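Since the proposal reuses the existing WAL policy knob for recovered edits, a
hedged sketch of the relevant configuration (the property name comes from the
description above; the value shown is just an example):

```xml
<!-- hbase-site.xml: with this change, the recovered.edits dir inherits the
     same storage policy as the WAL. -->
<property>
  <name>hbase.wal.storage.policy</name>
  <value>ALL_SSD</value>
</property>
```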



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-26678) Backport HBASE-26579 to branch-1

2022-01-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477994#comment-17477994
 ] 

Hudson commented on HBASE-26678:


Results for branch branch-1
[build #196 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/196//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Backport HBASE-26579 to branch-1
> 
>
> Key: HBASE-26678
> URL: https://issues.apache.org/jira/browse/HBASE-26678
> Project: HBase
>  Issue Type: Task
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> Our branch-1 cluster also met the storage policy problem in usage. Backport 
> the path to branch-1.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a 
new HFileBlock whenever cached blocks are read. This rough allocation increases 
the GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is 
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is 
read twice, we cache it in the RAMBuffer. When a block times out in the cache 
(e.g. after 60s), meaning it has not been accessed for 60s, we evict it. Unlike 
LRU, we do not admit new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic to fit different workloads). This prevents the 
RAMBuffer from being churned.
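A minimal, hypothetical sketch of this admission/expiry policy follows. Class and
method names are illustrative only (not the patch's actual code), and a fixed
entry cap stands in for the dynamic size threshold:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: blocks are admitted only on their second read, evicted after an
// idle timeout, and rejected outright once a size threshold is reached.
class RamBufferSketch<K, V> {
    private static final class Entry<V> {
        V value;
        long lastAccessNanos;
        Entry(V v, long t) { value = v; lastAccessNanos = t; }
    }

    private final Map<K, Entry<V>> buffer = new HashMap<>();
    private final Map<K, Integer> readCounts = new HashMap<>();
    private final long idleTimeoutNanos;
    private final int maxEntries; // stands in for the dynamic threshold

    RamBufferSketch(long idleTimeoutNanos, int maxEntries) {
        this.idleTimeoutNanos = idleTimeoutNanos;
        this.maxEntries = maxEntries;
    }

    /** Record a read; cache the block once it has been read twice. */
    V onRead(K key, V value, long nowNanos) {
        evictExpired(nowNanos);
        Entry<V> e = buffer.get(key);
        if (e != null) {                    // buffer hit: refresh idle clock
            e.lastAccessNanos = nowNanos;
            return e.value;
        }
        int reads = readCounts.merge(key, 1, Integer::sum);
        // Unlike LRU, do not admit new blocks once the threshold is hit.
        if (reads >= 2 && buffer.size() < maxEntries) {
            buffer.put(key, new Entry<>(value, nowNanos));
        }
        return value;
    }

    private void evictExpired(long nowNanos) {
        buffer.entrySet().removeIf(
            en -> nowNanos - en.getValue().lastAccessNanos >= idleTimeoutNanos);
    }

    boolean cached(K key) { return buffer.containsKey(key); }
}
```

The second-read admission filter keeps one-shot scans from churning the buffer,
which is the same motivation the description gives for avoiding plain LRU.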


{panel:title=The performance of RAMBuffer with its hit ratio is 100%}
!Hit 100%.png|height=250|width=250!
{panel}

I also did a YCSB performance test. 

The circumstance is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

The operation distribution of YCSB workload is latest.

Client Side Metrics
See the attachment ClientSideMetrics.png

Server Side GC:
The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
With RAMBuffer, the server side had 210 GCs, taking 2.56 minutes in total.

Since master and branch-2 use ByteBufferAllocator to manage BucketCache memory 
allocation, the RAMBuffer may not bring as much GC improvement there as on branch-1.


  was:
In branch-1, BucketCache just allocate new onheap bytebuffer to construct new 
HFileBlock when get cached blocks. This rough allocation increases the GC 
pressure for those "hot" blocks. 
Here introduce a RAMBuffer for those "hot" blocks in BucketCache. The thought 
is simple. The RAMBuffer is an timeout expiring cache. When a Multi-level block 
is read twice, we cache it in the RAMBuffer. When the block timeout in the 
cache (e.g. 60s), that means the block is not being accessed in 60s, we evict 
it. Not like LRU, we do not cache block when the whole RAMBuffer size reaches 
to a threshold (to fit different workload, the threshold is dynamic). This will 
prevent the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer with its hit ratio is 100%}
!Hit 100%.png|height=250|width=250!
{panel}

I also did a YCSB performance test. 

The circumstance is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
See the attachment ClientSideMetrics.png

Server Side GC:
The current bucket cache triggered 217 GCs, which costs 2.74 minutes in total.
With RAMBuffer, the server side had 210 times GC and 2.56 minutes in total.

As the master & branch-2 using ByteBufferAllocator to manage the bucketcache 
memory allocation, the RAMBuffer may not have GC improvement as much as branch-1



> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a 
> new HFileBlock whenever cached blocks are read. This rough allocation increases 
> the GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is 
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is 
> read twice, we cache it in the RAMBuffer. When a block times out in the cache 
> (e.g. after 60s), meaning it has not been accessed for 60s, we evict it. Unlike 
> LRU, we do not admit new blocks once the total RAMBuffer size reaches a threshold 
> (the threshold is dynamic to fit different workloads). 
> This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer with its hit ratio is 100%}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also did a YCSB performance test. 
> The circumstance is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> The operation distribution of YCSB workload is latest.
> Client Side Metrics
> See the attachment ClientSideMetrics.png
> Server Side GC:
> The current bucket cache triggered 217 GCs, which costs 2.74 minutes in total.
> With RAMBuffer, the server side had 210 times GC and 2.56 minutes in total.
> As the master & branch-2 using 

[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}
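The admission and eviction policy described above (admit a block only on its second read, evict it once it has been idle past the timeout, and simply stop admitting rather than LRU-evicting once a size threshold is hit) can be sketched roughly as below. This is an illustrative sketch only, not the actual HBase patch; the class and method names are invented, and the size threshold is fixed here whereas the proposal makes it dynamic.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch of a "RAMBuffer"-style cache: a block is admitted on
 * its second read within the expiry window, evicted once idle longer than
 * expiryMs, and not admitted at all when the buffer is at its size threshold.
 */
class RamBufferSketch<K, V> {
  private static final class Entry<V> {
    final V value;
    volatile long lastAccess;
    Entry(V value, long now) { this.value = value; this.lastAccess = now; }
  }

  private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
  private final Map<K, Long> firstReads = new ConcurrentHashMap<>();
  private final long expiryMs;
  private final int maxEntries; // the real proposal sizes this dynamically

  RamBufferSketch(long expiryMs, int maxEntries) {
    this.expiryMs = expiryMs;
    this.maxEntries = maxEntries;
  }

  /** Records a read at time {@code now}; returns the cached value, or null on a miss. */
  V access(K key, V loaded, long now) {
    Entry<V> e = cache.get(key);
    if (e != null) {
      if (now - e.lastAccess > expiryMs) {
        cache.remove(key);        // idle longer than the timeout: evict
      } else {
        e.lastAccess = now;
        return e.value;           // hot block served from the buffer
      }
    }
    Long first = firstReads.remove(key);
    if (first != null && now - first <= expiryMs && cache.size() < maxEntries) {
      cache.put(key, new Entry<>(loaded, now));  // second read: admit
    } else {
      firstReads.put(key, now);   // remember the first read
    }
    return null;
  }

  boolean contains(K key) { return cache.containsKey(key); }
}
```

Not admitting under pressure (instead of churning existing entries, as LRU would) is what keeps the hot working set stable.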

I also ran a YCSB performance test.

The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
See the attachment ClientSideMetrics.png

Server Side GC:
The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
With RAMBuffer, the server side triggered 210 GCs, costing 2.56 minutes in total.

As master & branch-2 use ByteBufferAllocator to manage BucketCache memory
allocation, the RAMBuffer may not bring as much GC improvement as in branch-1.
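As a quick back-of-envelope check, the reported server-side GC numbers work out to roughly a 3.2% reduction in GC count and a 6.6% reduction in total GC time (the class name below is illustrative):

```java
/** Back-of-envelope check of the reported server-side GC numbers. */
class GcDelta {
  static double pctReduction(double before, double after) {
    return 100.0 * (before - after) / before;
  }

  public static void main(String[] args) {
    // 217 GCs / 2.74 min without RAMBuffer vs. 210 GCs / 2.56 min with it
    System.out.printf("GC count reduction: %.1f%%%n", pctReduction(217, 210));
    System.out.printf("GC time reduction:  %.1f%%%n", pctReduction(2.74, 2.56));
  }
}
```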


  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.

The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
See the attachment ClientSideMetrics.png

Server Side GC:
The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
With RAMBuffer, the server side triggered 210 GCs, costing 2.56 minutes in total.





> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> Client Side Metrics
> See the attachment ClientSideMetrics.png
> Server Side GC:
> The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
> With RAMBuffer, the server side triggered 210 GCs, costing 2.56 minutes in total.
> As master & branch-2 use ByteBufferAllocator to manage BucketCache memory
> allocation, the RAMBuffer may not bring as much GC improvement as in branch-1.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.

The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
See the attachment ClientSideMetrics.png

Server Side GC:
The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
With RAMBuffer, the server side triggered 210 GCs, costing 2.56 minutes in total.




  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.

The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
See the attachment ClientSideMetrics.png





> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> Client Side Metrics
> See the attachment ClientSideMetrics.png
> Server Side GC:
> The current bucket cache triggered 217 GCs, costing 2.74 minutes in total.
> With RAMBuffer, the server side triggered 210 GCs, costing 2.56 minutes in total.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.

The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
See the attachment ClientSideMetrics.png




  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
!ClientSideMetrics.png|height=300|width=300!





> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> Client Side Metrics
> See the attachment ClientSideMetrics.png



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
!ClientSideMetrics.png|height=250|width=250!




  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics
!ClientSideMetrics.png|height=250|width=250!

{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> Client Side Metrics
> !ClientSideMetrics.png|height=250|width=250!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
!ClientSideMetrics.png|height=300|width=300!




  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

Client Side Metrics
!ClientSideMetrics.png|height=250|width=250!





> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> Client Side Metrics
> !ClientSideMetrics.png|height=300|width=300!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics
!ClientSideMetrics.png|height=250|width=250!

{panel}



  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics


{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> {panel:title=YCSB Test}
> Client Side Metrics
> !ClientSideMetrics.png|height=250|width=250!
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Attachment: ClientSideMetrics.png

> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: ClientSideMetrics.png, Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
> new HFileBlock when getting cached blocks. This coarse allocation increases the GC
> pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
> simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
> twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
> 60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
> cache a block once the total RAMBuffer size reaches a threshold (the threshold is
> dynamic to fit different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=The performance of RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup is:
> Size of BucketCache: 40 GB
> Target table size: 112 GB
> Properties:
> !Properties.png|height=250|width=250!
> {panel:title=YCSB Test}
> Client Side Metrics
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics


{panel}



  was:
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics

||Metric||BucketCache without RAMBuffer||BucketCache with RAMBuffer||
|[OVERALL] RunTime(ms)|1772005|1699253|
|[OVERALL] Throughput(ops/sec)|2821.6624670923616|2942.4694262714265|
|[TOTAL_GCS_PS_Scavenge] Count|2760|2714|
|[TOTAL_GC_TIME_PS_Scavenge] Time(ms)|17357|17158|
|[TOTAL_GC_TIME_%_PS_Scavenge] Time(%)|0.9795119088264423|1.0097378083193025|
|[TOTAL_GCS_PS_MarkSweep] Count|4|3|
|[TOTAL_GC_TIME_PS_MarkSweep] Time(ms)|217|172|
|[TOTAL_GC_TIME_%_PS_MarkSweep] Time(%)|0.012246015107180848|0.010122094826373705|
|[TOTAL_GCs] Count|2764|2717|
|[TOTAL_GC_TIME] Time(ms)|17574|17330|
|[TOTAL_GC_TIME_%] Time(%)|0.9917579239336233|1.0198599031456763|
|[READ] Operations|251|2499189|
|[READ] AverageLatency(us)|6831.8289292684285|6507.363253039286|
|[READ] MinLatency(us)|175|177|
|[READ] MaxLatency(us)|226431|102783|
|[READ] 95thPercentileLatency(us)|12863|12055|
|[READ] 99thPercentileLatency(us)|17823|16431|
|[READ] Return=OK|251|2499189|
|[CLEANUP] Operations|60|60|
|[CLEANUP] AverageLatency(us)|961.1|1247.81666|
|[CLEANUP] MinLatency(us)|2|2|
|[CLEANUP] MaxLatency(us)|56191|73471|
|[CLEANUP] 95thPercentileLatency(us)|73|72|
|[CLEANUP] 99thPercentileLatency(us)|541|605|
|[SCAN] Operations|249|2500811|
|[SCAN] AverageLatency(us)|14388.572877029152|13850.626164872116|
|[SCAN] MinLatency(us)|320|297|
|[SCAN] MaxLatency(us)|441343|368383|
|[SCAN] 95thPercentileLatency(us)|24751|23791|
|[SCAN] 99thPercentileLatency(us)|32287|30783|
|[SCAN] Return=OK|249|2500811|

{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: 

[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache simply allocates a new on-heap ByteBuffer to construct a
new HFileBlock when getting cached blocks. This coarse allocation increases the GC
pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea is
simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block is read
twice, we cache it in the RAMBuffer. When a block times out in the cache (e.g. after
60s), meaning it has not been accessed for 60s, we evict it. Unlike LRU, we do not
cache a block once the total RAMBuffer size reaches a threshold (the threshold is
dynamic to fit different workloads). This prevents the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also did a YCSB performance test. 
The circumstance is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics

||BucketCache without RAMBuffer||BucketCache with RAMBuffer||
|[OVERALL], RunTime(ms), 1772005
[OVERALL], Throughput(ops/sec), 2821.6624670923616
[TOTAL_GCS_PS_Scavenge], Count, 2760
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17357
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.9795119088264423
[TOTAL_GCS_PS_MarkSweep], Count, 4
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 217
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.012246015107180848
[TOTAL_GCs], Count, 2764
[TOTAL_GC_TIME], Time(ms), 17574
[TOTAL_GC_TIME_%], Time(%), 0.9917579239336233
[READ], Operations, 251
[READ], AverageLatency(us), 6831.8289292684285
[READ], MinLatency(us), 175
[READ], MaxLatency(us), 226431
[READ], 95thPercentileLatency(us), 12863
[READ], 99thPercentileLatency(us), 17823
[READ], Return=OK, 251
[CLEANUP], Operations, 60
[CLEANUP], AverageLatency(us), 961.1
[CLEANUP], MinLatency(us), 2
[CLEANUP], MaxLatency(us), 56191
[CLEANUP], 95thPercentileLatency(us), 73
[CLEANUP], 99thPercentileLatency(us), 541
[SCAN], Operations, 249
[SCAN], AverageLatency(us), 14388.572877029152
[SCAN], MinLatency(us), 320
[SCAN], MaxLatency(us), 441343
[SCAN], 95thPercentileLatency(us), 24751
[SCAN], 99thPercentileLatency(us), 32287
[SCAN], Return=OK, 249 |
|[OVERALL], RunTime(ms), 1699253
[OVERALL], Throughput(ops/sec), 2942.4694262714265
[TOTAL_GCS_PS_Scavenge], Count, 2714
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17158
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 1.0097378083193025
[TOTAL_GCS_PS_MarkSweep], Count, 3
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 172
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.010122094826373705
[TOTAL_GCs], Count, 2717
[TOTAL_GC_TIME], Time(ms), 17330
[TOTAL_GC_TIME_%], Time(%), 1.0198599031456763
[READ], Operations, 2499189
[READ], AverageLatency(us), 6507.363253039286
[READ], MinLatency(us), 177
[READ], MaxLatency(us), 102783
[READ], 95thPercentileLatency(us), 12055
[READ], 99thPercentileLatency(us), 16431
[READ], Return=OK, 2499189
[CLEANUP], Operations, 60
[CLEANUP], AverageLatency(us), 1247.81666
[CLEANUP], MinLatency(us), 2
[CLEANUP], MaxLatency(us), 73471
[CLEANUP], 95thPercentileLatency(us), 72
[CLEANUP], 99thPercentileLatency(us), 605
[SCAN], Operations, 2500811
[SCAN], AverageLatency(us), 13850.626164872116
[SCAN], MinLatency(us), 297
[SCAN], MaxLatency(us), 368383
[SCAN], 95thPercentileLatency(us), 23791
[SCAN], 99thPercentileLatency(us), 30783
[SCAN], Return=OK, 2500811
|

{panel}



  was:
In branch-1, BucketCache just allocate new onheap bytebuffer to construct new 
HFileBlock when get cached blocks. This rough allocation increases the GC 
pressure for those "hot" blocks. 
Here introduce a RAMBuffer for those "hot" blocks in BucketCache. The thought 
is simple. The RAMBuffer is an timeout expiring cache. When a Multi-level block 
is read twice, we cache it in the RAMBuffer. When the block timeout in the 
cache (e.g. 60s), that means the block is not being accessed in 60s, we evict 
it. Not like LRU, we do not cache block when the whole RAMBuffer size reaches 
to a threshold (to fit different workload, the threshold is dynamic). This will 
prevent the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer with its hit ratio is 100%}
!Hit 100%.png|height=250|width=250!
{panel}

I also did a YCSB performance test. 
The circumstance is:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics

||BucketCache without RAMBuffer||BucketCache with RAMBuffer||
|[OVERALL], RunTime(ms), 1772005
[OVERALL], Throughput(ops/sec), 2821.6624670923616
[TOTAL_GCS_PS_Scavenge], Count, 2760
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17357
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.9795119088264423

[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new
HFileBlock every time a cached block is read. This naive allocation increases
GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block
is read twice, we cache it in the RAMBuffer. When a block times out in the
cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it.
Unlike LRU, we simply stop caching new blocks once the total RAMBuffer size
reaches a threshold (the threshold is dynamic to fit different workloads).
This prevents the RAMBuffer from being churned.
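The admission and expiry policy described above can be sketched roughly as
follows. This is a hypothetical illustration, not the actual patch: the class
and method names are invented, and the dynamic size-threshold check from the
description is omitted for brevity. A block is admitted only on its second
read within the expiry window, and an entry idle longer than the window is
treated as evicted.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the RAMBuffer policy: admit a block only on its
// second read within the expiry window; lazily evict entries that have been
// idle longer than the window. Timestamps are passed in explicitly so the
// expiry behaviour is easy to test.
final class ExpiringRamBuffer<K, V> {
    private static final class Entry<V> {
        V value;         // null until the block is actually admitted
        long lastAccess; // logical timestamp of the last touch
        Entry(V value, long now) { this.value = value; this.lastAccess = now; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long expiry; // idle time after which an entry expires

    ExpiringRamBuffer(long expiry) { this.expiry = expiry; }

    /** Returns the cached block, or null on a miss or an expired entry. */
    V get(K key, long now) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (now - e.lastAccess > expiry) { // idle too long: evict lazily
            map.remove(key);
            return null;
        }
        e.lastAccess = now;
        return e.value;
    }

    /** Records a read: the first touch leaves a marker, the second admits. */
    void recordRead(K key, V block, long now) {
        Entry<V> e = map.get(key);
        if (e == null || now - e.lastAccess > expiry) {
            map.put(key, new Entry<>(null, now)); // first touch: marker only
        } else if (e.value == null) {
            e.value = block;                      // second touch: admit
            e.lastAccess = now;
        } else {
            e.lastAccess = now;                   // already cached: refresh
        }
    }
}
```

In the real patch the size threshold would additionally gate `recordRead`, so
that nothing is admitted once the buffer is full; this is what distinguishes
the scheme from LRU-style eviction-on-insert.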


{panel:title=Performance of the RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup:
Size of BucketCache: 40 GB
Target table size: 112 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Client Side Metrics

||BucketCache without RAMBuffer||BucketCache with RAMBuffer||
|[OVERALL], RunTime(ms), 1772005
[OVERALL], Throughput(ops/sec), 2821.6624670923616
[TOTAL_GCS_PS_Scavenge], Count, 2760
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17357
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.9795119088264423
[TOTAL_GCS_PS_MarkSweep], Count, 4
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 217
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.012246015107180848
[TOTAL_GCs], Count, 2764
[TOTAL_GC_TIME], Time(ms), 17574
[TOTAL_GC_TIME_%], Time(%), 0.9917579239336233
[READ], Operations, 251
[READ], AverageLatency(us), 6831.8289292684285
[READ], MinLatency(us), 175
[READ], MaxLatency(us), 226431
[READ], 95thPercentileLatency(us), 12863
[READ], 99thPercentileLatency(us), 17823
[READ], Return=OK, 251
[CLEANUP], Operations, 60
[CLEANUP], AverageLatency(us), 961.1
[CLEANUP], MinLatency(us), 2
[CLEANUP], MaxLatency(us), 56191
[CLEANUP], 95thPercentileLatency(us), 73
[CLEANUP], 99thPercentileLatency(us), 541
[SCAN], Operations, 249
[SCAN], AverageLatency(us), 14388.572877029152
[SCAN], MinLatency(us), 320
[SCAN], MaxLatency(us), 441343
[SCAN], 95thPercentileLatency(us), 24751
[SCAN], 99thPercentileLatency(us), 32287
[SCAN], Return=OK, 249
|[OVERALL], RunTime(ms), 1699253
[OVERALL], Throughput(ops/sec), 2942.4694262714265
[TOTAL_GCS_PS_Scavenge], Count, 2714
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 17158
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 1.0097378083193025
[TOTAL_GCS_PS_MarkSweep], Count, 3
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 172
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.010122094826373705
[TOTAL_GCs], Count, 2717
[TOTAL_GC_TIME], Time(ms), 17330
[TOTAL_GC_TIME_%], Time(%), 1.0198599031456763
[READ], Operations, 2499189
[READ], AverageLatency(us), 6507.363253039286
[READ], MinLatency(us), 177
[READ], MaxLatency(us), 102783
[READ], 95thPercentileLatency(us), 12055
[READ], 99thPercentileLatency(us), 16431
[READ], Return=OK, 2499189
[CLEANUP], Operations, 60
[CLEANUP], AverageLatency(us), 1247.81666
[CLEANUP], MinLatency(us), 2
[CLEANUP], MaxLatency(us), 73471
[CLEANUP], 95thPercentileLatency(us), 72
[CLEANUP], 99thPercentileLatency(us), 605
[SCAN], Operations, 2500811
[SCAN], AverageLatency(us), 13850.626164872116
[SCAN], MinLatency(us), 297
[SCAN], MaxLatency(us), 368383
[SCAN], 95thPercentileLatency(us), 23791
[SCAN], 99thPercentileLatency(us), 30783
[SCAN], Return=OK, 2500811
|

{panel}



  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new
HFileBlock every time a cached block is read. This naive allocation increases
GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block
is read twice, we cache it in the RAMBuffer. When a block times out in the
cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it.
Unlike LRU, we simply stop caching new blocks once the total RAMBuffer size
reaches a threshold (the threshold is dynamic to fit different workloads).
This prevents the RAMBuffer from being churned.


{panel:title=Performance of the RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup:
Size of BucketCache: 40 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Some text with a title
{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: 

[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new
HFileBlock every time a cached block is read. This naive allocation increases
GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block
is read twice, we cache it in the RAMBuffer. When a block times out in the
cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it.
Unlike LRU, we simply stop caching new blocks once the total RAMBuffer size
reaches a threshold (the threshold is dynamic to fit different workloads).
This prevents the RAMBuffer from being churned.


{panel:title=Performance of the RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup:
Size of BucketCache: 40 GB
Properties:
!Properties.png|height=250|width=250!

{panel:title=YCSB Test}
Some text with a title
{panel}



  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new
HFileBlock every time a cached block is read. This naive allocation increases
GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block
is read twice, we cache it in the RAMBuffer. When a block times out in the
cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it.
Unlike LRU, we simply stop caching new blocks once the total RAMBuffer size
reaches a threshold (the threshold is dynamic to fit different workloads).
This prevents the RAMBuffer from being churned.


{panel:title=Performance of the RAMBuffer at a 100% hit ratio}
!Hit 100%.png|height=250|width=250!
{panel}

I also ran a YCSB performance test.
The test setup:
Size of BucketCache: 40 GB
Properties:
!Properties.png!

{panel:title=YCSB Test}
Some text with a title
{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a
> new HFileBlock every time a cached block is read. This naive allocation
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The
> idea is simple: the RAMBuffer is a timeout-expiring cache. When a
> multi-level block is read twice, we cache it in the RAMBuffer. When a block
> times out in the cache (e.g. after 60s), meaning it has not been accessed
> for 60s, we evict it. Unlike LRU, we simply stop caching new blocks once the
> total RAMBuffer size reaches a threshold (the threshold is dynamic to fit
> different workloads). This prevents the RAMBuffer from being churned.
> {panel:title=Performance of the RAMBuffer at a 100% hit ratio}
> !Hit 100%.png|height=250|width=250!
> {panel}
> I also ran a YCSB performance test.
> The test setup:
> Size of BucketCache: 40 GB
> Properties:
> !Properties.png|height=250|width=250!
> {panel:title=YCSB Test}
> Some text with a title
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Commented] (HBASE-26551) Add FastPath feature to HBase RWQueueRpcExecutor

2022-01-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477916#comment-17477916
 ] 

Hudson commented on HBASE-26551:


Results for branch branch-2.5
[build #29 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.5/29/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.5/29/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.5/29/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.5/29/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.5/29/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add FastPath feature to HBase RWQueueRpcExecutor
> 
>
> Key: HBASE-26551
> URL: https://issues.apache.org/jira/browse/HBASE-26551
> Project: HBase
>  Issue Type: Task
>  Components: rpc, Scheduler
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 2.5.0, 1.7.2, 2.6.0, 3.0.0-alpha-3
>
> Attachments: QueueTimeComparison.png, QueueTimeComparisonWithMax.png
>
>
> In ticket [HBASE-17808|https://issues.apache.org/jira/browse/HBASE-17808],
> the author introduced a fastpath implementation for RWQueueRpcExecutor. It
> aggregated three independent RpcExecutors to implement the mechanism. This
> redundancy cost more memory, and by its own performance test it could not
> outperform the original implementation. This time, I directly extended
> RWQueueRpcExecutor to implement the fastpath mechanism. My test results show
> it has better queue-time performance than before.
> YCSB Test:
> Constant Configurations:
> hbase.regionserver.handler.count: 1000
> hbase.ipc.server.callqueue.read.ratio: 0.5
> hbase.ipc.server.callqueue.handler.factor: 0.2
> Test Workload:
> YCSB: 50% Write, 25% Get, 25% Scan. Max Scan length: 1000.
> Client Threads: 100
> ||FastPathRWQueueRpcExecutor||RWQueueRpcExecutor||
> |[OVERALL], RunTime(ms), 909365
> [OVERALL], Throughput(ops/sec), 5498.3422498116815
> [TOTAL_GCS_PS_Scavenge], Count, 1208
> [TOTAL_GC_TIME_PS_Scavenge], Time(ms), 8006
> [TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.8803945610398465
> [TOTAL_GCS_PS_MarkSweep], Count, 2
> [TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 96
> [TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.010556817119638429
> [TOTAL_GCs], Count, 1210
> [TOTAL_GC_TIME], Time(ms), 8102
> [TOTAL_GC_TIME_%], Time(%), 0.8909513781594849
> [READ], Operations, 1248885
> [READ], AverageLatency(us), 14080.154160711354
> [READ], MinLatency(us), 269
> [READ], MaxLatency(us), 180735
> [READ], 95thPercentileLatency(us), 29775
> [READ], 99thPercentileLatency(us), 39391
> [READ], Return=OK, 1248885
> [CLEANUP], Operations, 200
> [CLEANUP], AverageLatency(us), 311.78
> [CLEANUP], MinLatency(us), 1
> [CLEANUP], MaxLatency(us), 59647
> [CLEANUP], 95thPercentileLatency(us), 26
> [CLEANUP], 99thPercentileLatency(us), 173
> [INSERT], Operations, 1251067
> [INSERT], AverageLatency(us), 14235.898240461942
> [INSERT], MinLatency(us), 393
> [INSERT], MaxLatency(us), 204159
> [INSERT], 95thPercentileLatency(us), 29919
> [INSERT], 99thPercentileLatency(us), 39647
> [INSERT], Return=OK, 1251067
> [UPDATE], Operations, 1249582
> [UPDATE], AverageLatency(us), 14166.923049467741
> [UPDATE], MinLatency(us), 321
> [UPDATE], MaxLatency(us), 203647
> [UPDATE], 95thPercentileLatency(us), 29855
> [UPDATE], 99thPercentileLatency(us), 39551
> [UPDATE], Return=OK, 1249582
> [SCAN], Operations, 1250466
> [SCAN], AverageLatency(us), 30056.68854251135
> [SCAN], MinLatency(us), 787
> [SCAN], MaxLatency(us), 509183
> [SCAN], 95thPercentileLatency(us), 57823
> [SCAN], 99thPercentileLatency(us), 74751
> [SCAN], Return=OK, 1250466|[OVERALL], RunTime(ms), 958763
> [OVERALL], Throughput(ops/sec), 5215.053146606617
> [TOTAL_GCS_PS_Scavenge], Count, 1264
> [TOTAL_GC_TIME_PS_Scavenge], Time(ms), 8680
> [TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.9053332262509086
> [TOTAL_GCS_PS_MarkSweep], Count, 1
> [TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 38
> [TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache just allocate new onheap bytebuffer to construct new 
HFileBlock when get cached blocks. This rough allocation increases the GC 
pressure for those "hot" blocks. 
Here introduce a RAMBuffer for those "hot" blocks in BucketCache. The thought 
is simple. The RAMBuffer is an timeout expiring cache. When a Multi-level block 
is read twice, we cache it in the RAMBuffer. When the block timeout in the 
cache (e.g. 60s), that means the block is not being accessed in 60s, we evict 
it. Not like LRU, we do not cache block when the whole RAMBuffer size reaches 
to a threshold (to fit different workload, the threshold is dynamic). This will 
prevent the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer with its hit ratio is 100%}
!Hit 100%.png!
{panel}

I also did a YCSB performance test. 
The circumstance is:
Size of BucketCache: 40 GB
Properties:
!Properties.png!

{panel:title=YCSB Test}
Some text with a title
{panel}



  was:
In branch-1, BucketCache just allocate new onheap bytebuffer to construct new 
HFileBlock when get cached blocks. This rough allocation increases the GC 
pressure for those "hot" blocks. 
Here introduce a RAMBuffer for those "hot" blocks in BucketCache. The thought 
is simple. The RAMBuffer is an timeout expiring cache. When a Multi-level block 
is read twice, we cache it in the RAMBuffer. When the block timeout in the 
cache (e.g. 60s), that means the block is not being accessed in 60s, we evict 
it. Not like LRU, we do not cache block when the whole RAMBuffer size reaches 
to a threshold (to fit different workload, the threshold is dynamic). This will 
prevent the RAMBuffer from being churned.


{panel:title=The performance of RAMBuffer with its hit ratio is 100%}
!Hit 100%.png!
{panel}

I also did a YCSB performance test. 
The circumstance is:
Size of BucketCache: 40 GB

{panel:title=YCSB Test}
Some text with a title
{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> {panel:title=Performance of RAMBuffer at 100% hit ratio}
> !Hit 100%.png!
> {panel}
> I also ran a YCSB performance test. The setup was:
> Size of BucketCache: 40 GB
> Properties:
> !Properties.png!
> {panel:title=YCSB Test}
> Some text with a title
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}

I also ran a YCSB performance test. The setup was:
Size of BucketCache: 40 GB

{panel:title=YCSB Test}
Some text with a title
{panel}



  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}

I also ran a YCSB performance test. The setup was:


{panel:title=YCSB Test}
Some text with a title
{panel}




> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> {panel:title=Performance of RAMBuffer at 100% hit ratio}
> !Hit 100%.png!
> {panel}
> I also ran a YCSB performance test. The setup was:
> Size of BucketCache: 40 GB
> {panel:title=YCSB Test}
> Some text with a title
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Attachment: Properties.png

> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png, Properties.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> {panel:title=Performance of RAMBuffer at 100% hit ratio}
> !Hit 100%.png!
> {panel}
> I also ran a YCSB performance test. The setup was:
> Size of BucketCache: 40 GB
> {panel:title=YCSB Test}
> Some text with a title
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}

I also ran a YCSB performance test. The setup was:


{panel:title=YCSB Test}
Some text with a title
{panel}



  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}



> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> {panel:title=Performance of RAMBuffer at 100% hit ratio}
> !Hit 100%.png!
> {panel}
> I also ran a YCSB performance test. The setup was:
> {panel:title=YCSB Test}
> Some text with a title
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}


  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}



> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> 
> {panel:title=Performance of RAMBuffer at 100% hit ratio}
> !Hit 100%.png!
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.


{panel:title=Performance of RAMBuffer at 100% hit ratio}
!Hit 100%.png!
{panel}


  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

 
{panel:title=My title}
!Hit 100%.png!
{panel}



> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> 
> {panel:title=Performance of RAMBuffer at 100% hit ratio}
> !Hit 100%.png!
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

 
{panel:title=My title}
!Hit 100%.png!
{panel}


  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:
!Hit 100%.png!
 


> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
> ratio. The result is:
>  
> {panel:title=My title}
> !Hit 100%.png!
> {panel}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:
!Hit 100%.png!
 

  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

 


> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
> ratio. The result is:
> !Hit 100%.png!
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Attachment: Hit 100%.png

> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: Hit 100%.png
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
> ratio. The result is:
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

 

  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

 !Screen Shot 2022-01-18 at 22.01.04.png! 


> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
> ratio. The result is:
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Attachment: Screen Shot 2022-01-18 at 22.01.04.png

> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This repeated allocation 
> increases GC pressure for those "hot" blocks.
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out 
> (e.g. after 60s, meaning it has not been accessed for 60s), we evict it. 
> Unlike LRU, we do not cache new blocks once the total RAMBuffer size reaches 
> a threshold (the threshold is dynamic, to fit different workloads). This 
> prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
> ratio. The result is:
> ||BucketCache without RAMBuffer||BucketCache with RAMBuffer||



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

 !Screen Shot 2022-01-18 at 22.01.04.png! 

  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This repeated allocation 
increases GC pressure for those "hot" blocks.
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out (e.g. 
after 60s, meaning it has not been accessed for 60s), we evict it. Unlike LRU, 
we do not cache new blocks once the total RAMBuffer size reaches a threshold 
(the threshold is dynamic, to fit different workloads). This prevents the 
RAMBuffer from being churned.

I first ran a YCSB test to check the performance of RAMBuffer at a 100% hit 
ratio. The result is:

||BucketCache without RAMBuffer||BucketCache with RAMBuffer||



> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This wasteful allocation 
> increases the GC pressure for "hot" blocks. 
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out in 
> the cache (e.g. after 60s), meaning it has not been accessed for 60s, we 
> evict it. Unlike LRU, we simply stop caching new blocks once the total 
> RAMBuffer size reaches a threshold (the threshold is dynamic to fit different 
> workloads). This prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of the RAMBuffer when its 
> hit ratio is 100%. The result is:
>  !Screen Shot 2022-01-18 at 22.01.04.png! 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Attachment: (was: Screen Shot 2022-01-18 at 22.01.04.png)

> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This wasteful allocation 
> increases the GC pressure for "hot" blocks. 
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out in 
> the cache (e.g. after 60s), meaning it has not been accessed for 60s, we 
> evict it. Unlike LRU, we simply stop caching new blocks once the total 
> RAMBuffer size reaches a threshold (the threshold is dynamic to fit different 
> workloads). This prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of the RAMBuffer when its 
> hit ratio is 100%. The result is:
>  !Screen Shot 2022-01-18 at 22.01.04.png! 





[jira] [Updated] (HBASE-26681) Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread Yutong Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yutong Xiao updated HBASE-26681:

Description: 
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This wasteful allocation increases 
the GC pressure for "hot" blocks. 
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out in the 
cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it. 
Unlike LRU, we simply stop caching new blocks once the total RAMBuffer size 
reaches a threshold (the threshold is dynamic to fit different workloads). This 
prevents the RAMBuffer from being churned.

I first ran a YCSB test to check the performance of the RAMBuffer when its hit 
ratio is 100%. The result is:

||BucketCache without RAMBuffer||BucketCache with RAMBuffer||


  was:
In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
HFileBlock every time a cached block is read. This wasteful allocation increases 
the GC pressure for "hot" blocks. 
Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level block 
is read twice, we cache it in the RAMBuffer. When a block times out in the 
cache (e.g. after 60s), meaning it has not been accessed for 60s, we evict it. 
Unlike LRU, we simply stop caching new blocks once the total RAMBuffer size 
reaches a threshold (the threshold is dynamic to fit different workloads). This 
prevents the RAMBuffer from being churned.


> Introduce a little RAMBuffer for bucketcache to reduce gc and improve 
> throughput
> 
>
> Key: HBASE-26681
> URL: https://issues.apache.org/jira/browse/HBASE-26681
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache, Performance
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> In branch-1, BucketCache allocates a new on-heap ByteBuffer to construct a new 
> HFileBlock every time a cached block is read. This wasteful allocation 
> increases the GC pressure for "hot" blocks. 
> Here we introduce a RAMBuffer for those "hot" blocks in BucketCache. The idea 
> is simple: the RAMBuffer is a timeout-expiring cache. When a multi-level 
> block is read twice, we cache it in the RAMBuffer. When a block times out in 
> the cache (e.g. after 60s), meaning it has not been accessed for 60s, we 
> evict it. Unlike LRU, we simply stop caching new blocks once the total 
> RAMBuffer size reaches a threshold (the threshold is dynamic to fit different 
> workloads). This prevents the RAMBuffer from being churned.
> I first ran a YCSB test to check the performance of the RAMBuffer when its 
> hit ratio is 100%. The result is:
> ||BucketCache without RAMBuffer||BucketCache with RAMBuffer||





[GitHub] [hbase] Apache-HBase commented on pull request #4041: Update pom.xml

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4041:
URL: https://github.com/apache/hbase/pull/4041#issuecomment-1015431600


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   6m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 39s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |  21m 36s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  46m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4041 |
   | Optional Tests | dupname asflicense javac hadoopcheck xml compile |
   | uname | Linux 40d7c7356af7 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-http U: hbase-http |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] YutSean opened a new pull request #4043: HBASE-26681 Introduce a little RAMBuffer for bucketcache to reduce gc and improve throughput

2022-01-18 Thread GitBox


YutSean opened a new pull request #4043:
URL: https://github.com/apache/hbase/pull/4043


   https://issues.apache.org/jira/browse/HBASE-26681






[GitHub] [hbase] Apache-HBase commented on pull request #4038: HBASE-26552 Introduce retry to logroller when encounters IOException

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4038:
URL: https://github.com/apache/hbase/pull/4038#issuecomment-1015430358


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 30s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  5s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 211m 58s |  hbase-server in the patch passed.  
|
   |  |   | 245m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4038/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4038 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ecf557c4adeb 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c9bcd87b34 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4038/1/testReport/
 |
   | Max. process+thread count | 3619 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4038/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #4040: HBASE-26674 Should modify filesCompacting under storeWriteLock

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4040:
URL: https://github.com/apache/hbase/pull/4040#issuecomment-1015426166


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 14s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 16s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 29s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  51m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4040/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4040 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux d7579f718965 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 95 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4040/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Updated] (HBASE-26679) Wait on the future returned by FanOutOneBlockAsyncDFSOutput.flush would stuck

2022-01-18 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated HBASE-26679:
-
Description: 
Consider three dataNodes: dn1, dn2, and dn3. We write some data to 
{{FanOutOneBlockAsyncDFSOutput}} and then flush it, so there is one {{Callback}} 
in {{FanOutOneBlockAsyncDFSOutput.waitingAckQueue}}. If the ack from dn1 
arrives first and triggers Netty to invoke 
{{FanOutOneBlockAsyncDFSOutput.completed}} with dn1's channel, then in 
{{FanOutOneBlockAsyncDFSOutput.completed}}, dn1's channel is removed from 
{{Callback.unfinishedReplicas}}. 
But dn2 and dn3 respond slowly. Before dn2 and dn3 send their acks, dn1 is shut 
down or hits an exception, so {{FanOutOneBlockAsyncDFSOutput.failed}} is 
triggered by Netty with dn1's channel. Because 
{{Callback.unfinishedReplicas}} no longer contains dn1's channel, the 
{{Callback}} is skipped in the {{FanOutOneBlockAsyncDFSOutput.failed}} method 
(line 250 below), and at line 245 {{FanOutOneBlockAsyncDFSOutput.state}} is set 
to {{State.BROKEN}}.
{code:java}
233  private synchronized void failed(Channel channel, Supplier<Throwable> errorSupplier) {
234    if (state == State.BROKEN || state == State.CLOSED) {
235      return;
236    }
 
244    // disable further write, and fail all pending ack.
245    state = State.BROKEN;
246    Throwable error = errorSupplier.get();
247    for (Iterator<Callback> iter = waitingAckQueue.iterator(); iter.hasNext();) {
248      Callback c = iter.next();
249      // find the first sync request which we have not acked yet and fail all the request after it.
250      if (!c.unfinishedReplicas.contains(channel.id())) {
251        continue;
252      }
253      for (;;) {
254        c.future.completeExceptionally(error);
255        if (!iter.hasNext()) {
256          break;
257        }
258        c = iter.next();
259      }
260      break;
261    }
262    datanodeInfoMap.keySet().forEach(ChannelOutboundInvoker::close);
263  }
{code}
At the end of the above method, at line 262, dn1, dn2, and dn3 are all closed, 
so {{FanOutOneBlockAsyncDFSOutput.failed}} is triggered again by dn2 and dn3. 
But at line 234, because {{FanOutOneBlockAsyncDFSOutput.state}} is already 
{{State.BROKEN}}, the whole {{FanOutOneBlockAsyncDFSOutput.failed}} method is 
skipped. So a wait on the future returned by 
{{FanOutOneBlockAsyncDFSOutput.flush}} would be stuck forever.

When we roll the WAL, we create a new {{FanOutOneBlockAsyncDFSOutput}} and a 
new {{AsyncProtobufLogWriter}}; in {{AsyncProtobufLogWriter.init}} we write the 
WAL header to {{FanOutOneBlockAsyncDFSOutput}} and wait for it to complete. If 
we run into this situation, the roll would be stuck forever.

I have simulated this case in the PR, and my fix is: even though 
{{FanOutOneBlockAsyncDFSOutput.state}} is already {{State.BROKEN}}, we still 
try to complete {{Callback.future}}
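The deadlock and the fix idea can be condensed into a small standalone sketch. 
All names here ({{AckQueueSketch}}, {{failAllPending}}) are hypothetical and 
the ack bookkeeping is heavily simplified; this is not the real 
{{FanOutOneBlockAsyncDFSOutput}}, only an illustration of "when failed() is 
re-entered while already BROKEN, still fail any pending callbacks instead of 
returning silently".

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Hypothetical, simplified model of the ack queue; not the HBase class.
final class AckQueueSketch {
  enum State { STREAMING, BROKEN }

  static final class Callback {
    final CompletableFuture<Long> future = new CompletableFuture<>();
    final Set<String> unfinishedReplicas = new HashSet<>();
  }

  private State state = State.STREAMING;
  final Deque<Callback> waitingAckQueue = new ArrayDeque<>();

  synchronized void failed(String channelId, Throwable error) {
    if (state == State.BROKEN) {
      // The fix idea: even when already BROKEN, fail whatever is still
      // pending rather than returning silently and leaving futures stuck.
      failAllPending(error);
      return;
    }
    state = State.BROKEN;
    // Original logic: start failing from the first callback that still
    // waits on this channel; a callback that already received this
    // channel's ack is skipped here (which is what left it stuck).
    boolean found = false;
    for (Callback c : waitingAckQueue) {
      if (!found && !c.unfinishedReplicas.contains(channelId)) {
        continue;
      }
      found = true;
      c.future.completeExceptionally(error);
    }
  }

  private void failAllPending(Throwable error) {
    for (Callback c : waitingAckQueue) {
      c.future.completeExceptionally(error); // no-op if already completed
    }
  }

  public static void main(String[] args) {
    AckQueueSketch out = new AckQueueSketch();
    Callback cb = new Callback();
    cb.unfinishedReplicas.add("dn2"); // dn1's ack already arrived
    cb.unfinishedReplicas.add("dn3");
    out.waitingAckQueue.add(cb);
    Throwable boom = new java.io.IOException("datanode down");
    out.failed("dn1", boom);          // dn1 fails first: callback is skipped
    if (cb.future.isDone()) throw new AssertionError("unexpectedly completed");
    out.failed("dn2", boom);          // re-entry while BROKEN: fix kicks in
    if (!cb.future.isCompletedExceptionally()) throw new AssertionError("stuck");
    System.out.println("ok");
  }
}
```

Without the first-branch call to {{failAllPending}}, the second {{failed}} 
invocation would return immediately and the caller blocked on 
{{cb.future.get()}} would never wake up, which mirrors the stuck flush 
described above.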

  was:
Consider there are three dataNodes: dn1,dn2,and dn3, and we write some data to 
{{FanOutOneBlockAsyncDFSOutput}} and then flush it, there are one {{Callback}} 
in {{FanOutOneBlockAsyncDFSOutput.waitingAckQueue}}.  If the ack from dn1 
arrives firstly and triggers Netty to invoke 
{{FanOutOneBlockAsyncDFSOutput.completed}} with dn1's channel, then in 
{{FanOutOneBlockAsyncDFSOutput.completed}}, dn1's channel is removed from 
{{Callback.unfinishedReplicas}}. 
But dn2 and dn3 respond slowly, before dn2 and dn3 sending ack , dn1 is shut 
down or have a exception, so {{FanOutOneBlockAsyncDFSOutput.failed}} is 
triggered by Netty with dn1's channel, and because the 
{{Callback.unfinishedReplicas}} does not contain dn1's channel,the {{Callback}} 
is skipped in {{FanOutOneBlockAsyncDFSOutput.failed}} method, just as following 
line250, and at line 245, {{FanOutOneBlockAsyncDFSOutput.state}} is set to 
{{State.BROKEN}}.
{code:java}
233  private synchronized void failed(Channel channel, Supplier 
errorSupplier) {
234 if (state == State.BROKEN || state == State.CLOSED) {
235 return;
236  }
 
244// disable further write, and fail all pending ack.
245state = State.BROKEN;
246Throwable error = errorSupplier.get();
247for (Iterator iter = waitingAckQueue.iterator(); 
iter.hasNext();) {
248  Callback c = iter.next();
249  // find the first sync request which we have not acked yet and fail 
all the request after it.
250  if (!c.unfinishedReplicas.contains(channel.id())) {
251continue;
252  }
253  for (;;) {
254c.future.completeExceptionally(error);
255if (!iter.hasNext()) {
256  break;
257}
258c = iter.next();
259  }
260break;
261}
262   datanodeInfoMap.keySet().forEach(ChannelOutboundInvoker::close);
263  }
{code}
At the end of above method in line 262, dn1,dn2 and dn3 are all closed, so the 
{{FanOutOneBlockAsyncDFSOutput.failed}} is triggered again by dn2 and dn3, but 
at the above line 

[GitHub] [hbase] Apache-HBase commented on pull request #4041: Update pom.xml

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4041:
URL: https://github.com/apache/hbase/pull/4041#issuecomment-1015420391


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 21s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  2s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 55s |  hbase-http in the patch passed.  |
   |  |   |  33m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4041 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5a335c6a9edd 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 
11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/testReport/
 |
   | Max. process+thread count | 352 (vs. ulimit of 3) |
   | modules | C: hbase-http U: hbase-http |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HBASE-26679) Wait on the future returned by FanOutOneBlockAsyncDFSOutput.flush would stuck

2022-01-18 Thread chenglei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477864#comment-17477864
 ] 

chenglei commented on HBASE-26679:
--

??But looking at the code, I do not think it can only be reproduced by the 
above scenario, as long as a DN responds faster than others and then fails, we 
can run into this situation and cause some future to be stuck forever.??
 Yes, I just took this scenario to illustrate the problem.

> Wait on the future returned by FanOutOneBlockAsyncDFSOutput.flush would stuck
> -
>
> Key: HBASE-26679
> URL: https://issues.apache.org/jira/browse/HBASE-26679
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 3.0.0-alpha-2, 2.4.9
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
> Consider three dataNodes: dn1, dn2, and dn3. We write some data to 
> {{FanOutOneBlockAsyncDFSOutput}} and then flush it, so there is one 
> {{Callback}} in {{FanOutOneBlockAsyncDFSOutput.waitingAckQueue}}. If the ack 
> from dn1 arrives first and triggers Netty to invoke 
> {{FanOutOneBlockAsyncDFSOutput.completed}} with dn1's channel, then in 
> {{FanOutOneBlockAsyncDFSOutput.completed}}, dn1's channel is removed from 
> {{Callback.unfinishedReplicas}}. 
> But dn2 and dn3 respond slowly. Before dn2 and dn3 send their acks, dn1 is 
> shut down or hits an exception, so {{FanOutOneBlockAsyncDFSOutput.failed}} is 
> triggered by Netty with dn1's channel. Because 
> {{Callback.unfinishedReplicas}} no longer contains dn1's channel, the 
> {{Callback}} is skipped in the {{FanOutOneBlockAsyncDFSOutput.failed}} 
> method (line 250 below), and at line 245 
> {{FanOutOneBlockAsyncDFSOutput.state}} is set to {{State.BROKEN}}.
> {code:java}
> 233  private synchronized void failed(Channel channel, Supplier<Throwable> errorSupplier) {
> 234    if (state == State.BROKEN || state == State.CLOSED) {
> 235      return;
> 236    }
>  
> 244    // disable further write, and fail all pending ack.
> 245    state = State.BROKEN;
> 246    Throwable error = errorSupplier.get();
> 247    for (Iterator<Callback> iter = waitingAckQueue.iterator(); iter.hasNext();) {
> 248      Callback c = iter.next();
> 249      // find the first sync request which we have not acked yet and fail all the request after it.
> 250      if (!c.unfinishedReplicas.contains(channel.id())) {
> 251        continue;
> 252      }
> 253      for (;;) {
> 254        c.future.completeExceptionally(error);
> 255        if (!iter.hasNext()) {
> 256          break;
> 257        }
> 258        c = iter.next();
> 259      }
> 260      break;
> 261    }
> 262    datanodeInfoMap.keySet().forEach(ChannelOutboundInvoker::close);
> 263  }
> {code}
> At the end of the above method, at line 262, dn1, dn2, and dn3 are all 
> closed, so {{FanOutOneBlockAsyncDFSOutput.failed}} is triggered again by dn2 
> and dn3. But at line 234, because {{FanOutOneBlockAsyncDFSOutput.state}} is 
> already {{State.BROKEN}}, the whole {{FanOutOneBlockAsyncDFSOutput.failed}} 
> method is skipped. So a wait on the future returned by 
> {{FanOutOneBlockAsyncDFSOutput.flush}} would be stuck forever.
> When we roll the WAL, we create a new {{FanOutOneBlockAsyncDFSOutput}} and a 
> new {{AsyncProtobufLogWriter}}; in {{AsyncProtobufLogWriter.init}} we write 
> the WAL header to {{FanOutOneBlockAsyncDFSOutput}} and wait for it to 
> complete. If we run into this situation, the roll would be stuck forever.
> I have simulated this case in the PR, and my fix is: even though 
> {{FanOutOneBlockAsyncDFSOutput.state}} is already {{State.BROKEN}}, we still 
> try to complete {{Callback.future}}





[GitHub] [hbase] Apache-HBase commented on pull request #4041: Update pom.xml

2022-01-18 Thread GitBox


Apache-HBase commented on pull request #4041:
URL: https://github.com/apache/hbase/pull/4041#issuecomment-1015417111


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 15s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 27s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 21s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 22s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 49s |  hbase-http in the patch passed.  |
   |  |   |  28m 55s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4041 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux d61bff2704ee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4a94cfccc9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/testReport/
 |
   | Max. process+thread count | 360 (vs. ulimit of 3) |
   | modules | C: hbase-http U: hbase-http |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-4041/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Comment Edited] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2022-01-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477853#comment-17477853
 ] 

Viraj Jasani edited comment on HBASE-20503 at 1/18/22, 1:28 PM:


Upgrading HBase 1.6.0 to 2.4.6/7, we are facing a similar issue:
{code:java}
2022-01-18 12:45:59,051 WARN  [Close-WAL-Writer-4] wal.AbstractProtobufLogWriter - Failed to write trailer, non-fatal, continuing...
java.io.IOException: stream already broken
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:420)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:509)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.lambda$writeWALTrailerAndMagic$3(AsyncProtobufLogWriter.java:231)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(AsyncProtobufLogWriter.java:187)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeWALTrailerAndMagic(AsyncProtobufLogWriter.java:222)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:261)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.close(AsyncProtobufLogWriter.java:157)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.lambda$closeWriter$5(AsyncFSWAL.java:698)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
{code:java}
2022-01-18 12:45:59,051 WARN  [Close-WAL-Writer-4] wal.AsyncProtobufLogWriter - normal close failed, try recover
java.lang.IllegalStateException: should call flush first before calling close
    at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkState(Preconditions.java:510)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.endBlock(FanOutOneBlockAsyncDFSOutput.java:514)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.close(FanOutOneBlockAsyncDFSOutput.java:565)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.close(AsyncProtobufLogWriter.java:158)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.lambda$closeWriter$5(AsyncFSWAL.java:698)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
{code:java}
2022-01-18 12:46:14,054 WARN  [hbase] wal.AsyncFSWAL - sync failed
java.io.IOException: Timeout(15000ms) waiting for response
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput$AckHandler.lambda$userEventTriggered$4(FanOutOneBlockAsyncDFSOutput.java:300)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.failed(FanOutOneBlockAsyncDFSOutput.java:233)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.access$300(FanOutOneBlockAsyncDFSOutput.java:98)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput$AckHandler.userEventTriggered(FanOutOneBlockAsyncDFSOutput.java:299)
{code}
Clients are no longer able to continue ingestion due to write failures; this 
happens only after the masters are upgraded to 2.4.x. The cluster is running 
Hadoop 2.10.1, and no datanodes or namenodes were restarted.

Two frequent WARN logs are observed once AsyncFSWAL comes into the picture 
after the HMaster upgrade:
{code:java}
2022-01-18 12:41:58,411 WARN  [ype=LAST_IN_PIPELINE] datanode.DataNode - IOException in BlockReceiver.run(): 
java.io.IOException: Shutting down writer and responder due to a checksum error in received data. The error response has been sent upstream.
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1647)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
    at java.lang.Thread.run(Thread.java:748)
{code}
{code:java}
2022-01-18 11:45:25,779 WARN  [t/disk1/hdfs/current] impl.FsDatasetAsyncDiskService - sync_file_range error
EBADF: Bad file descriptor
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.sync_file_range(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.syncFileRangeIfPossible(NativeIO.java:287)
    at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.syncFileRange(FileIoProvider.java:189)
    at 

[jira] [Resolved] (HBASE-26662) User.createUserForTesting should not reset UserProvider.groups every time if hbase.group.service.for.test.only is true

2022-01-18 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-26662.
--
Resolution: Fixed

Thanks for reviewing it, [~elserj], [~zhangduo]!

> User.createUserForTesting should not reset UserProvider.groups every time if 
> hbase.group.service.for.test.only is true
> --
>
> Key: HBASE-26662
> URL: https://issues.apache.org/jira/browse/HBASE-26662
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.4.9, 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.10
>
>
> When _hbase.group.service.for.test.only_ is true, the _if check_ below 
> unnecessarily resets the static var _UserProvider.groups_ to a newly created 
> TestingGroups instance every time `User.createUserForTesting` is called.
> {noformat}
> if (!(UserProvider.groups instanceof TestingGroups) ||
> conf.getBoolean(TestingGroups.TEST_CONF, false)) {
>   UserProvider.groups = new TestingGroups(UserProvider.groups);
> }
> {noformat}
> For tests that create multiple {_}test users{_}, this causes the latest 
> created user to reset _groups_, so all previously created users would then 
> have to be available on the {_}User.underlyingImplementation{_}, which will 
> not always be the case.
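A minimal, self-contained sketch of the guard the description calls for: wrap 
_UserProvider.groups_ only when it is not already a TestingGroups instance, so 
repeated calls to `createUserForTesting` reuse the existing wrapper instead of 
re-wrapping it. The `Groups`/`TestingGroups` classes below are simplified 
stand-ins for the HBase internals, not the committed HBASE-26662 patch:

```java
// Simplified stand-ins for the HBase classes; hypothetical illustration only.
class Groups {
}

class TestingGroups extends Groups {
    final Groups underlying;

    TestingGroups(Groups underlying) {
        this.underlying = underlying;
    }
}

public class Main {
    static Groups groups = new Groups();

    // Wrap only once: if groups is already a TestingGroups, keep it,
    // so test users registered on it earlier remain visible.
    static void createUserForTesting() {
        if (!(groups instanceof TestingGroups)) {
            groups = new TestingGroups(groups);
        }
    }

    public static void main(String[] args) {
        createUserForTesting();
        Groups first = groups;
        createUserForTesting(); // a second call must not re-wrap
        if (first != groups) {
            throw new AssertionError("groups was reset on the second call");
        }
        System.out.println("ok");
    }
}
```

Running `Main` prints `ok`: the second call leaves the wrapper untouched, 
which is exactly the behavior the buggy conf-based check defeats.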



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2022-01-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477853#comment-17477853
 ] 

Viraj Jasani commented on HBASE-20503:
--

While upgrading HBase 1.6.0 to 2.4.6/7, we are facing a similar issue:
{code:java}
2022-01-18 12:45:59,051 WARN  [Close-WAL-Writer-4] wal.AbstractProtobufLogWriter - Failed to write trailer, non-fatal, continuing...
java.io.IOException: stream already broken
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:420)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:509)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.lambda$writeWALTrailerAndMagic$3(AsyncProtobufLogWriter.java:231)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(AsyncProtobufLogWriter.java:187)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeWALTrailerAndMagic(AsyncProtobufLogWriter.java:222)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:261)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.close(AsyncProtobufLogWriter.java:157)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.lambda$closeWriter$5(AsyncFSWAL.java:698)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
{code:java}
2022-01-18 12:45:59,051 WARN  [Close-WAL-Writer-4] wal.AsyncProtobufLogWriter - normal close failed, try recover
java.lang.IllegalStateException: should call flush first before calling close
    at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkState(Preconditions.java:510)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.endBlock(FanOutOneBlockAsyncDFSOutput.java:514)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.close(FanOutOneBlockAsyncDFSOutput.java:565)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.close(AsyncProtobufLogWriter.java:158)
    at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.lambda$closeWriter$5(AsyncFSWAL.java:698)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
{code:java}
2022-01-18 12:46:14,054 WARN  [hbase] wal.AsyncFSWAL - sync failed
java.io.IOException: Timeout(15000ms) waiting for response
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput$AckHandler.lambda$userEventTriggered$4(FanOutOneBlockAsyncDFSOutput.java:300)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.failed(FanOutOneBlockAsyncDFSOutput.java:233)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.access$300(FanOutOneBlockAsyncDFSOutput.java:98)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput$AckHandler.userEventTriggered(FanOutOneBlockAsyncDFSOutput.java:299)
{code}
Clients are no longer able to continue ingestion due to write failures; this 
happens only after the masters are upgraded to 2.4.x. The cluster is running 
Hadoop 2.10.1, and no datanodes or namenodes were restarted.

Two frequent WARN logs are observed once AsyncFSWAL comes into the picture 
after the HMaster upgrade:
{code:java}
2022-01-18 12:41:58,411 WARN  [ype=LAST_IN_PIPELINE] datanode.DataNode - IOException in BlockReceiver.run(): 
java.io.IOException: Shutting down writer and responder due to a checksum error in received data. The error response has been sent upstream.
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1647)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
    at java.lang.Thread.run(Thread.java:748)
{code}
{code:java}
2022-01-18 11:45:25,779 WARN  [t/disk1/hdfs/current] impl.FsDatasetAsyncDiskService - sync_file_range error
EBADF: Bad file descriptor
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.sync_file_range(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.syncFileRangeIfPossible(NativeIO.java:287)
    at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.syncFileRange(FileIoProvider.java:189)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams.syncFileRangeIfPossible(ReplicaOutputStreams.java:154)
    at 

[GitHub] [hbase] sairampola opened a new pull request #4042: HBASE-26660 delayed FlushRegionEntry should be removed when we need a non-delayed one

2022-01-18 Thread GitBox


sairampola opened a new pull request #4042:
URL: https://github.com/apache/hbase/pull/4042


   Backporting HBASE-25643






[jira] [Updated] (HBASE-26662) User.createUserForTesting should not reset UserProvider.groups every time if hbase.group.service.for.test.only is true

2022-01-18 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26662:
-
Fix Version/s: 2.5.0
   2.4.10

> User.createUserForTesting should not reset UserProvider.groups every time if 
> hbase.group.service.for.test.only is true
> --
>
> Key: HBASE-26662
> URL: https://issues.apache.org/jira/browse/HBASE-26662
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.4.9, 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.10
>
>
> When _hbase.group.service.for.test.only_ is true, the _if check_ below 
> unnecessarily resets the static var _UserProvider.groups_ to a newly created 
> TestingGroups instance every time `User.createUserForTesting` is called.
> {noformat}
> if (!(UserProvider.groups instanceof TestingGroups) ||
> conf.getBoolean(TestingGroups.TEST_CONF, false)) {
>   UserProvider.groups = new TestingGroups(UserProvider.groups);
> }
> {noformat}
> For tests that create multiple {_}test users{_}, this causes the latest 
> created user to reset _groups_, so all previously created users would then 
> have to be available on the {_}User.underlyingImplementation{_}, which will 
> not always be the case.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

