[jira] [Comment Edited] (HBASE-25709) Close region may stuck when region is compacting and skipped most cells read

2022-06-09 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552539#comment-17552539
 ] 

Xiaolin Ha edited comment on HBASE-25709 at 6/10/22 4:41 AM:
-

Hi [~vjasani], I looked at PHOENIX-6702 and your patch. Do you suspect that 
this issue breaks results for large rows? I added a UT, 
TestHRegion#testTTLsUsingSmallHeartBeatCells, to verify that the scan can 
return the whole row in just one next() call when the row's cell count is 
larger than StoreScanner.HBASE_CELLS_SCANNED_PER_HEARTBEAT_CHECK. Please take 
a look. Thanks.
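
For reference, here is a minimal sketch of what such a test might look like. 
This is an illustration only, not the committed UT; it assumes a helper that 
builds an HRegion whose Configuration lowers the heartbeat threshold via 
StoreScanner.HBASE_CELLS_SCANNED_PER_HEARTBEAT_CHECK (createRegionWithConf 
below is a hypothetical stand-in for TestHRegion's region-setup helpers).

{code:java}
// Illustrative sketch only (not the committed UT).
@Test
public void testWholeRowReturnedWithSmallHeartBeatCells() throws IOException {
  Configuration conf = HBaseConfiguration.create();
  // Lower the heartbeat-check interval well below the row's cell count.
  conf.setLong(StoreScanner.HBASE_CELLS_SCANNED_PER_HEARTBEAT_CHECK, 2);
  HRegion region = createRegionWithConf(conf); // hypothetical helper

  byte[] row = Bytes.toBytes("row1");
  byte[] family = Bytes.toBytes("f");
  Put put = new Put(row);
  for (int i = 0; i < 10; i++) { // more cells in the row than the threshold
    put.addColumn(family, Bytes.toBytes("q" + i), Bytes.toBytes("v" + i));
  }
  region.put(put);

  try (RegionScanner scanner = region.getScanner(new Scan())) {
    List<Cell> results = new ArrayList<>();
    scanner.next(results); // a single next() call
    // The whole row should still come back despite the small heartbeat interval.
    assertEquals(10, results.size());
  }
}
{code}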


was (Author: xiaolin ha):
Hi [~vjasani], I looked at PHOENIX-6702 and your patch. Do you suspect that 
this issue breaks results for large rows? I added a UT, 
TestHRegion#testTTLsUsingSmallHeartBeatCells, to verify that the scan can 
return the whole row when the row's cell count is larger than 
StoreScanner.HBASE_CELLS_SCANNED_PER_HEARTBEAT_CHECK. Please take a look. 
Thanks.

> Close region may stuck when region is compacting and skipped most cells read
> 
>
> Key: HBASE-25709
> URL: https://issues.apache.org/jira/browse/HBASE-25709
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.7.1, 3.0.0-alpha-2, 2.4.10
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.11
>
> Attachments: Master-UI-RIT.png, RS-region-state.png
>
>
> We found a stuck stop-region operation in our cluster. The region was 
> compacting, and its store files had many TTL-expired cells. The close-region 
> state marker (HRegion#writestate.writesEnabled) was not checked during 
> compaction, because most cells were skipped. 
> !RS-region-state.png|width=698,height=310!
>  
> !Master-UI-RIT.png|width=693,height=157!
>  
> HBASE-23968 encountered a similar problem, but its solution sits outside the 
> method InternalScanner#next(List result, ScannerContext scannerContext), 
> which, with the current compaction scanner context, will not return if there 
> are many skipped cells. As a result, we need the next method to return in 
> time so that the stop marker can be checked.
>  
>  
>  
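
In other words, the stop marker is only observed between calls to next(), so 
if a single next() invocation spends a long time internally skipping 
TTL-expired cells without returning, the close request is never seen. A 
simplified sketch of the shape of the compaction read loop (representative 
names, not the literal Compactor code) shows where the check sits:

{code:java}
// Simplified illustration of the compaction read loop.
boolean hasMore;
do {
  // If most cells are TTL-expired, this call can keep skipping cells
  // internally for a very long time before it returns...
  hasMore = scanner.next(cells, scannerContext);
  writeCellsToNewFile(cells); // representative name
  cells.clear();
  // ...and only here, between next() calls, is the close/stop marker
  // (HRegion#writestate.writesEnabled) ever consulted.
  if (!writesEnabled()) {
    throw new InterruptedIOException("Region is getting closed, aborting compaction");
  }
} while (hasMore);
{code}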



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-25709) Close region may stuck when region is compacting and skipped most cells read

2022-06-09 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552539#comment-17552539
 ] 

Xiaolin Ha commented on HBASE-25709:


Hi [~vjasani], I looked at PHOENIX-6702 and your patch. Do you suspect that 
this issue breaks results for large rows? I added a UT, 
TestHRegion#testTTLsUsingSmallHeartBeatCells, to verify that the scan can 
return the whole row when the row's cell count is larger than 
StoreScanner.HBASE_CELLS_SCANNED_PER_HEARTBEAT_CHECK. Please take a look. 
Thanks.

> Close region may stuck when region is compacting and skipped most cells read
> 
>
> Key: HBASE-25709
> URL: https://issues.apache.org/jira/browse/HBASE-25709
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.7.1, 3.0.0-alpha-2, 2.4.10
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.11
>
> Attachments: Master-UI-RIT.png, RS-region-state.png
>
>
> We found a stuck stop-region operation in our cluster. The region was 
> compacting, and its store files had many TTL-expired cells. The close-region 
> state marker (HRegion#writestate.writesEnabled) was not checked during 
> compaction, because most cells were skipped. 
> !RS-region-state.png|width=698,height=310!
>  
> !Master-UI-RIT.png|width=693,height=157!
>  
> HBASE-23968 encountered a similar problem, but its solution sits outside the 
> method InternalScanner#next(List result, ScannerContext scannerContext), 
> which, with the current compaction scanner context, will not return if there 
> are many skipped cells. As a result, we need the next method to return in 
> time so that the stop marker can be checked.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552529#comment-17552529
 ] 

Lijin Bin commented on HBASE-27097:
---

[~vjasani] No, I do not have a UT to reproduce it.

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: MultiByteBuff.patch
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> 

[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552517#comment-17552517
 ] 

Viraj Jasani commented on HBASE-27097:
--

Thanks [~binlijin] for the patch! I was wondering if you have any UT that can 
break RPC calls with the SimpleRpcServer impl without this patch. I tried 
writing a few tests with the minicluster and they passed with SimpleRpcServer, 
hence I was wondering how we only see SimpleRpcServer break while testing on a 
distributed cluster.

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: MultiByteBuff.patch
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> 

[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552505#comment-17552505
 ] 

Duo Zhang commented on HBASE-26708:
---

Actually, when auth-int or auth-conf is used, we copy the bytes from Netty's 
ByteBuf into an on-heap byte array, wrap or unwrap it, and then just use 
Unpooled.wrappedBuffer to pass the on-heap byte array to later handlers. In 
this way, we could actually release Netty's native ByteBuf earlier...

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslWrapHandler.java
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUnwrapHandler.java
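
A minimal sketch of that pattern (assuming a ChannelInboundHandlerAdapter-style 
handler and a javax.security.sasl.SaslClient; this illustrates the idea, not 
the exact SaslUnwrapHandler code):

{code:java}
// Sketch: copy to heap, release the (possibly direct) Netty buffer early,
// then hand a heap-backed buffer to the rest of the pipeline.
// saslClient is assumed to be a javax.security.sasl.SaslClient field.
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
  ByteBuf buf = (ByteBuf) msg;
  byte[] wrapped;
  try {
    wrapped = new byte[buf.readableBytes()];
    buf.readBytes(wrapped);
  } finally {
    buf.release(); // the native/direct buffer can be released right here
  }
  byte[] unwrapped = saslClient.unwrap(wrapped, 0, wrapped.length);
  // Heap-backed buffer for downstream handlers; no direct memory retained.
  ctx.fireChannelRead(Unpooled.wrappedBuffer(unwrapped));
}
{code}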

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using the default Netty-based RpcServer and 
> RpcClient implementations results in OutOfDirectMemoryError, supposedly 
> caused by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> 

[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552502#comment-17552502
 ] 

Lijin Bin commented on HBASE-27097:
---

[~apurtell] What about applying the patch “MultiByteBuff.patch”, which fixes a bug?

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: MultiByteBuff.patch
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> 

[jira] [Updated] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-27097:
--
Attachment: MultiByteBuff.patch

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: MultiByteBuff.patch
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> 

[jira] [Comment Edited] (HBASE-25709) Close region may stuck when region is compacting and skipped most cells read

2022-06-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552495#comment-17552495
 ] 

Viraj Jasani edited comment on HBASE-25709 at 6/10/22 1:32 AM:
---

FYI, it seems that this change has likely caused a regression in Phoenix 
indexing test PHOENIX-6702.


was (Author: vjasani):
FYI, we see one regression in Phoenix indexing test (PHOENIX-6702) after this 
commit.

> Close region may stuck when region is compacting and skipped most cells read
> 
>
> Key: HBASE-25709
> URL: https://issues.apache.org/jira/browse/HBASE-25709
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.7.1, 3.0.0-alpha-2, 2.4.10
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.11
>
> Attachments: Master-UI-RIT.png, RS-region-state.png
>
>
> We found a stuck stop-region operation in our cluster. The region was 
> compacting, and its store files had many TTL-expired cells. The close-region 
> state marker (HRegion#writestate.writesEnabled) was not checked during 
> compaction, because most cells were skipped. 
> !RS-region-state.png|width=698,height=310!
>  
> !Master-UI-RIT.png|width=693,height=157!
>  
> HBASE-23968 encountered a similar problem, but its solution sits outside the 
> method InternalScanner#next(List result, ScannerContext scannerContext), 
> which, with the current compaction scanner context, will not return if there 
> are many skipped cells. As a result, we need the next method to return in 
> time so that the stop marker can be checked.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552496#comment-17552496
 ] 

Viraj Jasani commented on HBASE-26708:
--

{quote}[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks – read on below – 
should be considered.
{quote}
Sounds good [~apurtell], this is worth exploring. Will spend some time, thanks!
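
For anyone reproducing this, a sketch of how the experiment could be set up 
(the allocator property name is the one quoted above; the value and the 
surrounding code are illustrative only):

{code:java}
// Sketch of the experiment: force the Netty RPC server onto the unpooled
// allocator so pooled-allocator cache retention can be ruled in or out.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.netty.rpcserver.allocator", "unpooled");
// Start the cluster/region server with this Configuration, re-run the
// ingestion workload, and watch direct memory usage (for example, the direct
// BufferPool MXBean via JMX) to see whether it still climbs past ~50 GB.
{code}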

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using the default Netty-based RpcServer and 
> RpcClient implementations results in OutOfDirectMemoryError, supposedly 
> caused by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> 

[jira] [Commented] (HBASE-25709) Close region may stuck when region is compacting and skipped most cells read

2022-06-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552495#comment-17552495
 ] 

Viraj Jasani commented on HBASE-25709:
--

FYI, we see one regression in Phoenix indexing test (PHOENIX-6702) after this 
commit.

> Close region may stuck when region is compacting and skipped most cells read
> 
>
> Key: HBASE-25709
> URL: https://issues.apache.org/jira/browse/HBASE-25709
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.7.1, 3.0.0-alpha-2, 2.4.10
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.11
>
> Attachments: Master-UI-RIT.png, RS-region-state.png
>
>
> We found a stuck stop-region operation in our cluster. The region was 
> compacting, and its store files had many TTL-expired cells. The close-region 
> state marker (HRegion#writestate.writesEnabled) was not checked during 
> compaction, because most cells were skipped. 
> !RS-region-state.png|width=698,height=310!
>  
> !Master-UI-RIT.png|width=693,height=157!
>  
> HBASE-23968 encountered a similar problem, but its solution sits outside the 
> method InternalScanner#next(List result, ScannerContext scannerContext), 
> which, with the current compaction scanner context, will not return if there 
> are many skipped cells. As a result, we need the next method to return in 
> time so that the stop marker can be checked.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27066) The Region Visualizer display failed

2022-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552488#comment-17552488
 ] 

Hudson commented on HBASE-27066:


Results for branch branch-2
[build #563 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/563/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/563/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/563/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/563/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/563/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Region Visualizer display failed
> 
>
> Key: HBASE-27066
> URL: https://issues.apache.org/jira/browse/HBASE-27066
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: 27066-xso.jpg, image-2022-05-27-14-22-29-015.png, 
> image-2022-05-27-14-22-44-336.png
>
>
> The `Region Visualizer` display failed because the active master hostname is 
> `localhost`.
> Before the change:
> !image-2022-05-27-14-22-44-336.png|width=520,height=162!
> After the change:
> !image-2022-05-27-14-22-29-015.png|width=562,height=229!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27066) The Region Visualizer display failed

2022-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552479#comment-17552479
 ] 

Hudson commented on HBASE-27066:


Results for branch branch-2.5
[build #138 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/138/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/138/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/138/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/138/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/138/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Region Visualizer display failed
> 
>
> Key: HBASE-27066
> URL: https://issues.apache.org/jira/browse/HBASE-27066
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: 27066-xso.jpg, image-2022-05-27-14-22-29-015.png, 
> image-2022-05-27-14-22-44-336.png
>
>
> The `Region Visualizer` display failed because the active master hostname is 
> `localhost`.
> Before the change:
> !image-2022-05-27-14-22-44-336.png|width=520,height=162!
> After the change:
> !image-2022-05-27-14-22-29-015.png|width=562,height=229!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-27097:

Fix Version/s: 2.5.0
   3.0.0-alpha-3

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> 

[jira] [Updated] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-27097:

Priority: Blocker  (was: Critical)

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutateRequest.(ClientProtos.java:14251)
>  

[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552473#comment-17552473
 ] 

Andrew Kyle Purtell commented on HBASE-27097:
-

I was talking with [~vjasani] offline and he feels SimpleRpcServer deserves 
fixes and stability. 

I think the consensus is to keep SimpleRpcServer, but in its current state it 
is not releasable, so I will raise the priority of this issue to Blocker. 

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Critical
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns but they all have in common 
> SimpleServerRpcConnection#process. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> 

[jira] [Commented] (HBASE-27066) The Region Visualizer display failed

2022-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552471#comment-17552471
 ] 

Hudson commented on HBASE-27066:


Results for branch master
[build #607 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/607/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/607/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/607/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/607/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Region Visualizer display failed
> 
>
> Key: HBASE-27066
> URL: https://issues.apache.org/jira/browse/HBASE-27066
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-3
>
> Attachments: 27066-xso.jpg, image-2022-05-27-14-22-29-015.png, 
> image-2022-05-27-14-22-44-336.png
>
>
> The `Region Visualizer` display failed because the active master hostname is 
> `localhost`.
> Before the change:
> !image-2022-05-27-14-22-44-336.png|width=520,height=162!
> After the change:
> !image-2022-05-27-14-22-29-015.png|width=562,height=229!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:15 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and 
previously they would also have applied backpressure through RPC, because 
SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
default 10); now netty may be able to queue up a lot more by comparison, 
because netty is designed for concurrency. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_in_flight_max x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.
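
(As a back-of-the-envelope illustration of the synchronous case above; all numbers are hypothetical, just to show the aggregate bound.)

{code:java}
public class InFlightBoundSketch {
  public static void main(String[] args) {
    // Hypothetical fleet: each client caps itself at 100 in-flight calls,
    // and 50 such clients talk to one region server.
    int clientInFlightMax = 100;
    int numberOfClients = 50;
    // Worst-case request state the server may have to buffer at once.
    int aggregateBound = clientInFlightMax * numberOfClients; // 5000
    System.out.println("aggregate in-flight bound = " + aggregateBound);
  }
}
{code}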

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer

I could be looking at something that isn't the real issue but it is notable.
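
A minimal sketch of the proposed default, assuming the shaded netty classes and the config key named above; this is illustrative only, not the actual NettyRpcServer wiring:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup;

public class EventLoopDefaultSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Today 0 means "no explicit bound"; fall back to the core count instead.
    int configured = conf.getInt("hbase.netty.eventloop.rpcserver.thread.count", 0);
    int threads = configured > 0 ? configured : Runtime.getRuntime().availableProcessors();
    EventLoopGroup workers = new NioEventLoopGroup(threads);
    System.out.println("event loop threads = " + threads);
    workers.shutdownGracefully();
  }
}
{code}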


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10), but 
now netty may be able to queue up a lot more, in comparison, because netty has 
been designed for concurrency. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:14 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10), but 
now netty may be able to queue up a lot more, in comparison, because netty has 
been designed for concurrency. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer

I could be looking at something that isn't the real issue but it is notable.


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10), but 
now netty may be able to queue up a lot more, in comparison*. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:13 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10), but 
now netty may be able to queue up a lot more, in comparison*. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer

I could be looking at something that isn't the real issue but it is notable.


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10) and was 
not async, but now netty is able to queue up a lot of work asynchronously*. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:12 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10) and was 
not async, but now netty is able to queue up a lot of work asynchronously*. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer

I could be looking at something that isn't the real issue but it is notable.


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10) and was 
not async, but now netty is able to queue up a lot of work asynchronously*. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:09 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10) and was 
not async, but now netty is able to queue up a lot of work asynchronously*. 
This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here.

And this may also explain why security makes it worse, because when security is 
active we wrap (encrypt) and unwrap (decrypt) up in the call layer, beyond 
netty, and that takes additional time there, which would back things up at the 
netty layer more than if call handling would complete more quickly without 
encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. This is going to be somewhat application 
dependent too. If the application interacts synchronously with calls and has 
its own bound, then in flight requests or their network level handling will be 
bounded by the aggregate (client_limit x number_of_clients). If the application 
is highly async, write-mostly, or a load test client – which is typically 
write-mostly, async, and configured with large bounds :) – then this can 
explain the findings reported here.

And this may also explain why security makes it worse, because when security is 
active we wrap (encrypt) and unwrap (decrypt) up in the call layer, beyond 
netty, and that takes additional time there, which would back things up at the 
netty layer more than if call handling would complete more quickly without 
encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:09 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10) and was 
not async, but now netty is able to queue up a lot of work asynchronously*. 

This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here. It may also explain why security makes it worse, because when 
security is active we wrap (encrypt) and unwrap (decrypt) up in the call layer, 
beyond netty, and that takes additional time there, which would back things up 
at the netty layer more than if call handling would complete more quickly 
without encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, *because SimpleRpcServer 
had thread limits ("hbase.ipc.server.read.threadpool.size", default 10) and was 
not async, but now netty is able to queue up a lot of work asynchronously*. 
This is going to be somewhat application dependent too. If the application 
interacts synchronously with calls and has its own bound, then in flight 
requests or their network level handling will be bounded by the aggregate 
(client_limit x number_of_clients). If the application is highly async, 
write-mostly, or a load test client – which is typically write-mostly, async, 
and configured with large bounds :) – then this can explain the findings 
reported here.

And this may also explain why security makes it worse, because when security is 
active we wrap (encrypt) and unwrap (decrypt) up in the call layer, beyond 
netty, and that takes additional time there, which would back things up at the 
netty layer more than if call handling would complete more quickly without 
encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:06 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. This is going to be somewhat application 
dependent too. If the application interacts synchronously with calls and has 
its own bound, then in flight requests or their network level handling will be 
bounded by the aggregate (client_limit x number_of_clients). If the application 
is highly async, write-mostly, or a load test client – which is typically 
write-mostly, async, and configured with large bounds :) – then this can 
explain the findings reported here.

And this may also explain why security makes it worse, because when security is 
active we wrap (encrypt) and unwrap (decrypt) up in the call layer, beyond 
netty, and that takes additional time there, which would back things up at the 
netty layer more than if call handling would complete more quickly without 
encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks probably should not be INT_MAX, but that may matter less.

The goal would be to limit concurrency at the netty layer in such a way that:
1. Performance is still good
2. Under load, we don't balloon resource usage at the netty layer


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. This is going to be somewhat application 
dependent too. If the application interacts synchronously with calls and has 
its own bound, then in flight requests or their network level handling will be 
bounded by the aggregate (client_limit x number_of_clients). If the application 
is highly async, write-mostly, or a load test client – which is typically 
write-mostly, async, and configured with large bounds :) – then this can 
explain the findings reported here.

And this may also explain why security makes it worse, because when security is 
active we wrap (encrypt) and unwrap (decrypt) up in the call layer, beyond 
netty, and that takes additional time there, which would back things up at the 
netty layer more than if call handling would complete more quickly without 
encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks should not be INT_MAX, that's not a sane default.

> Netty "leak detected" and 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:04 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. This is going to be somewhat application 
dependent too. If the application interacts synchronously with calls and has 
its own bound, then in flight requests or their network level handling will be 
bounded by the aggregate (client_limit x number_of_clients). If the application 
is highly async, write-mostly, or a load test client – which is typically 
write-mostly, async, and configured with large bounds :) – then this can 
explain the findings reported here.

And this may also explain why security makes it worse, because when security is 
active we wrap (encrypt) and unwrap (decrypt) up in the call layer, beyond 
netty, and that takes additional time there, which would back things up at the 
netty layer more than if call handling would complete more quickly without 
encryption.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks should not be INT_MAX, that's not a sane default.


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. This is going to be somewhat application 
dependent too. If the application interacts synchronously with calls and has 
its own bound, then in flight requests or their network level handling will be 
bounded by the aggregate (client_limit x number_of_clients). If the application 
is highly async, write-mostly, or a load test client – which is typically 
write-mostly, async, and configured with large bounds :) – then this can 
explain the findings reported here.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks should not be INT_MAX, that's not a sane default.

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 11:01 PM:
--

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. This is going to be somewhat application 
dependent too. If the application interacts synchronously with calls and has 
its own bound, then in flight requests or their network level handling will be 
bounded by the aggregate (client_limit x number_of_clients). If the application 
is highly async, write-mostly, or a load test client – which is typically 
write-mostly, async, and configured with large bounds :) – then this can 
explain the findings reported here.

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks should not be INT_MAX, that's not a sane default.


was (Author: apurtell):
On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. 

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks should not be INT_MAX, that's not a sane default.

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> RpcClient implementation results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> 

[GitHub] [hbase] Apache-HBase commented on pull request #4515: HBASE-27105 HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4515:
URL: https://github.com/apache/hbase/pull/4515#issuecomment-1151695968

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 10s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  1s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 219m 12s |  hbase-server in the patch failed.  |
   |  |   | 249m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4515 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b1b8c7b01f84 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/testReport/
 |
   | Max. process+thread count | 2453 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552464#comment-17552464
 ] 

Andrew Kyle Purtell commented on HBASE-26708:
-

On the subject of configuration and NettyRpcServer, we leave netty level 
resource limits unbounded. The number of threads to use for the event loop is 
default 0 (unbounded). The default for io.netty.eventLoop.maxPendingTasks is 
INT_MAX. We don't do this for our own RPC handlers. We have a notion of maximum 
handler pool size, with a default of 30, typically raised in production by the 
user. We constrain the depth of the request queue in multiple ways... limits on 
number of queued calls, limits on total size of calls data that can be queued 
(to avoid memory usage overrun, just like this case), CoDel conditioning of the 
call queues if it is enabled, and so on.

Under load can we pile up an excess of pending request state, such as direct 
buffers containing request bytes, at the netty layer because of downstream 
resource limits? Those limits will act as a bottleneck, as intended, and before 
would have also applied backpressure through RPC too, but now netty is able to 
queue up a lot of work asynchronously. 

Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
(unbounded). I don't know what it can actually get up to in production, because 
we lack the metric, but there are diminishing returns when threads > cores so a 
reasonable default here could be Runtime.getRuntime().availableProcessors() 
instead of unbounded?

maxPendingTasks should not be INT_MAX, that's not a sane default.
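
For anyone experimenting with these limits, both knobs can be pinned explicitly. A hedged sketch with illustrative values only (the property name is netty's documented io.netty.eventLoop.maxPendingTasks; the config key is the one named above):

{code:java}
public class NettyRpcLimitsSketch {
  public static void main(String[] args) {
    // Must be set before any netty event loop class is initialized,
    // otherwise the INT_MAX default is already baked in.
    System.setProperty("io.netty.eventLoop.maxPendingTasks", String.valueOf(64 * 1024));

    org.apache.hadoop.conf.Configuration conf =
        org.apache.hadoop.hbase.HBaseConfiguration.create();
    // Bound the event loop to the core count rather than the unbounded default of 0.
    conf.setInt("hbase.netty.eventloop.rpcserver.thread.count",
        Runtime.getRuntime().availableProcessors());

    System.out.println(conf.getInt("hbase.netty.eventloop.rpcserver.thread.count", 0));
  }
}
{code}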

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> RpcClient implementation results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> 

[GitHub] [hbase] Apache-HBase commented on pull request #4515: HBASE-27105 HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4515:
URL: https://github.com/apache/hbase/pull/4515#issuecomment-1151685613

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 208m 55s |  hbase-server in the patch failed.  |
   |  |   | 227m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4515 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 950f7ad72342 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/testReport/
 |
   | Max. process+thread count | 2422 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4493: HBASE-27091 Speed up the loading of table descriptor from filesystem

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4493:
URL: https://github.com/apache/hbase/pull/4493#issuecomment-1151653020

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 56s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 24s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m 38s |  hbase-server in the patch passed.  
|
   |  |   | 216m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4493 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 12ef33a287cc 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/4/testReport/
 |
   | Max. process+thread count | 2413 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/4/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4487: Backport "HBASE-26366 Provide meaningful parent spans to ZK interactions" to branch-2

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4487:
URL: https://github.com/apache/hbase/pull/4487#issuecomment-1151561899

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 23s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 22s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  branch-2 passed  |
   | -0 :warning: |  patch  |   6m 13s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 43s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 22s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   2m 44s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 42s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 229m 48s |  hbase-server in the patch passed.  
|
   |  |   | 261m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4487 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 72d6ffce8f64 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b3c9ef34b7 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/testReport/
 |
   | Max. process+thread count | 2434 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4515: HBASE-27105 HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4515:
URL: https://github.com/apache/hbase/pull/4515#issuecomment-1151551626

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 34s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 51s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 59s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 54s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 46s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 52s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotless  |   0m 54s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 19s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  54m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4515 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux bb5a20706b13 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 68 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4515/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4459: HBASE-26366 Provide meaningful parent spans to ZK interactions

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4459:
URL: https://github.com/apache/hbase/pull/4459#issuecomment-1151550300

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 17s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  master passed  |
   | -0 :warning: |  patch  |   7m  3s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 39s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 16s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 39s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 235m 53s |  hbase-server in the patch passed.  
|
   |  |   | 265m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4459 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7b6103d087f2 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/testReport/
 |
   | Max. process+thread count | 2407 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4487: Backport "HBASE-26366 Provide meaningful parent spans to ZK interactions" to branch-2

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4487:
URL: https://github.com/apache/hbase/pull/4487#issuecomment-1151539767

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 59s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 44s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  branch-2 passed  |
   | -0 :warning: |  patch  |   6m 38s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   2m 54s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 37s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 206m 44s |  hbase-server in the patch passed.  
|
   |  |   | 237m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4487 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9b17373570b5 5.4.0-1068-aws #72~18.04.1-Ubuntu SMP Thu Mar 3 
08:49:49 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b3c9ef34b7 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/testReport/
 |
   | Max. process+thread count | 2314 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4459: HBASE-26366 Provide meaningful parent spans to ZK interactions

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4459:
URL: https://github.com/apache/hbase/pull/4459#issuecomment-1151528095

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 15s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 24s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  master passed  |
   | -0 :warning: |  patch  |   6m  4s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  3s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 14s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 38s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 201m 54s |  hbase-server in the patch passed.  
|
   |  |   | 230m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4459 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5239da22b171 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/testReport/
 |
   | Max. process+thread count | 2487 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4493: HBASE-27091 Speed up the loading of table descriptor from filesystem

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4493:
URL: https://github.com/apache/hbase/pull/4493#issuecomment-1151482934

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 21s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 42s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 41s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 40s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotless  |   0m 43s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  42m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4493 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux af0d37d85854 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 68 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/4/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27105) HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout

2022-06-09 Thread Pankaj Kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-27105:
-
Fix Version/s: 3.0.0-alpha-3
   Status: Patch Available  (was: In Progress)

> HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout
> --
>
> Key: HBASE-27105
> URL: https://issues.apache.org/jira/browse/HBASE-27105
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 3.0.0-alpha-3
>
>
> HBASE-23293 introduced replication.source.shipedits.timeout, which is 
> adaptive: ReplicationSourceShipper#shipEdits() sets the adaptive timeout based 
> on the number of retries. But on a CallTimeoutException in 
> HBaseInterClusterReplicationEndpoint#replicate(), it keeps retrying the 
> replication after sleeping, with the same timeout value.
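
To make the intent concrete, here is a minimal sketch (hypothetical names, not the actual HBaseInterClusterReplicationEndpoint code) of a replicate() retry loop that recomputes the shipedits timeout from the current attempt number instead of reusing the value captured before the first call; the base and cap values are illustrative assumptions only:

{code:java}
// Hypothetical, self-contained illustration -- not the actual endpoint code.
public class AdaptiveReplicationTimeoutSketch {

  /** Grows the base shipedits timeout with the retry count, capped at a maximum. */
  static long adaptiveTimeout(long baseTimeoutMs, int retries, long maxTimeoutMs) {
    return Math.min(baseTimeoutMs * (retries + 1), maxTimeoutMs);
  }

  /** Retry loop that recomputes the timeout on every attempt. */
  static boolean replicateWithAdaptiveTimeout(ReplicateCall call, int maxRetries)
      throws InterruptedException {
    final long base = 60_000L;   // assumed base timeout, for illustration only
    final long max = 300_000L;   // assumed cap, for illustration only
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      long timeoutMs = adaptiveTimeout(base, attempt, max);  // honored on every retry
      try {
        return call.replicate(timeoutMs);
      } catch (java.util.concurrent.TimeoutException e) {
        Thread.sleep(Math.min(1_000L * (attempt + 1), 10_000L)); // back off, then retry
      }
    }
    return false;
  }

  /** Hypothetical stand-in for the remote replication RPC. */
  interface ReplicateCall {
    boolean replicate(long timeoutMs) throws java.util.concurrent.TimeoutException;
  }
}
{code}

The only point of the sketch is the ordering: the timeout passed to the RPC grows with the retry count, matching the adaptive value that ReplicationSourceShipper#shipEdits() already computes.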



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work started] (HBASE-27105) HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout

2022-06-09 Thread Pankaj Kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27105 started by Pankaj Kumar.

> HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout
> --
>
> Key: HBASE-27105
> URL: https://issues.apache.org/jira/browse/HBASE-27105
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
>
> HBASE-23293 introduced replication.source.shipedits.timeout, which is 
> adaptive: ReplicationSourceShipper#shipEdits() sets the adaptive timeout based 
> on the number of retries. But on a CallTimeoutException in 
> HBaseInterClusterReplicationEndpoint#replicate(), it keeps retrying the 
> replication after sleeping, with the same timeout value.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151467959

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  9s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  master passed  |
   | -1 :x: |  refguide  |   0m 49s |  branch has 7 errors when building the 
reference guide.  |
   | +1 :green_heart: |  spotless  |   0m 38s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  0s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  refguide  |   0m 17s |  patch has 7 errors when building the 
reference guide.  |
   | +1 :green_heart: |  spotless  |   0m 44s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  12m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4513 |
   | Optional Tests | dupname asflicense checkstyle spotless xml refguide |
   | uname | Linux 68c90b92a86e 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | refguide | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/artifact/yetus-general-check/output/branch-refguide.log
 |
   | refguide | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/artifact/yetus-general-check/output/patch-refguide.log
 |
   | Max. process+thread count | 69 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151453320

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4513 |
   | Optional Tests |  |
   | uname | Linux 98852d1ac64f 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Max. process+thread count | 29 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151453005

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4513 |
   | Optional Tests |  |
   | uname | Linux c3e3d7b3801d 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c24ba54147 |
   | Max. process+thread count | 36 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4418: HBASE-26969:Eliminate MOB renames when SFT is enabled

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4418:
URL: https://github.com/apache/hbase/pull/4418#issuecomment-1151432371

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 54s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 42s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   3m  0s |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -1 :x: |  shadedjars  |   4m 19s |  patch has 66 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 308m 16s |  hbase-server in the patch failed.  |
   |  |   | 332m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4418 |
   | JIRA Issue | HBASE-26969 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7c38c8d1f1e3 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt
 |
   | shadedjars | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/testReport/
 |
   | Max. process+thread count | 2592 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4493: HBASE-27091 Speed up the loading of table descriptor from filesystem

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4493:
URL: https://github.com/apache/hbase/pull/4493#issuecomment-1151426753

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 44s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 54s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 15s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 300m  0s |  hbase-server in the patch failed.  |
   |  |   | 326m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4493 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 6ea5b98c7dea 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/testReport/
 |
   | Max. process+thread count | 2530 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-26666) Address bearer token being sent over wire before RPC encryption is enabled

2022-06-09 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552344#comment-17552344
 ] 

Bryan Beaudreault commented on HBASE-26666:
---

[~andor] any chance we can clean up this Jira summary/description or make sure 
we're on the same page? My understanding is you were originally working on the SASL 
bearer token, but pivoted to implement native TLS as a prerequisite. I 
mentioned this issue in HBASE-26708 and was met with some (reasonable) 
confusion as to the state of things from [~apurtell]:
{quote}[~bbeaudreault]  I was/am confused by that because HBASE-26666 is a 
child of HBASE-26553 which describes itself as "OAuth Bearer authentication 
mech plugin for SASL". Can you or someone clean this up so we can clearly see 
what is going on? Is it really a full TLS RPC stack? Because it looks to me 
like some TLS fiddling to get a token that then sets up the usual wrapped SASL 
connection, possibly why I am confused. That would not be native TLS support in 
the sense I mean and the sense that is really required, possibly why it has not 
gotten enough attention. 
{quote}
So the question at hand is whether the implementation you have in 
[https://github.com/apache/hbase/pull/4125] is actually native TLS support that 
can stand on its own, or whether it's tied to the token stuff you're working on.

It would be beneficial to clarify the Jira, because it might drive more 
interest in getting people to review the code so we can push it across the finish line.

> Address bearer token being sent over wire before RPC encryption is enabled
> --
>
> Key: HBASE-26666
> URL: https://issues.apache.org/jira/browse/HBASE-26666
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Andor Molnar
>Priority: Major
> Fix For: HBASE-26553
>
>
> Today, HBase must complete the SASL handshake (saslClient.complete()) prior 
> to turning on any RPC encryption (hbase.rpc.protection=privacy, 
> sasl.QOP=auth-conf).
> This is a problem because we have to transmit the bearer token to the server 
> before we can complete the sasl handshake. This would mean that we would 
> insecurely transmit the bearer token (which is equivalent to any other 
> password) which is a bad smell.
> Ideally, if we can solve this problem for the oauth bearer mechanism, we 
> could also apply it to our delegation token interface for digest-md5 (which, 
> I believe, suffers the same problem).
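
To make the ordering constraint concrete, a minimal sketch against the standard javax.security.sasl API (illustrative names only, not the HBase RPC code): wrap()/unwrap(), and therefore the auth-conf protection behind hbase.rpc.protection=privacy, is only usable once the handshake has completed, so anything the mechanism emits from evaluateChallenge() -- including a bearer token carried in an initial response -- crosses the wire without that protection.

{code:java}
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

public class SaslOrderingSketch {

  /** Hypothetical byte transport standing in for the RPC connection. */
  interface Transport {
    void send(byte[] data);
    byte[] receive();
  }

  /**
   * Generic SASL client handshake. Everything produced by evaluateChallenge()
   * is sent as-is; QOP-based confidentiality only applies to wrap()/unwrap()
   * calls made after isComplete() returns true.
   */
  static void handshakeThenWrap(SaslClient saslClient, Transport transport)
      throws SaslException {
    byte[] challenge = saslClient.hasInitialResponse()
        ? new byte[0]               // mechanisms with an initial response start here
        : transport.receive();      // otherwise wait for the first server challenge
    while (!saslClient.isComplete()) {
      byte[] response = saslClient.evaluateChallenge(challenge);
      if (response != null) {
        transport.send(response);   // unprotected: token material travels here
      }
      if (saslClient.isComplete()) {
        break;
      }
      challenge = transport.receive();
    }
    // Only from this point on can payloads be confidentiality-protected.
    byte[] payload = new byte[] { 1, 2, 3 };
    transport.send(saslClient.wrap(payload, 0, payload.length));
  }
}
{code}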



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552343#comment-17552343
 ] 

Bryan Beaudreault commented on HBASE-26708:
---

Will do. I'm going to copy your comment and cc you in HBASE-26666 so we keep 
that convo separate from this. Gonna try to get Andor (author of PR) to chime 
in there.

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using the default Netty-based RpcServer and 
> RpcClient implementations results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> 
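
For anyone trying to reproduce reports like the ones quoted above, a minimal sketch (standard Netty API, shown with the shaded class name that appears in the stack traces) of forcing the leak detector to its most verbose level, so that every buffer is tracked rather than the default sampled subset; this is for debugging only and carries noticeable overhead:

{code:java}
import org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector;

public class LeakDetectionSetup {
  public static void enableParanoidLeakDetection() {
    // PARANOID tracks every allocation and records full access traces,
    // which makes the "LEAK detected" reports far more actionable.
    ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
  }
}
{code}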

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552340#comment-17552340
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 5:30 PM:
-

[~bbeaudreault]  I was/am confused by that because HBASE-26666 is a child of 
HBASE-26553 which describes itself as "OAuth Bearer authentication mech plugin 
for SASL". Can you or someone clean this up so we can clearly see what is going 
on? Is it really a full TLS RPC stack? Because it looks to me like some TLS 
fiddling to get a token that then sets up the usual wrapped SASL connection, 
possibly why I am confused. That would not be native TLS support in the sense I 
mean and the sense that is really required, possibly why it has not gotten 
enough attention. 

Oh, the PR itself describes the work as "HBASE-26666 Add native TLS encryption 
support to RPC server/client ". That is much different. 

Let's clean up the situation with HBASE-26666 and HBASE-26553 and take the 
conversation there so as not to distract from this JIRA.


was (Author: apurtell):
[~bbeaudreault]  I was/am confused by that because HBASE-26666 is a child of 
HBASE-26553 which describes itself as "OAuth Bearer authentication mech plugin 
for SASL". Can you or someone clean this up so we can clearly see what is going 
on? Is it really a full TLS RPC stack? Because it looks to me like some TLS 
fiddling to get a token that then sets up the usual wrapped SASL connection, 
possibly why I am confused. That would not be native TLS support in the sense I 
mean and the sense that is really required, possibly why it has not gotten 
enough attention. 

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using the default Netty-based RpcServer and 
> RpcClient implementations results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552340#comment-17552340
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 5:28 PM:
-

[~bbeaudreault]  I was/am confused by that because HBASE-26666 is a child of 
HBASE-26553 which describes itself as "OAuth Bearer authentication mech plugin 
for SASL". Can you or someone clean this up so we can clearly see what is going 
on? Is it really a full TLS RPC stack? Because it looks to me like some TLS 
fiddling to get a token that then sets up the usual wrapped SASL connection, 
possibly why I am confused. That would not be native TLS support in the sense I 
mean and the sense that is really required, possibly why it has not gotten 
enough attention. 


was (Author: apurtell):
[~bbeaudreault]  I was/am confused by that because HBASE-26666 is a child of 
HBASE-26553 which describes itself as "OAuth Bearer authentication mech plugin 
for SASL". Can you or someone clean this up so we can clearly see what is going 
on? Is it really a full TLS RPC stack? Because it looks to me like some TLS 
fiddling to get a token that then sets up the usual wrapped SASL connection. It 
is not native TLS support in the sense I mean and the sense that is really 
required, which is TLS and only TLS end to end, possibly why it has not gotten 
enough attention. 

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using the default Netty-based RpcServer and 
> RpcClient implementations results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> 

[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552340#comment-17552340
 ] 

Andrew Kyle Purtell commented on HBASE-26708:
-

[~bbeaudreault]  I was/am confused by that because HBASE-2 is a child of 
HBASE-26553 which describes itself as "OAuth Bearer authentication mech plugin 
for SASL". Can you or someone clean this up so we can clearly see what is going 
on? Is it really a full TLS RPC stack? Because it looks to me like some TLS 
fiddling to get a token that then sets up the usual wrapped SASL connection. It 
is not native TLS support in the sense I mean and the sense that is really 
required, which is TLS and only TLS end to end, possibly why it has not gotten 
enough attention. 

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> RpcClient implementation results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> 

[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552335#comment-17552335
 ] 

Bryan Beaudreault commented on HBASE-26708:
---

I don't want to derail the netty leak topic too much, but there is a PR up for 
the first round of native TLS support: 
[https://github.com/apache/hbase/pull/4125]. It seems like it's mostly finished 
and has gone through some review already, but ideally needs final approval from 
someone with Netty expertise. 

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> RpcClient implementation results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552322#comment-17552322
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 5:15 PM:
-

[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with {_}auth{_}, which was the previous setting.

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks – read on below – 
should be considered. 
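
For a concrete picture of what that experiment toggles, here is a minimal, hypothetical sketch (the hbase.netty.rpcserver.allocator name refers to the patch under discussion, not an established configuration key): choosing "unpooled" essentially swaps which shaded Netty allocator backs the RPC channels.
{code:java}
// Minimal sketch only; illustrates the pooled-vs-unpooled choice referenced above.
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBufAllocator;
import org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator;
import org.apache.hbase.thirdparty.io.netty.buffer.UnpooledByteBufAllocator;

final class AllocatorChoiceSketch {
  static ByteBufAllocator fromConfig(String value) {
    // "unpooled" bypasses arena caching entirely: every buffer is allocated and
    // freed individually, so nothing is retained between requests.
    if ("unpooled".equalsIgnoreCase(value)) {
      return UnpooledByteBufAllocator.DEFAULT;
    }
    // Otherwise fall back to the pooled allocator, which caches buffers in arenas.
    return PooledByteBufAllocator.DEFAULT;
  }
}
{code}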

If I am correct then we should investigate how to get direct IO buffers freed 
faster and/or apply limits or pacing to their allocation, possibly using a custom 
allocator. As [~zhangduo] mentioned, we set up a certain number of buffers, and 
more when SASL is used. Should this be tunable, so that people with large-RAM 
servers/instances can tune it up and people with more memory-constrained setups 
can tune it down?

Looking at our PooledByteBufAllocator in hbase-thirdparty it is clear an issue 
people may be facing is confusion about system property names. I can see in the 
sources, via my IDE, that the shader rewrote the string constants containing 
the property keys too. Various resources on the Internet will offer 
documentation and suggestions, but because we relocated Netty into thirdparty, 
the names have changed, and so naively following the advice on StackOverflow 
and other places will have no effect. Key here is recommendations when you want 
to prefer heap instead of direct memory.

Let me list them in terms of relevancy for addressing this issue.

Highly relevant:
 - io.netty.allocator.cacheTrimInterval -> 
org.apache.hbase.thirdparty.io.netty.allocator.cacheTrimInterval
 -- This is the threshold number of allocations after which cached entries will 
be freed if they are not frequently used. Lowering it from the default of 8192 may 
reduce the overall amount of direct memory retained in steady state, because 
the evaluation will be performed more often, as often as you specify.
 - io.netty.noPreferDirect -> 
org.apache.hbase.thirdparty.io.netty.noPreferDirect
 -- This will prefer heap arena allocations regardless of PlatformDependent 
ideas on preference if set to 'true'.
 - io.netty.allocator.numDirectArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numDirectArenas
 -- Various advice on the Internet suggests setting numDirectArenas=0 and 
noPreferDirect=true as the way to prefer heap based buffers.

Less relevant:
 - io.netty.allocator.maxCachedBufferCapacity -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxCachedBufferCapacity
 -- This is the size-based retention policy for buffers; individual buffers 
larger than this will not be cached.
 - io.netty.allocator.numHeapArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numHeapArenas
 - io.netty.allocator.pageSize -> 
org.apache.hbase.thirdparty.io.netty.allocator.pageSize
 - io.netty.allocator.maxOrder -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxOrder

On [https://github.com/apache/hbase/pull/4505] I have a draft PR that allows 
the user to tweak the Netty bytebuf allocation policy. This may be a good idea 
to do in general. We may want to provide support for some of the above Netty 
tunables in HBase site configuration as well, as a way to eliminate confusion 
about them... Our documentation on it would describe the HBase site config 
property names.
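
For anyone experimenting with these today, here is a minimal illustrative sketch of the relocated names in use (the values are arbitrary examples, not recommendations; the only point is that the org.apache.hbase.thirdparty prefix is required, and in practice these would be passed as -D JVM flags so they take effect before the shaded Netty classes are first loaded):
{code:java}
// Illustrative sketch only: sets the relocated (shaded) property names listed above.
// Must run before any shaded Netty class is initialized; normally these would be
// -D JVM arguments rather than calls from application code.
public final class ShadedNettyTuningSketch {
  public static void main(String[] args) {
    // Prefer heap arenas regardless of PlatformDependent's default preference.
    System.setProperty("org.apache.hbase.thirdparty.io.netty.noPreferDirect", "true");
    // Disable direct arenas entirely, per the common "prefer heap" advice.
    System.setProperty("org.apache.hbase.thirdparty.io.netty.allocator.numDirectArenas", "0");
    // Trim thread-local caches more often than the default of 8192 allocations.
    System.setProperty("org.apache.hbase.thirdparty.io.netty.allocator.cacheTrimInterval", "1024");
  }
}
{code}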

On a side note, we might spike on an alternative to SASL RPC that is a TLS 
based implementation instead. I know this has been discussed and even partially 
attempted, repeatedly, over our history but nonetheless the operational and 
performance issues with SASL remain. We were here once before on HBASE-17721. 
[~bbeaudreault]  posted HBASE-26548 more recently.


was (Author: apurtell):
[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with {_}auth{_}, which was the previous setting.

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks – read on below – 
should be considered. If I am correct then we should investigate how to get 
direct IO buffers freed faster and/or limits or pacing applied to their 
allocation; using a custom allocator, possibly.

Looking at our 

[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552322#comment-17552322
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 5:07 PM:
-

[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with {_}auth{_}, which was the previous setting.

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks – read on below – 
should be considered. If I am correct then we should investigate how to get 
direct IO buffers freed faster and/or limits or pacing applied to their 
allocation; using a custom allocator, possibly.

Looking at our PooledByteBufAllocator in hbase-thirdparty it is clear an issue 
people may be facing is confusion about system property names. I can see in the 
sources, via my IDE, that the shader rewrote the string constants containing 
the property keys too. Various resources on the Internet will offer 
documentation and suggestions, but because we relocated Netty into thirdparty, 
the names have changed, and so naively following the advice on StackOverflow 
and other places will have no effect. Key here is recommendations when you want 
to prefer heap instead of direct memory.

Let me list them in terms of relevancy for addressing this issue.

Highly relevant:
 - io.netty.allocator.cacheTrimInterval -> 
org.apache.hbase.thirdparty.io.netty.allocator.cacheTrimInterval
 -- This is the threshold number of allocations after which cached entries will 
be freed if they are not frequently used. Lowering it from the default of 8192 may 
reduce the overall amount of direct memory retained in steady state, because 
the evaluation will be performed more often, as often as you specify.
 - io.netty.noPreferDirect -> 
org.apache.hbase.thirdparty.io.netty.noPreferDirect
 -- This will prefer heap arena allocations regardless of PlatformDependent 
ideas on preference if set to 'true'.
 - io.netty.allocator.numDirectArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numDirectArenas
 -- Various advice on the Internet suggests setting numDirectArenas=0 and 
noPreferDirect=true as the way to prefer heap based buffers.

Less relevant:
 - io.netty.allocator.maxCachedBufferCapacity -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxCachedBufferCapacity
 -- This is the size-based retention policy for buffers; individual buffers 
larger than this will not be cached.
 - io.netty.allocator.numHeapArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numHeapArenas
 - io.netty.allocator.pageSize -> 
org.apache.hbase.thirdparty.io.netty.allocator.pageSize
 - io.netty.allocator.maxOrder -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxOrder

On [https://github.com/apache/hbase/pull/4505] I have a draft PR that allows 
the user to tweak the Netty bytebuf allocation policy. This may be a good idea 
to do in general. We may want to provide support for some of the above Netty 
tunables in HBase site configuration as well, as a way to eliminate confusion 
about them... Our documentation on it would describe the HBase site config 
property names.

On a side note, we might spike on an alternative to SASL RPC that is a TLS 
based implementation instead. I know this has been discussed and even partially 
attempted, repeatedly, over our history but nonetheless the operational and 
performance issues with SASL remain. We were here once before on HBASE-17721. 
[~bbeaudreault]  posted HBASE-26548 more recently.


was (Author: apurtell):
[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with {_}auth{_}, which was the previous setting.

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks – read on below – 
should be considered. If I am correct then we should investigate how to get 
direct IO buffers freed faster and/or limits or pacing applied to their 
allocation; using a custom allocator, possibly.

Looking at our PooledByteBufAllocator in hbase-thirdparty it is clear an issue 
people may be facing is confusion about system property names. I can see in the 
sources, via my IDE, that the shader rewrote the string constants containing 
the property keys too. Various 

[jira] [Updated] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-27103:

Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] apurtell merged pull request #4514: HBASE-27103 All MR UTs are broken because of ClassNotFound

2022-06-09 Thread GitBox


apurtell merged PR #4514:
URL: https://github.com/apache/hbase/pull/4514


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552324#comment-17552324
 ] 

Sean Busbey commented on HBASE-26983:
-

right, but the plugin should be grabbing its own dependency version:

{code}
[INFO] Downloaded from central: 
https://repo.maven.apache.org/maven2/org/jruby/jruby-core/9.2.13.0/jruby-core-9.2.13.0.jar
 (10 MB at 26 MB/s) [INFO] Downloaded from central: 
https://repo.maven.apache.org/maven2/org/jruby/jruby-stdlib/9.2.13.0/jruby-stdlib-9.2.13.0.jar
 (12 MB at 27 MB/s)
{code}

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552324#comment-17552324
 ] 

Sean Busbey edited comment on HBASE-26983 at 6/9/22 4:57 PM:
-

right, but the plugin should be grabbing its own dependency version:

{code}
[INFO] Downloaded from central: 
https://repo.maven.apache.org/maven2/org/jruby/jruby-core/9.2.13.0/jruby-core-9.2.13.0.jar
 (10 MB at 26 MB/s)
[INFO] Downloaded from central: 
https://repo.maven.apache.org/maven2/org/jruby/jruby-stdlib/9.2.13.0/jruby-stdlib-9.2.13.0.jar
 (12 MB at 27 MB/s)
{code}


was (Author: busbey):
right, but the plugin should be grabbing its own dependency version:

{code}
[INFO] Downloaded from central: 
https://repo.maven.apache.org/maven2/org/jruby/jruby-core/9.2.13.0/jruby-core-9.2.13.0.jar
 (10 MB at 26 MB/s) [INFO] Downloaded from central: 
https://repo.maven.apache.org/maven2/org/jruby/jruby-stdlib/9.2.13.0/jruby-stdlib-9.2.13.0.jar
 (12 MB at 27 MB/s)
{code}

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552322#comment-17552322
 ] 

Andrew Kyle Purtell edited comment on HBASE-26708 at 6/9/22 4:54 PM:
-

[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with {_}auth{_}, which was the previous setting.

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks – read on below – 
should be considered. If I am correct then we should investigate how to get 
direct IO buffers freed faster and/or limits or pacing applied to their 
allocation; using a custom allocator, possibly.

Looking at our PooledByteBufAllocator in hbase-thirdparty it is clear an issue 
people may be facing is confusion about system property names. I can see in the 
sources, via my IDE, that the shader rewrote the string constants containing 
the property keys too. Various resources on the Internet will offer 
documentation and suggestions, but because we relocated Netty into thirdparty, 
the names have changed, and so naively following the advice on StackOverflow 
and other places will have no effect. Key here is recommendations when you want 
to prefer heap instead of direct memory.

Let me list them in terms of relevancy for addressing this issue.

Highly relevant:
 - io.netty.allocator.cacheTrimInterval -> 
org.apache.hbase.thirdparty.io.netty.allocator.cacheTrimInterval
 -- This is the threshold number of allocations after which cached entries will 
be freed if they are not frequently used. Lowering it from the default of 8192 may 
reduce the overall amount of direct memory retained in steady state, because 
the evaluation will be performed more often, as often as you specify.
 - io.netty.noPreferDirect -> 
org.apache.hbase.thirdparty.io.netty.noPreferDirect
 -- This will prefer heap arena allocations regardless of PlatformDependent 
ideas on preference if set to 'true'.
 - io.netty.allocator.numDirectArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numDirectArenas
 -- Various advice on the Internet suggests setting numDirectArenas=0 and 
noPreferDirect=true as the way to prefer heap based buffers.

Less relevant:
 - io.netty.allocator.maxCachedBufferCapacity -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxCachedBufferCapacity
 -- This is the size-based retention policy for buffers; individual buffers 
larger than this will not be cached.
 - io.netty.allocator.numHeapArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numHeapArenas
 - io.netty.allocator.pageSize -> 
org.apache.hbase.thirdparty.io.netty.allocator.pageSize
 - io.netty.allocator.maxOrder -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxOrder

On [https://github.com/apache/hbase/pull/4505] I have a draft PR that allows 
the user to tweak the Netty bytebuf allocation policy. This may be a good idea 
to do in general. We may want to provide support for some of the above Netty 
tunables in HBase site configuration as well, as a way to eliminate confusion 
about them... Our documentation on it would describe the HBase site config 
property names.

On a side note, we might spike on an alternative to SASL RPC that is a TLS 
based implementation instead. I know this has been discussed and even partially 
attempted, repeatedly, over our history but nonetheless the operational and 
performance issues with SASL remain.


was (Author: apurtell):
[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with _auth_, which was the previous setting. 

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks -- read on below -- 
should be considered. If I am correct then we should investigate how to get 
direct IO buffers freed faster and/or limits or pacing applied to their 
allocation; using a custom allocator, possibly. 

Looking at our PooledByteBufAllocator in hbase-thirdparty it is clear an issue 
people may be facing is confusion about system property names. Various 
resources on the Internet will offer documentation and suggestions, but because 
we relocated Netty into thirdparty, the names have changed, and so naively 
following the advice on StackOverflow and other places 

[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552322#comment-17552322
 ] 

Andrew Kyle Purtell commented on HBASE-26708:
-

[~zhangduo] Our current requirements would be _auth-conf_ but Viraj may have 
been testing with _auth_, which was the previous setting. 

[~vjasani] I am curious if you apply my patch and set 
hbase.netty.rpcserver.allocator=unpooled if the direct memory allocation still 
gets up to > 50 GB. My guess is yes, that it is the concurrent demand for 
buffers at load driving the usage, and not excessive cache retention in the 
pooled allocator. Let's see if experimental results confirm the hypothesis. If 
it helps then I am wrong and pooling configuration tweaks -- read on below -- 
should be considered. If I am correct then we should investigate how to get 
direct IO buffers freed faster and/or limits or pacing applied to their 
allocation; using a custom allocator, possibly. 

Looking at our PooledByteBufAllocator in hbase-thirdparty it is clear an issue 
people may be facing is confusion about system property names. Various 
resources on the Internet will offer documentation and suggestions, but because 
we relocated Netty into thirdparty, the names have changed, and so naively 
following the advice on StackOverflow and other places will have no effect. Key 
here is recommendations when you want to prefer heap instead of direct memory.

Let me list them in terms of relevancy for addressing this issue.

Highly relevant:
- io.netty.allocator.cacheTrimInterval -> 
org.apache.hbase.thirdparty.io.netty.allocator.cacheTrimInterval
-- This is the threshold number of allocations after which cached entries will be 
freed if they are not frequently used. Lowering it from the default of 8192 may 
reduce the overall amount of direct memory retained in steady state, because 
the evaluation will be performed more often, as often as you specify.
- io.netty.noPreferDirect -> org.apache.hbase.thirdparty.io.netty.noPreferDirect
-- This will prefer heap arena allocations regardless of PlatformDependent 
ideas on preference if set to 'true'. 
- io.netty.allocator.numDirectArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numDirectArenas
-- Various advice on the Internet suggests setting numDirectArenas=0 and 
noPreferDirect=true as the way to prefer heap based buffers.

Less relevant:
- io.netty.allocator.maxCachedBufferCapacity -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxCachedBufferCapacity
-- This is the size-based retention policy for buffers; individual buffers 
larger than this will not be cached.
- io.netty.allocator.numHeapArenas -> 
org.apache.hbase.thirdparty.io.netty.allocator.numHeapArenas
- io.netty.allocator.pageSize -> 
org.apache.hbase.thirdparty.io.netty.allocator.pageSize
- io.netty.allocator.maxOrder -> 
org.apache.hbase.thirdparty.io.netty.allocator.maxOrder 

On https://github.com/apache/hbase/pull/4505 I have a draft PR that allows the 
user to tweak the Netty bytebuf allocation policy. This may be a good idea to 
do in general. We may want to provide support for some of the above Netty 
tunables in HBase site configuration as well, as a way to eliminate confusion 
about them... Our documentation on it would describe the HBase site config 
property names. 

On a side note, we might spike on an alternative to SASL RPC that is a TLS 
based implementation instead. I know this has been discussed and even partially 
attempted, repeatedly, over our history but nonetheless the operational and 
performance issues with SASL remain.

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> RpcClient implementation results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> 

[GitHub] [hbase] Apache-HBase commented on pull request #4418: HBASE-26969:Eliminate MOB renames when SFT is enabled

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4418:
URL: https://github.com/apache/hbase/pull/4418#issuecomment-1151369126

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 42s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   2m  6s |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | -1 :x: |  shadedjars  |   4m 50s |  patch has 44 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 239m 21s |  hbase-server in the patch failed.  |
   |  |   | 266m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4418 |
   | JIRA Issue | HBASE-26969 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1b542f831140 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | mvninstall | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk8-hadoop3-check/output/patch-mvninstall-root.txt
 |
   | shadedjars | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/testReport/
 |
   | Max. process+thread count | 2374 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4418/4/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4493: HBASE-27091 Speed up the loading of table descriptor from filesystem

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4493:
URL: https://github.com/apache/hbase/pull/4493#issuecomment-1151356252

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  8s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 230m  4s |  hbase-server in the patch failed.  |
   |  |   | 251m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4493 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8ea14b18989c 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/testReport/
 |
   | Max. process+thread count | 2887 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4493/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4511: Backport "HBASE-27095 HbckChore should produce a report" to branch-2.4

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4511:
URL: https://github.com/apache/hbase/pull/4511#issuecomment-1151355364

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  branch-2.4 passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  branch-2.4 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  branch-2.4 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 44s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 242m 26s |  hbase-server in the patch failed.  |
   |  |   | 265m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4511 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 846d64173e8f 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.4 / 3d82d2d9e7 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/testReport/
 |
   | Max. process+thread count | 2515 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552301#comment-17552301
 ] 

Nick Dimiduk commented on HBASE-26983:
--

https://github.com/asciidoctor/asciidoctor-maven-plugin/blob/main/pom.xml#L69

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552300#comment-17552300
 ] 

Nick Dimiduk commented on HBASE-26983:
--

It does, it really does :(

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Reopened] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-26983:
-

reopened to evaluate impact on site build

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552298#comment-17552298
 ] 

Sean Busbey commented on HBASE-26983:
-

huh. good catch. certainly looks guilty. let me go dig in.

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4487: Backport "HBASE-26366 Provide meaningful parent spans to ZK interactions" to branch-2

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4487:
URL: https://github.com/apache/hbase/pull/4487#issuecomment-1151343133

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   4m 28s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 49s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  branch-2 passed  |
   | -0 :warning: |  patch  |   0m 44s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  9s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 17s |  the patch passed  |
   | -0 :warning: |  javac  |   2m 34s |  hbase-server generated 1 new + 192 
unchanged - 1 fixed = 193 total (was 193)  |
   | -0 :warning: |  checkstyle  |   0m 34s |  hbase-server: The patch 
generated 1 new + 14 unchanged - 1 fixed = 15 total (was 15)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  10m 44s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotless  |   0m 46s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  46m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4487 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux e68b9595254d 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 
16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b3c9ef34b7 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/artifact/yetus-general-check/output/diff-compile-javac-hbase-server.txt
 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 69 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4487/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4510: Backport "HBASE-27095 HbckChore should produce a report" to branch-2.5

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4510:
URL: https://github.com/apache/hbase/pull/4510#issuecomment-1151341790

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 45s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 25s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 47s |  the patch passed  |
   | -1 :x: |  shadedjars  |   6m 14s |  patch has 1 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 221m 39s |  hbase-server in the patch passed.  
|
   |  |   | 250m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4510 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5e7b3e95a777 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 7ead4b1617 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | shadedjars | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/testReport/
 |
   | Max. process+thread count | 2652 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-26983) Upgrade JRuby to 9.3.4.0

2022-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552296#comment-17552296
 ] 

Nick Dimiduk commented on HBASE-26983:
--

Does asciidoc depend on jruby? It appears that site generation has been broken 
since this commit... https://ci-hbase.apache.org/job/hbase_generate_website/101/

> Upgrade JRuby to 9.3.4.0
> 
>
> Key: HBASE-26983
> URL: https://issues.apache.org/jira/browse/HBASE-26983
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-2, 2.4.11
> Environment: Apple M1 OSX ARM64.
>Reporter: Vijay Akkineni
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-3
>
>
> Hbase shell is failing to start on Apple M1 OSX ARM 64 processor architecture.
> *Error:*
> {code}
> Version 2.4.11, r7e672a0da0586e6b7449310815182695bc6ae193, Tue Mar 15 
> 10:31:00 PDT 2022
> Took 0.0010 seconds
> NotImplementedError: fstat unimplemented unsupported or native support failed 
> to load; see https://github.com/jruby/jruby/wiki/Native-Libraries
>   initialize at org/jruby/RubyIO.java:1015
>         open at org/jruby/RubyIO.java:1156
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
>   initialize at 
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:410
>   initialize at uri:classloader:/irb/hirb.rb:49
>        at classpath:/jar-bootstrap.rb:223
> {code}
>  
> {*}Uname output{*}:
> {code}
> Darwin vijays-mbp.lan 21.4.0 Darwin Kernel Version 21.4.0: Fri Mar 18 
> 00:46:32 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T6000 arm64
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4511: Backport "HBASE-27095 HbckChore should produce a report" to branch-2.4

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4511:
URL: https://github.com/apache/hbase/pull/4511#issuecomment-1151340513

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 26s |  branch-2.4 passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  branch-2.4 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 29s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  branch-2.4 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 17s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 222m 53s |  hbase-server in the patch passed.  
|
   |  |   | 249m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4511 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 397fe28420a8 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.4 / 3d82d2d9e7 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/testReport/
 |
   | Max. process+thread count | 2747 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4511/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] ndimiduk commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


ndimiduk commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151336832

   Arg. I guess site generation is broken on master.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-27103:

Status: Patch Available  (was: In Progress)

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4459: HBASE-26366 Provide meaningful parent spans to ZK interactions

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4459:
URL: https://github.com/apache/hbase/pull/4459#issuecomment-1151333055

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 53s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 10s |  master passed  |
   | +1 :green_heart: |  compile  |   5m 39s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  master passed  |
   | +1 :green_heart: |  spotless  |   1m  1s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 59s |  master passed  |
   | -0 :warning: |  patch  |   0m 51s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 30s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 30s |  hbase-server: The patch 
generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  16m 15s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotless  |   0m 41s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  54m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4459 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux f75666d32a19 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 73 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4459/5/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151324098

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  master passed  |
   | -1 :x: |  refguide  |   0m 45s |  branch has 7 errors when building the 
reference guide.  |
   | +1 :green_heart: |  spotless  |   0m 46s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  0s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  refguide  |   0m 16s |  patch has 7 errors when building the 
reference guide.  |
   | +1 :green_heart: |  spotless  |   0m 44s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  14m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4513 |
   | Optional Tests | dupname asflicense checkstyle spotless xml refguide |
   | uname | Linux eb3b3b357893 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | refguide | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/artifact/yetus-general-check/output/branch-refguide.log
 |
   | refguide | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/artifact/yetus-general-check/output/patch-refguide.log
 |
   | Max. process+thread count | 69 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552282#comment-17552282
 ] 

Duo Zhang commented on HBASE-27103:
---

We already have this in our supplemental file.

{code}
  <supplement>
    <project>
      <groupId>org.fusesource.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
      <licenses>
        <license>
          <name>BSD 3-Clause License</name>
          <url>http://www.opensource.org/licenses/BSD-3-Clause</url>
          <distribution>repo</distribution>
          <comments>
            Copyright (c) 2011 FuseSource Corp. All rights reserved.
          </comments>
        </license>
      </licenses>
    </project>
  </supplement>
{code}

So I think it is fine to also add openlabtesting to it. They have the same 
license.
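
For illustration, a matching entry for the openlabtesting artifact could look like the sketch below (not the actual patch; it assumes the same supplemental-models.xml layout as the entry above, with the coordinates taken from the enforcer report quoted elsewhere in this thread):

{code}
  <supplement>
    <project>
      <groupId>org.openlabtesting.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
      <licenses>
        <license>
          <name>BSD 3-Clause License</name>
          <url>http://www.opensource.org/licenses/BSD-3-Clause</url>
          <distribution>repo</distribution>
        </license>
      </licenses>
    </project>
  </supplement>
{code}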

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151315824

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 38s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4513 |
   | Optional Tests |  |
   | uname | Linux fbe9f905aefa 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Max. process+thread count | 39 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4513: HBASE-27102 Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4513:
URL: https://github.com/apache/hbase/pull/4513#issuecomment-1151311880

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4513 |
   | Optional Tests |  |
   | uname | Linux 2990fbf13ff2 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Max. process+thread count | 31 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4513/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4501: HBASE-26218 Add logs in Canary tool

2022-06-09 Thread GitBox


Apache9 commented on PR #4501:
URL: https://github.com/apache/hbase/pull/4501#issuecomment-1151308560

   Mind explaining a bit about your usage? Why do you want to add these logs?
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on a diff in pull request #4457: HBASE-27028 Add a shell command for flushing master local region

2022-06-09 Thread GitBox


Apache9 commented on code in PR #4457:
URL: https://github.com/apache/hbase/pull/4457#discussion_r893675901


##
hbase-server/src/main/java/org/apache/hadoop/hbase/master/region/MasterRegionFlusherAndCompactor.java:
##
@@ -263,6 +263,14 @@ void requestFlush() {
 }
   }
 
+  void resetChangesAfterLastFlush() {
+changesAfterLastFlush.set(0);
+  }
+
+  void resetLastFlushTime() {

Review Comment:
   Should we name it recordLastFlushTime?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4510: Backport "HBASE-27095 HbckChore should produce a report" to branch-2.5

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4510:
URL: https://github.com/apache/hbase/pull/4510#issuecomment-1151304591

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 197m 28s |  hbase-server in the patch failed.  |
   |  |   | 216m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4510 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4399c315e8f4 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 7ead4b1617 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/testReport/
 |
   | Max. process+thread count | 2207 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4510/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552277#comment-17552277
 ] 

Andrew Kyle Purtell edited comment on HBASE-27103 at 6/9/22 3:46 PM:
-

To reproduce, check out branch-2.5, apply the patch from your PR, then do {{mvn 
clean install assembly:single -DskipTests -Dhadoop.profile=3.0 
-Dhadoop-three.version=3.3.3}} and this is the result:

{noformat}
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0:enforce 
(check-aggregate-license) on project hbase-assembly: Some Enforcer rules have 
failed. Look above for specific messages explaining why the rule failed. 
{noformat}

{noformat}
[INFO] --- maven-enforcer-plugin:3.0.0:enforce (check-aggregate-license) @ 
hbase-assembly ---
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.EvaluateBeanshell failed 
with message:
License errors detected, for more detail find ERROR in

/home/apurtell/src/hbase/hbase-assembly/target/maven-shared-archive-resources/META-INF/LICENSE
{noformat}

And in the LICENSE file:
{noformat}
This product includes leveldbjni-all licensed under the The BSD 3-Clause 
License.

ERROR: Please check  this License for acceptability here:

https://www.apache.org/legal/resolved

If it is okay, then update the list named 'non_aggregate_fine' in the 
LICENSE.vm file.
If it isn't okay, then revert the change that added the dependency.

More info on the dependency:

org.openlabtesting.leveldbjni
leveldbjni-all
1.8

maven central search
g:org.openlabtesting.leveldbjni AND a:leveldbjni-all AND v:1.8

project website
http://leveldbjni.fusesource.org/leveldbjni-all
project source
https://github.com/theopenlab/leveldbjni/leveldbjni-all
--
{noformat}



was (Author: apurtell):
To reproduce, check out branch-2.5, apply the patch from your PR, then do {{ 
mvn clean install assembly:single -DskipTests -Dhadoop.profile=3.0 
-Dhadoop-three.version=3.3.3 }} and this is the result:

{noformat}
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0:enforce 
(check-aggregate-license) on project hbase-assembly: Some Enforcer rules have 
failed. Look above for specific messages explaining why the rule failed. 
{noformat}

{noformat}
[INFO] --- maven-enforcer-plugin:3.0.0:enforce (check-aggregate-license) @ 
hbase-assembly ---
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.EvaluateBeanshell failed 
with message:
License errors detected, for more detail find ERROR in

/home/apurtell/src/hbase/hbase-assembly/target/maven-shared-archive-resources/META-INF/LICENSE
{noformat}

And in the LICENSE file:
{noformat}
This product includes leveldbjni-all licensed under the The BSD 3-Clause 
License.

ERROR: Please check  this License for acceptability here:

https://www.apache.org/legal/resolved

If it is okay, then update the list named 'non_aggregate_fine' in the 
LICENSE.vm file.
If it isn't okay, then revert the change that added the dependency.

More info on the dependency:

org.openlabtesting.leveldbjni
leveldbjni-all
1.8

maven central search
g:org.openlabtesting.leveldbjni AND a:leveldbjni-all AND v:1.8

project website
http://leveldbjni.fusesource.org/leveldbjni-all
project source
https://github.com/theopenlab/leveldbjni/leveldbjni-all
--
{noformat}


> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552277#comment-17552277
 ] 

Andrew Kyle Purtell commented on HBASE-27103:
-

To reproduce, check out branch-2.5, apply the patch from your PR, then do {{ 
mvn clean install assembly:single -DskipTests -Dhadoop.profile=3.0 
-Dhadoop-three.version=3.3.3 }} and this is the result:

{noformat}
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0:enforce 
(check-aggregate-license) on project hbase-assembly: Some Enforcer rules have 
failed. Look above for specific messages explaining why the rule failed. 
{noformat}

{noformat}
[INFO] --- maven-enforcer-plugin:3.0.0:enforce (check-aggregate-license) @ 
hbase-assembly ---
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.EvaluateBeanshell failed 
with message:
License errors detected, for more detail find ERROR in

/home/apurtell/src/hbase/hbase-assembly/target/maven-shared-archive-resources/META-INF/LICENSE
{noformat}

And in the LICENSE file:
{noformat}
This product includes leveldbjni-all licensed under the The BSD 3-Clause 
License.

ERROR: Please check  this License for acceptability here:

https://www.apache.org/legal/resolved

If it is okay, then update the list named 'non_aggregate_fine' in the 
LICENSE.vm file.
If it isn't okay, then revert the change that added the dependency.

More info on the dependency:

org.openlabtesting.leveldbjni
leveldbjni-all
1.8

maven central search
g:org.openlabtesting.leveldbjni AND a:leveldbjni-all AND v:1.8

project website
http://leveldbjni.fusesource.org/leveldbjni-all
project source
https://github.com/theopenlab/leveldbjni/leveldbjni-all
--
{noformat}
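
For context, a dependency with the coordinates reported above would usually be declared along these lines in a pom (an illustrative sketch only, not the change from the PR; the test scope is an assumption):

{code}
<!-- illustrative sketch: coordinates taken from the enforcer report above; scope is an assumption -->
<dependency>
  <groupId>org.openlabtesting.leveldbjni</groupId>
  <artifactId>leveldbjni-all</artifactId>
  <version>1.8</version>
  <scope>test</scope>
</dependency>
{code}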


> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4498: HBASE-27095 HbckChore should produce a report

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4498:
URL: https://github.com/apache/hbase/pull/4498#issuecomment-1151287864

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 54s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 231m 15s |  hbase-server in the patch failed.  |
   |  |   | 249m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4498 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 84e304e4fdc2 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 
16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/testReport/
 |
   | Max. process+thread count | 2203 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4509: Backport "HBASE-27095 HbckChore should produce a report" to branch-2

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4509:
URL: https://github.com/apache/hbase/pull/4509#issuecomment-1151284848

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 57s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 14s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 13s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m 27s |  hbase-server in the patch passed.  
|
   |  |   | 216m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4509/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4509 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2c0f3a54ab32 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b3c9ef34b7 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4509/1/testReport/
 |
   | Max. process+thread count | 2398 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4509/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552272#comment-17552272
 ] 

Duo Zhang commented on HBASE-27103:
---

OK, thanks [~apurtell] for taking this. Let me close my incomplete PR~

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache9 closed pull request #4512: HBASE-27103 All MR UTs are broken because of ClassNotFound

2022-06-09 Thread GitBox


Apache9 closed pull request #4512: HBASE-27103 All MR UTs are broken because of 
ClassNotFound
URL: https://github.com/apache/hbase/pull/4512


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-27105) HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout

2022-06-09 Thread Pankaj Kumar (Jira)
Pankaj Kumar created HBASE-27105:


 Summary: HBaseInterClusterReplicationEndpoint should honor 
replication adaptive timeout
 Key: HBASE-27105
 URL: https://issues.apache.org/jira/browse/HBASE-27105
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar


HBASE-23293 introduced replication.source.shipedits.timeout, which is adaptive: 
ReplicationSourceShipper#shipEdits() sets the adaptive timeout based on the 
number of retries. But on a CallTimeoutException in 
HBaseInterClusterReplicationEndpoint#replicate(), it keeps retrying the 
replication after a sleep with the same timeout value.
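
As a sketch of how that property is typically set in hbase-site.xml (the value shown is only an example, not necessarily the default from HBASE-23293):

{code}
<!-- hbase-site.xml sketch: example value only -->
<property>
  <name>replication.source.shipedits.timeout</name>
  <value>60000</value>
</property>
{code}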



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell reassigned HBASE-27103:
---

Assignee: Andrew Kyle Purtell  (was: Duo Zhang)

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552270#comment-17552270
 ] 

Andrew Kyle Purtell commented on HBASE-27103:
-

Let me take this [~zhangduo]. All we need is a supplemental model. If we need 
this dependency then we can add it. 

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27103) All MR UTs are broken because of ClassNotFound

2022-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-27103:

Fix Version/s: 2.5.0
   3.0.0-alpha-3

> All MR UTs are broken because of ClassNotFound
> --
>
> Key: HBASE-27103
> URL: https://issues.apache.org/jira/browse/HBASE-27103
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop3, test
>Reporter: Duo Zhang
>Assignee: Andrew Kyle Purtell
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-3
>
>
> It seems we must include leveldbjni-all when starting a MiniYARNCluster.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27104) Add a tool command list_unknownservers

2022-06-09 Thread LiangJun He (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiangJun He updated HBASE-27104:

Summary: Add a tool command list_unknownservers  (was: Add a tool command 
list_unknowservers)

> Add a tool command list_unknownservers
> --
>
> Key: HBASE-27104
> URL: https://issues.apache.org/jira/browse/HBASE-27104
> Project: HBase
>  Issue Type: New Feature
>  Components: master
>Affects Versions: 3.0.0-alpha-3
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Major
> Fix For: 3.0.0-alpha-3
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4507: Backport "HBASE-27066 The Region Visualizer display failed" to branch-2

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4507:
URL: https://github.com/apache/hbase/pull/4507#issuecomment-1151247622

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 16s |  branch-2 passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 57s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 234m  9s |  hbase-server in the patch passed.  
|
   |  |   | 244m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4507/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4507 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 546f6c336508 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 63a35facad |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4507/1/testReport/
 |
   | Max. process+thread count | 2660 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4507/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (HBASE-27104) Add a tool command list_unknowservers

2022-06-09 Thread LiangJun He (Jira)
LiangJun He created HBASE-27104:
---

 Summary: Add a tool command list_unknowservers
 Key: HBASE-27104
 URL: https://issues.apache.org/jira/browse/HBASE-27104
 Project: HBase
  Issue Type: New Feature
  Components: master
Affects Versions: 3.0.0-alpha-3
Reporter: LiangJun He
Assignee: LiangJun He
 Fix For: 3.0.0-alpha-3






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [hbase] Apache-HBase commented on pull request #4508: Backport "HBASE-27066 The Region Visualizer display failed" to branch-2.5

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4508:
URL: https://github.com/apache/hbase/pull/4508#issuecomment-1151235384

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 53s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 30s |  branch-2.5 passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 223m 24s |  hbase-server in the patch passed.  
|
   |  |   | 235m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4508 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux dbb51d571a0a 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 5cc614f23f |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/testReport/
 |
   | Max. process+thread count | 2555 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4498: HBASE-27095 HbckChore should produce a report

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4498:
URL: https://github.com/apache/hbase/pull/4498#issuecomment-1151232778

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 193m 55s |  hbase-server in the patch passed.  
|
   |  |   | 213m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4498 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 873f363dcf9c 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 
11 12:03:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/testReport/
 |
   | Max. process+thread count | 2516 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4498/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4502: HBASE-27094 Encryption data contains checksum

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4502:
URL: https://github.com/apache/hbase/pull/4502#issuecomment-1151229423

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  1s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m  9s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 10s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 199m 15s |  hbase-server in the patch passed.  
|
   |  |   | 218m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4502/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4502 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1486a562ba9e 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 414cfb30f6 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4502/4/testReport/
 |
   | Max. process+thread count | 2430 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4502/4/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4507: Backport "HBASE-27066 The Region Visualizer display failed" to branch-2

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4507:
URL: https://github.com/apache/hbase/pull/4507#issuecomment-1151192334

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 58s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 48s |  branch-2 passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 36s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 194m 22s |  hbase-server in the patch passed.  
|
   |  |   | 203m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4507/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4507 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 02885bd47e5c 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 63a35facad |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4507/1/testReport/
 |
   | Max. process+thread count | 2334 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4507/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4490: HBASE-27086 Fix graceful_stop cannot take previous balancer status by incompatibility of hbase shell prompt

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4490:
URL: https://github.com/apache/hbase/pull/4490#issuecomment-1151177365

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 16s |  master passed  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 251m 18s |  root in the patch passed.  |
   |  |   | 265m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4490/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4490 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 2d5d3a07d442 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 9342653691 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4490/2/testReport/
 |
   | Max. process+thread count | 3799 (vs. ulimit of 3) |
   | modules | C: hbase-shell . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4490/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4508: Backport "HBASE-27066 The Region Visualizer display failed" to branch-2.5

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4508:
URL: https://github.com/apache/hbase/pull/4508#issuecomment-1151176844

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 16s |  branch-2.5 passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  0s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 180m  9s |  hbase-server in the patch failed.  |
   |  |   | 190m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4508 |
   | Optional Tests | javac javadoc unit |
   | uname | Linux 880d742ae413 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 
11 12:03:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / 5cc614f23f |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/testReport/
 |
   | Max. process+thread count | 2557 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4508/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] wenwj0 commented on pull request #4506: HBASE-27101 support commons-crypto version 1.1.0

2022-06-09 Thread GitBox


wenwj0 commented on PR #4506:
URL: https://github.com/apache/hbase/pull/4506#issuecomment-1151165394

   Can someone tell me why it fails in mvninstall? I only changed one place.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4506: HBASE-27101 support commons-crypto version 1.1.0

2022-06-09 Thread GitBox


Apache-HBase commented on PR #4506:
URL: https://github.com/apache/hbase/pull/4506#issuecomment-1151151477

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 15s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 338m 42s |  root in the patch passed.  |
   |  |   | 363m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4506/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4506 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 91f5b2384b0d 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 9342653691 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4506/1/testReport/
 |
   | Max. process+thread count | 3920 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4506/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27102) Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-27102:
-
Assignee: Nick Dimiduk
  Status: Patch Available  (was: Open)

> Vacate the .idea folder in order to simplify spotless configuration
> ---
>
> Key: HBASE-27102
> URL: https://issues.apache.org/jira/browse/HBASE-27102
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.4.12, 3.0.0-alpha-2, 2.5.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
>
> Per discussion on HBASE-27096, spotless configuration is a bit ham-handed. We 
> can simplify its maintenance by ignoring entirely the {{.idea}} directory. 
> Since committing project files there is not workable at the moment, the only 
> functionality present there is {{checkstyle-idea.xml}}. Let's move that to 
> {{dev-support}} and update the book accordingly.
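
As a rough sketch of the kind of change described above (the plugin coordinates
and format layout here are assumptions for illustration; the actual spotless
setup in the HBase root pom may differ), the {{.idea}} directory could simply be
excluded wholesale from a spotless-maven-plugin format block, after which the
remaining {{checkstyle-idea.xml}} can live under {{dev-support}} as the
description suggests:

{code:xml}
<!-- Hypothetical excerpt of a root pom.xml spotless configuration;
     include patterns and steps are illustrative, not the project's
     actual settings. -->
<plugin>
  <groupId>com.diffplug.spotless</groupId>
  <artifactId>spotless-maven-plugin</artifactId>
  <configuration>
    <formats>
      <format>
        <!-- format tracked XML files, but ignore IDE project metadata entirely -->
        <includes>
          <include>**/*.xml</include>
        </includes>
        <excludes>
          <exclude>.idea/**</exclude>
        </excludes>
        <trimTrailingWhitespace/>
        <endWithNewline/>
      </format>
    </formats>
  </configuration>
</plugin>
{code}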



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27102) Vacate the .idea folder in order to simplify spotless configuration

2022-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17552228#comment-17552228
 ] 

Nick Dimiduk commented on HBASE-27102:
--

Nah, just delete it.

> Vacate the .idea folder in order to simplify spotless configuration
> ---
>
> Key: HBASE-27102
> URL: https://issues.apache.org/jira/browse/HBASE-27102
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.4.12
>Reporter: Nick Dimiduk
>Priority: Major
>
> Per discussion on HBASE-27096, spotless configuration is a bit ham-handed. We 
> can simplify its maintenance by ignoring entirely the {{.idea}} directory. 
> Since committing project files there is not workable at the moment, the only 
> functionality present there is {{checkstyle-idea.xml}}. Let's move that to 
> {{dev-support}} and update the book accordingly.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


  1   2   >