[jira] [Assigned] (HBASE-23940) Dynamically turn on RingBuffer for slow/large RPC logs

2022-06-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HBASE-23940:


Assignee: (was: Viraj Jasani)

> Dynamically turn on RingBuffer for slow/large RPC logs
> --
>
> Key: HBASE-23940
> URL: https://issues.apache.org/jira/browse/HBASE-23940
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Viraj Jasani
>Priority: Major
>
> Make hbase.regionserver.slowlog.buffer.enabled dynamically configurable and 
> accordingly turn the RingBuffer on/off in RegionServers to start/stop storing 
> complete slow RPC logs for operators to retrieve.
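
A minimal sketch of the intended behavior, assuming a ConfigurationObserver-style hook; the ring buffer start/stop methods below are hypothetical placeholders, not the actual slowlog implementation:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.conf.ConfigurationObserver;

public class SlowLogToggleSketch implements ConfigurationObserver {
  private volatile boolean slowLogBufferEnabled;

  @Override
  public void onConfigurationChange(Configuration conf) {
    boolean enabled = conf.getBoolean("hbase.regionserver.slowlog.buffer.enabled", false);
    if (enabled != slowLogBufferEnabled) {
      slowLogBufferEnabled = enabled;
      if (enabled) {
        startRingBufferConsumer(); // hypothetical: begin recording slow/large RPC events
      } else {
        stopRingBufferConsumer();  // hypothetical: stop recording and drop buffered events
      }
    }
  }

  private void startRingBufferConsumer() { /* start the ring buffer consumer */ }

  private void stopRingBufferConsumer() { /* stop the consumer and release the buffer */ }
}
{code}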



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-25709) Close region may stuck when region is compacting and skipped most cells read

2022-06-14 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554336#comment-17554336
 ] 

Viraj Jasani commented on HBASE-25709:
--

[~Xiaolin Ha] Thanks for providing further resolution. I am quite occupied this 
week; if it is still not reviewed by then, I will take a look next week. Thanks!

[~bbeaudreault] At a high level, we can say that if the rows are quite large and 
also carry delete markers, those delete markers are returned by the scan as well. 
The patch I added in my previous comment would help in understanding this at a 
low level, but it applies to the test that was reverted with [this 
commit|https://github.com/apache/hbase/commit/5e34cdf1ef914b7c5d60df0edebd2f32ba543d02].
Basically, the repro can be done by reducing 
HBASE_CELLS_SCANNED_PER_HEARTBEAT_CHECK in the test.
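
For reference, a minimal sketch of that repro knob, assuming the constant maps to the 
"hbase.cells.scanned.per.heartbeat.check" configuration key (as in StoreScanner); the 
value 100 is arbitrary:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HeartbeatCheckRepro {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Lower the per-heartbeat-check cell count so ScannerContext limit checks fire
    // far more often during the compaction scan; pass this conf to the test cluster.
    conf.setLong("hbase.cells.scanned.per.heartbeat.check", 100L);
    System.out.println(conf.getLong("hbase.cells.scanned.per.heartbeat.check", -1L));
  }
}
{code}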

> Close region may stuck when region is compacting and skipped most cells read
> 
>
> Key: HBASE-25709
> URL: https://issues.apache.org/jira/browse/HBASE-25709
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 1.7.1, 3.0.0-alpha-2, 2.4.10
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.5.0, 2.6.0, 2.4.11, 3.0.0-alpha-4
>
> Attachments: Master-UI-RIT.png, RS-region-state.png
>
>
> We found a stuck close-region operation in our cluster. The region was compacting, 
> and its store files had many TTL-expired cells. The close-region state 
> marker (HRegion#writestate.writesEnabled) was not checked during compaction, 
> because most cells were skipped. 
> !RS-region-state.png|width=698,height=310!
>  
> !Master-UI-RIT.png|width=693,height=157!
>  
> HBASE-23968 encountered a similar problem, but its solution sits outside 
> the method
> InternalScanner#next(List result, ScannerContext scannerContext), which 
> will not return if many cells are skipped under the current compaction 
> scanner context. As a result, we need the next method to return in time, 
> and then check the stop marker.
>  
>  
>  
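
To illustrate the fix described above, a simplified and hypothetical outer compaction loop (not the real Compactor code; the writes-enabled check is abstracted as a supplier):
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScannerContext;

// Hypothetical, simplified outer loop; not the actual Compactor code.
public class CompactionLoopSketch {
  // writesEnabled stands in for the HRegion#writestate.writesEnabled check.
  static boolean drain(InternalScanner scanner, ScannerContext scannerContext,
      BooleanSupplier writesEnabled) throws IOException {
    List<Cell> cells = new ArrayList<>();
    boolean hasMore;
    do {
      // With the fix, next() returns periodically even when every cell is skipped,
      // so the stop marker below actually gets consulted.
      hasMore = scanner.next(cells, scannerContext);
      // ... write the returned cells to the new store file ...
      cells.clear();
      if (!writesEnabled.getAsBoolean()) {
        return false; // abort the compaction so the region close can proceed
      }
    } while (hasMore);
    return true;
  }
}
{code}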



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555233#comment-17555233
 ] 

Viraj Jasani commented on HBASE-26708:
--

The current env is using auth-conf: 
{code:java}
security.HBaseSaslRpcClient - SASL client context established. Negotiated QoP: 
auth-conf{code}

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using the default Netty-based RpcServer and 
> RpcClient implementations results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.th

[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555240#comment-17555240
 ] 

Viraj Jasani commented on HBASE-27097:
--

When we use BlockingRpcClient or NettyRpcClient against SimpleRpcServer in 
hbase2 (with SaslAuth, Negotiated QoP: auth-conf), hbase shell commands throw 
checksum failed errors:
{code:java}
hbase:010:0> rit


ERROR: Checksum failed
 {code}

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.13
>
> Attachments: MultiByteBuff.patch
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns, but they all have 
> SimpleServerRpcConnection#process in common. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> SimpleRpcServer. That said, this is just a superficial analysis.
> 1) "Protocol message end-group tag did not match expected tag"
> {noformat}
>  2022-06-07T16:44:04,625 WARN  
> [Reader=5,bindAddress=buildbox.localdomain,port=8120] ipc.RpcServer: 
> /127.0.1.1:8120 is unable to read call parameter from client 127.0.0.1
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Protocol message end-group tag did not match expected tag.
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:129)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.checkLastTagWas(CodedInputStream.java:4034)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4275)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10520)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue.(ClientProtos.java:10464)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12251)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:12245)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream$ByteInputDecoder.readMessage(CodedInputStream.java:4274)
>  ~[hbase-shaded-protobuf-4.1.0.jar:4.1.0]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9981)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto.(ClientProtos.java:9910)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14097)
>  ~[hbase-protocol-shaded-2.5.1-SNAPSHOT.jar:2.5.1-SNAPSHOT]
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:14091)
>  ~[hbase-protocol

[jira] [Comment Edited] (HBASE-27097) SimpleRpcServer is broken

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555240#comment-17555240
 ] 

Viraj Jasani edited comment on HBASE-27097 at 6/16/22 6:33 PM:
---

When we use BlockingRpcClient or NettyRpcClient against SimpleRpcServer in 
hbase2 (with SaslAuth, Negotiated QoP: auth-conf), hbase shell commands throw 
checksum failed errors:
{code:java}
hbase:010:0> rit


ERROR: Checksum failed
 {code}
 

Similarly, scanning of SYSTEM.CATALOG in Phoenix also fails:
{code:java}
Caused by: java.net.SocketTimeoutException: callTimeout=6, 
callDuration=68788: Call to address=regionserver-1:60020 failed on local 
exception: javax.security.sasl.SaslException: Problems unwrapping SASL buffer 
[Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: 
Could not use AES128 Cipher - Checksum failed)] row '' on table 
'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1651182322114.9706a466ac24135ce93769671b601652., 
hostname=regionserver-1,60020,1655397068266, seqNum=256
    at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156)
    at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:74)
    ... 3 more
Caused by: javax.security.sasl.SaslException: Call to 
address=regionserver-1:60020 failed on local exception: 
javax.security.sasl.SaslException: Problems unwrapping SASL buffer [Caused by 
GSSException: Failure unspecified at GSS-API level (Mechanism level: Could not 
use AES128 Cipher - Checksum failed)] [Caused by 
javax.security.sasl.SaslException: Problems unwrapping SASL buffer [Caused by 
GSSException: Failure unspecified at GSS-API level (Mechanism level: Could not 
use AES128 Cipher - Checksum failed)]]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:240)
    at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385)
    at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:89)
    at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:417)
    at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:413)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.cleanupCalls(NettyRpcDuplexHandler.java:203)
    at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.exceptionCaught(NettyRpcDuplexHandler.java:220)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
    at 
org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:143)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:381)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
    at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:314)
    at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:435)
    at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at 
org.apache.hbase.thirdpa

[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555264#comment-17555264
 ] 

Viraj Jasani commented on HBASE-27097:
--

Scanning of hbase:meta is very slow and it keeps throwing this after a while:
{code:java}
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
9036221070314570992 number_of_rows: 100 close_scanner: false next_call_seq: 0 
client_handles_partials: true client_handles_heartbeats: true 
track_scan_metrics: false renew: false
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.checkScanNextCallSeq(RSRpcServices.java:3197)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3549)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45819)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:384)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:371)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:351)


at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:378)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:366)
at 
org.apache.hadoop.hbase.client.ScannerCallable.next(ScannerCallable.java:194)
at 
org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:258)
at 
org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58)
at 
org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:124)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:189)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:393)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:367)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:74)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException):
 org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
9036221070314570992 number_of_rows: 100 close_scanner: false next_call_seq: 0 
client_handles_partials: true client_handles_heartbeats: true 
track_scan_metrics: false renew: false
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.checkScanNextCallSeq(RSRpcServices.java:3197)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3549)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45819)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:384)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:371)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:351)


at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:89)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:417)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:413)
at org.

[jira] [Commented] (HBASE-27097) SimpleRpcServer is broken

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555270#comment-17555270
 ] 

Viraj Jasani commented on HBASE-27097:
--

On the other hand, if we don't change the client impl (i.e. keep the default 
NettyRpcClient) against SimpleRpcServer (with the same auth as above), the 
CatalogJanitor keeps failing with:

 
{code:java}
2022-06-16 17:15:31,890 WARN  [,queue=15,port=60020] ipc.RpcServer - 
RpcServer.default.FPBQ.Fifo.handler=249,queue=15,port=60020: caught: 
org.apache.hbase.thirdparty.io.netty.util.IllegalReferenceCountException: 
refCnt: 0, decrement: 1
    at 
org.apache.hbase.thirdparty.io.netty.util.internal.ReferenceCountUpdater.toLiveRealRefCnt(ReferenceCountUpdater.java:74)
    at 
org.apache.hbase.thirdparty.io.netty.util.internal.ReferenceCountUpdater.release(ReferenceCountUpdater.java:138)
    at 
org.apache.hbase.thirdparty.io.netty.util.AbstractReferenceCounted.release(AbstractReferenceCounted.java:76)
    at org.apache.hadoop.hbase.nio.ByteBuff.release(ByteBuff.java:77)
    at org.apache.hadoop.hbase.ipc.ServerCall.cleanup(ServerCall.java:165)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:162)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:371)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:351)



2022-06-16 17:14:19,743 WARN  [RpcServer.responder] ipc.RpcServer - 
RpcServer.responder: exception in Responder java.lang.NullPointerException
    at org.apache.hadoop.hbase.ipc.ServerCall.wrapWithSasl(ServerCall.java:401)
    at org.apache.hadoop.hbase.ipc.ServerCall.getResponse(ServerCall.java:548)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.processResponse(SimpleRpcServerResponder.java:226)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.processAllResponses(SimpleRpcServerResponder.java:270)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.doAsyncWrite(SimpleRpcServerResponder.java:203)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.doRunLoop(SimpleRpcServerResponder.java:122)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.run(SimpleRpcServerResponder.java:58)java.lang.NullPointerException
    at org.apache.hadoop.hbase.ipc.ServerCall.wrapWithSasl(ServerCall.java:401)
    at org.apache.hadoop.hbase.ipc.ServerCall.getResponse(ServerCall.java:548)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.processResponse(SimpleRpcServerResponder.java:226)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.processAllResponses(SimpleRpcServerResponder.java:270)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.doAsyncWrite(SimpleRpcServerResponder.java:203)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.doRunLoop(SimpleRpcServerResponder.java:122)
    at 
org.apache.hadoop.hbase.ipc.SimpleRpcServerResponder.run(SimpleRpcServerResponder.java:58)



2022-06-16 17:34:19,921 WARN  [ster-2:6.Chore.1] janitor.CatalogJanitor - 
Failed janitorial scan of hbase:meta table
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=46, exceptions: {code}

> SimpleRpcServer is broken
> -
>
> Key: HBASE-27097
> URL: https://issues.apache.org/jira/browse/HBASE-27097
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Priority: Blocker
> Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.13
>
> Attachments: MultiByteBuff.patch
>
>
> Concerns about SimpleRpcServer are not new, and not new to 2.5.  @chenxu 
> noticed a problem on HBASE-23917 back in 2020. After some simple evaluations 
> it seems quite broken. 
> When I run an async version of ITLCC against a 2.5.0 cluster configured with 
> hbase.rpc.server.impl=SimpleRpcServer, the client almost immediately stalls 
> because there are too many in flight requests. The logic to pause with too 
> many in flight requests is my own. That's not important. Looking at the 
> server logs it is apparent that SimpleRpcServer is quite broken. Handlers 
> suffer frequent protobuf parse errors and do not properly return responses to 
> the client. This is what stalls my test client. Rather quickly all available 
> request slots are full of requests that will have to time out on the client 
> side. 
> Exceptions have three patterns, but they all have 
> SimpleServerRpcConnection#process in common. It seems likely the root cause is 
> mismatched expectations or bugs in connection buffer handling in 
> SimpleRpcServer/SimpleServerRpcConnection versus downstream classes that 
> process and parse the buffers. It also seems likely that changes were made to 
> downstream classes like ServerRpcConnection expecting NettyRpcServer's 
> particulars without updating SimpleServerRpcConnection and/or 
> S

[jira] [Commented] (HBASE-27112) Investigate Netty resource usage limits

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555277#comment-17555277
 ] 

Viraj Jasani commented on HBASE-27112:
--

{quote}there are diminishing returns when threads > cores so a reasonable 
default here could be Runtime.getRuntime().availableProcessors() instead of 
unbounded?
{quote}
+1 for this one [~apurtell] 
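
For illustration, a hedged sketch of what a bounded default could look like at the Netty level (class names are from the shaded Netty that HBase uses; this is not the actual HBase wiring):
{code:java}
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup;

public class BoundedEventLoopSketch {
  public static EventLoopGroup create() {
    // Cap the event loop threads at the core count instead of letting Netty
    // fall back to its own default of availableProcessors() * 2.
    int threads = Runtime.getRuntime().availableProcessors();
    return new NioEventLoopGroup(threads);
  }
}
{code}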

> Investigate Netty resource usage limits
> ---
>
> Key: HBASE-27112
> URL: https://issues.apache.org/jira/browse/HBASE-27112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> We leave Netty level resource limits unbounded. The number of threads to use 
> for the event loop is default 0 (unbounded). The default for 
> io.netty.eventLoop.maxPendingTasks is INT_MAX. 
> We don't do that for our own RPC handlers. We have a notion of maximum 
> handler pool size, with a default of 30, typically raised in production by 
> the user. We constrain the depth of the request queue in multiple ways... 
> limits on the number of queued calls, limits on the total size of call data that 
> can be queued (to avoid memory usage overrun), CoDel conditioning of the call 
> queues if it is enabled, and so on.
> Under load, can we pile up an excess of pending request state, such as direct 
> buffers containing request bytes, at the netty layer because of downstream 
> resource limits? Those limits will act as a bottleneck, as intended, and 
> before would have also applied backpressure through RPC too, because 
> SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
> default 10), but Netty may be able to queue up a lot more, in comparison, 
> because Netty has been optimized to prefer concurrency.
> Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
> (unbounded). I don't know what it can actually get up to in production, 
> because we lack the metric, but there are diminishing returns when threads > 
> cores so a reasonable default here could be 
> Runtime.getRuntime().availableProcessors() instead of unbounded?
> maxPendingTasks probably should not be INT_MAX, but that may matter less.
> The tasks here are:
> - Instrument netty level resources to understand better actual resource 
> allocations under load. Investigate what we need to plug in where to gain 
> visibility. 
> - Where instrumentation designed for this issue can be implemented as low 
> overhead metrics, consider formally adding them as a metric. 
> - Based on the findings from this instrumentation, consider and implement 
> next steps. The goal would be to limit concurrency at the Netty layer in such 
> a way that performance is still good, and under load we don't balloon 
> resource usage at the Netty layer.
> If the instrumentation and experimental results indicate no changes are 
> necessary, we can close this as Not A Problem or WontFix. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27112) Investigate Netty resource usage limits

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555278#comment-17555278
 ] 

Viraj Jasani commented on HBASE-27112:
--

When "hbase.netty.eventloop.rpcserver.thread.count" is 0, it uses 
_*NettyRuntime.availableProcessors() * 2*_ by default:
{code:java}
static {
  DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
    "org.apache.hbase.thirdparty.io.netty.eventLoopThreads",
    NettyRuntime.availableProcessors() * 2));

  if (logger.isDebugEnabled()) {
    logger.debug("-Dio.netty.eventLoopThreads: {}", DEFAULT_EVENT_LOOP_THREADS);
  }
}
{code}
{code:java}
/**
 * Get the configured number of available processors. The default is
 * {@link Runtime#availableProcessors()}. This can be overridden by setting the system
 * property "org.apache.hbase.thirdparty.io.netty.availableProcessors" or by invoking
 * {@link #setAvailableProcessors(int)} before any calls to this method.
 *
 * @return the configured number of available processors
 */
public static int availableProcessors() {
  return holder.availableProcessors();
}
{code}
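
So, roughly, the effective default resolves as sketched below (illustrative only; the real resolution happens inside HBase's Netty event loop setup and the Netty static initializer above, not in user code):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EventLoopThreadsSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // 0 means "let Netty decide", which resolves to max(1, availableProcessors() * 2)
    // unless the eventLoopThreads system property shown above is set.
    int configured = conf.getInt("hbase.netty.eventloop.rpcserver.thread.count", 0);
    int effective = configured > 0 ? configured
      : Math.max(1, Runtime.getRuntime().availableProcessors() * 2);
    System.out.println("effective event loop threads: " + effective);
  }
}
{code}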

> Investigate Netty resource usage limits
> ---
>
> Key: HBASE-27112
> URL: https://issues.apache.org/jira/browse/HBASE-27112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> We leave Netty level resource limits unbounded. The number of threads to use 
> for the event loop is default 0 (unbounded). The default for 
> io.netty.eventLoop.maxPendingTasks is INT_MAX. 
> We don't do that for our own RPC handlers. We have a notion of maximum 
> handler pool size, with a default of 30, typically raised in production by 
> the user. We constrain the depth of the request queue in multiple ways... 
> limits on the number of queued calls, limits on the total size of call data that 
> can be queued (to avoid memory usage overrun), CoDel conditioning of the call 
> queues if it is enabled, and so on.
> Under load, can we pile up an excess of pending request state, such as direct 
> buffers containing request bytes, at the netty layer because of downstream 
> resource limits? Those limits will act as a bottleneck, as intended, and 
> before would have also applied backpressure through RPC too, because 
> SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
> default 10), but Netty may be able to queue up a lot more, in comparison, 
> because Netty has been optimized to prefer concurrency.
> Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
> (unbounded). I don't know what it can actually get up to in production, 
> because we lack the metric, but there are diminishing returns when threads > 
> cores so a reasonable default here could be 
> Runtime.getRuntime().availableProcessors() instead of unbounded?
> maxPendingTasks probably should not be INT_MAX, but that may matter less.
> The tasks here are:
> - Instrument netty level resources to understand better actual resource 
> allocations under load. Investigate what we need to plug in where to gain 
> visibility. 
> - Where instrumentation designed for this issue can be implemented as low 
> overhead metrics, consider formally adding them as a metric. 
> - Based on the findings from this instrumentation, consider and implement 
> next steps. The goal would be to limit concurrency at the Netty layer in such 
> a way that performance is still good, and under load we don't balloon 
> resource usage at the Netty layer.
> If the instrumentation and experimental results indicate no changes are 
> necessary, we can close this as Not A Problem or WontFix. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27112) Investigate Netty resource usage limits

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555293#comment-17555293
 ] 

Viraj Jasani commented on HBASE-27112:
--

I think what [~zhangduo] mentioned on this comment might be worth a try, 
wondering if this can be quickly tested (might need Duo's help):
{quote}Actually, when auth-int or auth-conf is used, we will copy the bytes 
from netty's BB to on heap byte array, wrap or unwrap it, and then just 
Unpooled.wrappedBuffer to pass the on heap byte array to later handlers. In this 
way, actually we can release netty's native byte buf earlier...

[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslWrapHandler.java]
[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUnwrapHandler.java]
{quote}
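
To make the quoted idea concrete, here is a rough, hypothetical handler sketch (not the actual SaslUnwrapHandler code linked above): copy the bytes onto the heap so the pooled/direct buffer can be released early, then pass a heap-backed buffer downstream.
{code:java}
import javax.security.sasl.SaslClient;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
import org.apache.hbase.thirdparty.io.netty.buffer.Unpooled;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
import org.apache.hbase.thirdparty.io.netty.channel.SimpleChannelInboundHandler;

public class UnwrapSketch extends SimpleChannelInboundHandler<ByteBuf> {
  private final SaslClient saslClient;

  public UnwrapSketch(SaslClient saslClient) {
    this.saslClient = saslClient;
  }

  @Override
  protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
    // Copy the (possibly direct, pooled) bytes onto the heap; SimpleChannelInboundHandler
    // releases msg once this method returns, so the native buffer goes back to Netty early.
    byte[] wrapped = new byte[msg.readableBytes()];
    msg.readBytes(wrapped);
    // Unwrap on heap and hand a heap-backed buffer to the next handler.
    ctx.fireChannelRead(Unpooled.wrappedBuffer(saslClient.unwrap(wrapped, 0, wrapped.length)));
  }
}
{code}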
 

> Investigate Netty resource usage limits
> ---
>
> Key: HBASE-27112
> URL: https://issues.apache.org/jira/browse/HBASE-27112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> We leave Netty level resource limits unbounded. The number of threads to use 
> for the event loop is default 0 (unbounded). The default for 
> io.netty.eventLoop.maxPendingTasks is INT_MAX. 
> We don't do that for our own RPC handlers. We have a notion of maximum 
> handler pool size, with a default of 30, typically raised in production by 
> the user. We constrain the depth of the request queue in multiple ways... 
> limits on the number of queued calls, limits on the total size of call data that 
> can be queued (to avoid memory usage overrun), CoDel conditioning of the call 
> queues if it is enabled, and so on.
> Under load, can we pile up an excess of pending request state, such as direct 
> buffers containing request bytes, at the netty layer because of downstream 
> resource limits? Those limits will act as a bottleneck, as intended, and 
> before would have also applied backpressure through RPC too, because 
> SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
> default 10), but Netty may be able to queue up a lot more, in comparison, 
> because Netty has been optimized to prefer concurrency.
> Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
> (unbounded). I don't know what it can actually get up to in production, 
> because we lack the metric, but there are diminishing returns when threads > 
> cores so a reasonable default here could be 
> Runtime.getRuntime().availableProcessors() instead of unbounded?
> maxPendingTasks probably should not be INT_MAX, but that may matter less.
> The tasks here are:
> - Instrument netty level resources to understand better actual resource 
> allocations under load. Investigate what we need to plug in where to gain 
> visibility. 
> - Where instrumentation designed for this issue can be implemented as low 
> overhead metrics, consider formally adding them as a metric. 
> - Based on the findings from this instrumentation, consider and implement 
> next steps. The goal would be to limit concurrency at the Netty layer in such 
> a way that performance is still good, and under load we don't balloon 
> resource usage at the Netty layer.
> If the instrumentation and experimental results indicate no changes are 
> necessary, we can close this as Not A Problem or WontFix. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (HBASE-27112) Investigate Netty resource usage limits

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555293#comment-17555293
 ] 

Viraj Jasani edited comment on HBASE-27112 at 6/16/22 8:58 PM:
---

I think what [~zhangduo] mentioned on this comment might be worth a try, 
wondering if this can be quickly tested (might need Duo's help):

https://issues.apache.org/jira/browse/HBASE-26708?focusedCommentId=17552505&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17552505
{quote}Actually, when auth-int or auth-conf is used, we will copy the bytes 
from netty's BB to on heap byte array, wrap or unwrap it, and then just 
Unpooled.wrappedBuffer to pass the on heap byte array to later handlers. In this 
way, actually we can release netty's native byte buf earlier...

[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslWrapHandler.java]
[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUnwrapHandler.java]
{quote}
 


was (Author: vjasani):
I think what [~zhangduo] mentioned on this comment might be worth a try, 
wondering if this can be quickly tested (might need Duo's help):
{quote}Actually, when auth-int or auth-conf is used, we will copy the bytes 
from netty's BB to on heap byte array, wrap or unwrap it, and then just 
Unpooled.wrappedBuffer to pass the on heap byte array to later handlers. In this 
way, actually we can release netty's native byte buf earlier...

[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslWrapHandler.java]
[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUnwrapHandler.java]
{quote}
 

> Investigate Netty resource usage limits
> ---
>
> Key: HBASE-27112
> URL: https://issues.apache.org/jira/browse/HBASE-27112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> We leave Netty level resource limits unbounded. The number of threads to use 
> for the event loop is default 0 (unbounded). The default for 
> io.netty.eventLoop.maxPendingTasks is INT_MAX. 
> We don't do that for our own RPC handlers. We have a notion of maximum 
> handler pool size, with a default of 30, typically raised in production by 
> the user. We constrain the depth of the request queue in multiple ways... 
> limits on the number of queued calls, limits on the total size of call data that 
> can be queued (to avoid memory usage overrun), CoDel conditioning of the call 
> queues if it is enabled, and so on.
> Under load, can we pile up an excess of pending request state, such as direct 
> buffers containing request bytes, at the netty layer because of downstream 
> resource limits? Those limits will act as a bottleneck, as intended, and 
> before would have also applied backpressure through RPC too, because 
> SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
> default 10), but Netty may be able to queue up a lot more, in comparison, 
> because Netty has been optimized to prefer concurrency.
> Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
> (unbounded). I don't know what it can actually get up to in production, 
> because we lack the metric, but there are diminishing returns when threads > 
> cores so a reasonable default here could be 
> Runtime.getRuntime().availableProcessors() instead of unbounded?
> maxPendingTasks probably should not be INT_MAX, but that may matter less.
> The tasks here are:
> - Instrument netty level resources to understand better actual resource 
> allocations under load. Investigate what we need to plug in where to gain 
> visibility. 
> - Where instrumentation designed for this issue can be implemented as low 
> overhead metrics, consider formally adding them as a metric. 
> - Based on the findings from this instrumentation, consider and implement 
> next steps. The goal would be to limit concurrency at the Netty layer in such 
> a way that performance is still good, and under load we don't balloon 
> resource usage at the Netty layer.
> If the instrumentation and experimental results indicate no changes are 
> necessary, we can close this as Not A Problem or WontFix. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (HBASE-27112) Investigate Netty resource usage limits

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555293#comment-17555293
 ] 

Viraj Jasani edited comment on HBASE-27112 at 6/16/22 9:02 PM:
---

I think what [~zhangduo] mentioned on this comment might be worth a try, 
wondering if this can be quickly tested (might need Duo's help):

https://issues.apache.org/jira/browse/HBASE-26708?focusedCommentId=17552505&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17552505
{quote}Actually, when auth-int or auth-conf is used, we will copy the bytes 
from netty's BB to on heap byte array, wrap or unwrap it, and then just 
Unpooled.wrappedBuffer to pass the on heap byte array to later handlers. In this 
way, actually we can release netty's native byte buf earlier...

[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslWrapHandler.java]
[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUnwrapHandler.java]
{quote}
 

This is where we check whether we need to negotiate the connection header with 
the server:
{code:java}
// check if negotiate with server for connection header is necessary
if (saslHandler.isNeedProcessConnectionHeader()) {
  Promise connectionHeaderPromise = ch.eventLoop().newPromise();
  // create the handler to handle the connection header
  ChannelHandler chHandler = new NettyHBaseRpcConnectionHeaderHandler(
    connectionHeaderPromise, conf, connectionHeaderWithLength);

  // add ReadTimeoutHandler to deal with server doesn't response connection header
  // because of the different configuration in client side and server side
  p.addFirst(
    new ReadTimeoutHandler(RpcClient.DEFAULT_SOCKET_TIMEOUT_READ, TimeUnit.MILLISECONDS));
  p.addLast(chHandler);
  connectionHeaderPromise.addListener(new FutureListener() {
    @Override
    public void operationComplete(Future future) throws Exception {
      if (future.isSuccess()) {
        ChannelPipeline p = ch.pipeline();
        p.remove(ReadTimeoutHandler.class);
        p.remove(NettyHBaseRpcConnectionHeaderHandler.class);
        // don't send connection header, NettyHbaseRpcConnectionHeaderHandler
        // sent it already
        established(ch);
      } else {
        final Throwable error = future.cause();
        scheduleRelogin(error);
        failInit(ch, toIOE(error));
      }
    }
  });
} {code}
 


was (Author: vjasani):
I think what [~zhangduo] mentioned on this comment might be worth a try, 
wondering if this can be quickly tested (might need Duo's help):

https://issues.apache.org/jira/browse/HBASE-26708?focusedCommentId=17552505&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17552505
{quote}Actually, when auth-int or auth-conf is used, we will copy the bytes 
from netty's BB to on heap byte array, wrap or unwrap it, and then just 
Unpooled.wrappedBuffer to pass the on heap byte array to later handlers. In this 
way, actually we can release netty's native byte buf earlier...

[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslWrapHandler.java]
[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUnwrapHandler.java]
{quote}
 

> Investigate Netty resource usage limits
> ---
>
> Key: HBASE-27112
> URL: https://issues.apache.org/jira/browse/HBASE-27112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> We leave Netty level resource limits unbounded. The number of threads to use 
> for the event loop is default 0 (unbounded). The default for 
> io.netty.eventLoop.maxPendingTasks is INT_MAX. 
> We don't do that for our own RPC handlers. We have a notion of maximum 
> handler pool size, with a default of 30, typically raised in production by 
> the user. We constrain the depth of the request queue in multiple ways... 
> limits on the number of queued calls, limits on the total size of call data that 
> can be queued (to avoid memory usage overrun), CoDel conditioning of the call 
> queues if it is enabled, and so on.
> Under load, can we pile up an excess of pending request state, such as direct 
> buffers containing request bytes, at the netty layer because of downstream 
> resource limits? Those limits will act as a bottleneck, as intended, and 
> before would have also applied backpressure through RPC too, because 
> SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
> default 10), but Netty may be able to queue up a lot more, in comparison, 

[jira] [Commented] (HBASE-27112) Investigate Netty resource usage limits

2022-06-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555322#comment-17555322
 ] 

Viraj Jasani commented on HBASE-27112:
--

Sounds good.

> Investigate Netty resource usage limits
> ---
>
> Key: HBASE-27112
> URL: https://issues.apache.org/jira/browse/HBASE-27112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.5.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> We leave Netty level resource limits unbounded. The number of threads to use 
> for the event loop is default 0 (unbounded). The default for 
> io.netty.eventLoop.maxPendingTasks is INT_MAX. 
> We don't do that for our own RPC handlers. We have a notion of maximum 
> handler pool size, with a default of 30, typically raised in production by 
> the user. We constrain the depth of the request queue in multiple ways... 
> limits on the number of queued calls, limits on the total size of call data that 
> can be queued (to avoid memory usage overrun), CoDel conditioning of the call 
> queues if it is enabled, and so on.
> Under load, can we pile up an excess of pending request state, such as direct 
> buffers containing request bytes, at the netty layer because of downstream 
> resource limits? Those limits will act as a bottleneck, as intended, and 
> before would have also applied backpressure through RPC too, because 
> SimpleRpcServer had thread limits ("hbase.ipc.server.read.threadpool.size", 
> default 10), but Netty may be able to queue up a lot more, in comparison, 
> because Netty has been optimized to prefer concurrency.
> Consider the hbase.netty.eventloop.rpcserver.thread.count default. It is 0 
> (unbounded). I don't know what it can actually get up to in production, 
> because we lack the metric, but there are diminishing returns when threads > 
> cores so a reasonable default here could be 
> Runtime.getRuntime().availableProcessors() instead of unbounded?
> maxPendingTasks probably should not be INT_MAX, but that may matter less.
> The tasks here are:
> - Instrument netty level resources to understand better actual resource 
> allocations under load. Investigate what we need to plug in where to gain 
> visibility. 
> - Where instrumentation designed for this issue can be implemented as low 
> overhead metrics, consider formally adding them as a metric. 
> - Based on the findings from this instrumentation, consider and implement 
> next steps. The goal would be to limit concurrency at the Netty layer in such 
> a way that performance is still good, and under load we don't balloon 
> resource usage at the Netty layer.
> If the instrumentation and experimental results indicate no changes are 
> necessary, we can close this as Not A Problem or WontFix. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (HBASE-27117) Update the method comments for RegionServerAccounting

2022-06-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-27117.
--
Fix Version/s: 2.5.0
   3.0.0-alpha-3
   2.4.13
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update the method comments for RegionServerAccounting
> -
>
> Key: HBASE-27117
> URL: https://issues.apache.org/jira/browse/HBASE-27117
> Project: HBase
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.13
>
>
> After HBASE-15787, the return value types of 
> RegionServerAccounting#isAboveHighWaterMark and 
> RegionServerAccounting#isAboveLowWaterMark are no longer boolean.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27141) Upgrade hbase-thirdparty dependency to 4.1.1

2022-06-20 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17556529#comment-17556529
 ] 

Viraj Jasani commented on HBASE-27141:
--

Thanks [~zhangduo] for this work and for including 2.4.13 as well.

FYI [~apurtell] 

> Upgrade hbase-thirdparty dependency to 4.1.1
> 
>
> Key: HBASE-27141
> URL: https://issues.apache.org/jira/browse/HBASE-27141
> Project: HBase
>  Issue Type: Task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.5.0, 2.4.13, 3.0.0-alpha-4
>
>
> So we can upgrade jackson to 2.13.3



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27141) Upgrade hbase-thirdparty dependency to 4.1.1

2022-06-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-27141:
-
Priority: Critical  (was: Major)

> Upgrade hbase-thirdparty dependency to 4.1.1
> 
>
> Key: HBASE-27141
> URL: https://issues.apache.org/jira/browse/HBASE-27141
> Project: HBase
>  Issue Type: Task
>  Components: dependencies, security, thirdparty
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.13, 3.0.0-alpha-4
>
>
> So we can upgrade jackson to 2.13.3



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (HBASE-27098) Fix link for field comments

2022-06-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-27098.
--
Fix Version/s: 2.5.0
   3.0.0-alpha-4
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix link for field comments
> ---
>
> Key: HBASE-27098
> URL: https://issues.apache.org/jira/browse/HBASE-27098
> Project: HBase
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> Fix link for field `REJECT_BATCH_ROWS_OVER_THRESHOLD` comments.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (HBASE-27150) TestMultiRespectsLimits consistently failing

2022-06-22 Thread Viraj Jasani (Jira)
Viraj Jasani created HBASE-27150:


 Summary: TestMultiRespectsLimits consistently failing
 Key: HBASE-27150
 URL: https://issues.apache.org/jira/browse/HBASE-27150
 Project: HBase
  Issue Type: Test
Affects Versions: 2.4.12
Reporter: Viraj Jasani
 Fix For: 2.5.0, 2.4.13, 3.0.0-alpha-4


TestMultiRespectsLimits#testBlockMultiLimits is consistently failing:
{code:java}
Error Message: exceptions (0) should be greater than 0
Stacktrace: java.lang.AssertionError: exceptions (0) should be greater than 0
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at 
org.apache.hadoop.hbase.test.MetricsAssertHelperImpl.assertCounterGt(MetricsAssertHelperImpl.java:191)
at 
org.apache.hadoop.hbase.client.TestMultiRespectsLimits.testBlockMultiLimits(TestMultiRespectsLimits.java:185)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 {code}
Reports:

[https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3377/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]

[https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3378/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]

[https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3376/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27151) TestMultiRespectsLimits.testBlockMultiLimits repeatable failure

2022-06-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17558177#comment-17558177
 ] 

Viraj Jasani commented on HBASE-27151:
--

Thanks [~apurtell], let me mark HBASE-27150 as duplicate :)

> TestMultiRespectsLimits.testBlockMultiLimits repeatable failure
> ---
>
> Key: HBASE-27151
> URL: https://issues.apache.org/jira/browse/HBASE-27151
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.4.12
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Fix For: 2.4.13
>
>
> [ERROR] 
> org.apache.hadoop.hbase.client.TestMultiRespectsLimits.testBlockMultiLimits  
> Time elapsed: 1.414 s  <<< FAILURE!
> java.lang.AssertionError: exceptions (0) should be greater than 0
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> org.apache.hadoop.hbase.test.MetricsAssertHelperImpl.assertCounterGt(MetricsAssertHelperImpl.java:191)
>   at 
> org.apache.hadoop.hbase.client.TestMultiRespectsLimits.testBlockMultiLimits(TestMultiRespectsLimits.java:185)
> A git bisect identified HBASE-26856 as the cause.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-27150) TestMultiRespectsLimits consistently failing

2022-06-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17558178#comment-17558178
 ] 

Viraj Jasani commented on HBASE-27150:
--

Resolved by HBASE-27151

> TestMultiRespectsLimits consistently failing
> 
>
> Key: HBASE-27150
> URL: https://issues.apache.org/jira/browse/HBASE-27150
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.4.12
>Reporter: Viraj Jasani
>Priority: Major
> Fix For: 2.5.0, 2.4.13, 3.0.0-alpha-4
>
>
> TestMultiRespectsLimits#testBlockMultiLimits is consistently failing:
> {code:java}
> Error Messageexceptions (0) should be greater than 
> 0Stacktracejava.lang.AssertionError: exceptions (0) should be greater than 0
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> org.apache.hadoop.hbase.test.MetricsAssertHelperImpl.assertCounterGt(MetricsAssertHelperImpl.java:191)
>   at 
> org.apache.hadoop.hbase.client.TestMultiRespectsLimits.testBlockMultiLimits(TestMultiRespectsLimits.java:185)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  {code}
> Reports:
> [https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3377/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]
> [https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3378/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]
> [https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3376/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (HBASE-27150) TestMultiRespectsLimits consistently failing

2022-06-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-27150.
--
Fix Version/s: (was: 2.5.0)
   (was: 2.4.13)
   (was: 3.0.0-alpha-4)
   Resolution: Duplicate

> TestMultiRespectsLimits consistently failing
> 
>
> Key: HBASE-27150
> URL: https://issues.apache.org/jira/browse/HBASE-27150
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.4.12
>Reporter: Viraj Jasani
>Priority: Major
>
> TestMultiRespectsLimits#testBlockMultiLimits is consistently failing:
> {code:java}
> Error Messageexceptions (0) should be greater than 
> 0Stacktracejava.lang.AssertionError: exceptions (0) should be greater than 0
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> org.apache.hadoop.hbase.test.MetricsAssertHelperImpl.assertCounterGt(MetricsAssertHelperImpl.java:191)
>   at 
> org.apache.hadoop.hbase.client.TestMultiRespectsLimits.testBlockMultiLimits(TestMultiRespectsLimits.java:185)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  {code}
> Reports:
> [https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3377/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]
> [https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3378/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]
> [https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/branch-2.4/3376/testReport/junit/org.apache.hadoop.hbase.client/TestMultiRespectsLimits/testBlockMultiLimits/]
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-12 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22728:
-
Attachment: HBASE-22728.branch-1.16.patch

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-12 Thread Viraj Jasani (JIRA)
Viraj Jasani created HBASE-22838:


 Summary: assembly:single failure: user id or group id 'x' is 
too big
 Key: HBASE-22838
 URL: https://issues.apache.org/jira/browse/HBASE-22838
 Project: HBase
  Issue Type: Bug
Affects Versions: 3.0.0, 1.5.0, 2.3.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


 

tarball build with command fails with user id(mac) or group id(ubuntu) too big 
error:
{code:java}
$ mvn clean install package assembly:single -DskipTests



[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
project hbase-assembly: Execution default-cli of goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
'' is too big ( > 2097151 ). -> [Help 1]

[ERROR]

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR]

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

[ERROR]

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn  -rf :hbase-assembly
{code}
To avoid this error and to get better features for tarball build, we should 
upgrade tarLongFileMode from gnu to posix: MPOM-132
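
For illustration, a minimal sketch of the kind of change implied here (the actual fix may instead inherit the setting from the parent pom, which is what MPOM-132 tracks; the plugin declaration below is only an assumed shape, not the real hbase-assembly/pom.xml):
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <!-- posix (PAX) tar entries support uid/gid values above 2097151
         as well as long file names, unlike the gnu mode -->
    <tarLongFileMode>posix</tarLongFileMode>
  </configuration>
</plugin>
{code}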

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-12 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22838:
-
Fix Version/s: 2.3.0
   1.5.0
   3.0.0
   Status: Patch Available  (was: In Progress)

> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-12 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22838:
-
Description: 
 

tarball build with assembly:single command fails with user id(mac) or group 
id(ubuntu) too big error:
{code:java}
$ mvn clean install package assembly:single -DskipTests



[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
project hbase-assembly: Execution default-cli of goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
'' is too big ( > 2097151 ). -> [Help 1]

[ERROR]

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR]

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

[ERROR]

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn  -rf :hbase-assembly
{code}
To avoid this error and to get better features for tarball build, we should 
upgrade tarLongFileMode from gnu to posix: MPOM-132

 

  was:
 

tarball build with command fails with user id(mac) or group id(ubuntu) too big 
error:
{code:java}
$ mvn clean install package assembly:single -DskipTests



[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
project hbase-assembly: Execution default-cli of goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
'' is too big ( > 2097151 ). -> [Help 1]

[ERROR]

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR]

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

[ERROR]

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn  -rf :hbase-assembly
{code}
To avoid this error and to get better features for tarball build, we should 
upgrade tarLongFileMode from gnu to posix: MPOM-132

 


> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work started] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-12 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22838 started by Viraj Jasani.

> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-12 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22838:
-
Affects Version/s: (was: 1.5.0)

> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-12 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22838:
-
Fix Version/s: (was: 1.5.0)

> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-13 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22838:
-
Description: 
 

tarball build with assembly:single command fails with user id(mac) or group 
id(ubuntu) too big error:
{code:java}
$ mvn clean install package assembly:single -DskipTests



[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
project hbase-assembly: Execution default-cli of goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
'' is too big ( > 2097151 ). -> [Help 1]

[ERROR]

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR]

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

[ERROR]

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn  -rf :hbase-assembly
{code}
To avoid this error and to get better features for tarball build, we should 
upgrade tarLongFileMode from gnu to posix: MPOM-132

This works for assembly plugin >= 2.5.0: MASSEMBLY-728

 

  was:
 

tarball build with assembly:single command fails with user id(mac) or group 
id(ubuntu) too big error:
{code:java}
$ mvn clean install package assembly:single -DskipTests



[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
project hbase-assembly: Execution default-cli of goal 
org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
'' is too big ( > 2097151 ). -> [Help 1]

[ERROR]

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR]

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

[ERROR]

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn  -rf :hbase-assembly
{code}
To avoid this error and to get better features for tarball build, we should 
upgrade tarLongFileMode from gnu to posix: MPOM-132

 


> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.2.1, 2.1.6, 1.3.6, 1.4.11
>
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
> This works for assembly plugin >= 2.5.0: MASSEMBLY-728
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22838) assembly:single failure: user id or group id 'xxxxx' is too big

2019-08-13 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22838:
-
Affects Version/s: 1.5.0

> assembly:single failure: user id or group id 'x' is too big
> ---
>
> Key: HBASE-22838
> URL: https://issues.apache.org/jira/browse/HBASE-22838
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.0.6, 2.2.1, 2.1.6, 1.3.6, 1.4.11
>
>
>  
> tarball build with assembly:single command fails with user id(mac) or group 
> id(ubuntu) too big error:
> {code:java}
> $ mvn clean install package assembly:single -DskipTests
> 
> 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (default-cli) on 
> project hbase-assembly: Execution default-cli of goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single failed: user id 
> '' is too big ( > 2097151 ). -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hbase-assembly
> {code}
> To avoid this error and to get better features for tarball build, we should 
> upgrade tarLongFileMode from gnu to posix: MPOM-132
> This works for assembly plugin >= 2.5.0: MASSEMBLY-728
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-13 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904381#comment-16904381
 ] 

Viraj Jasani edited comment on HBASE-22728 at 8/13/19 7:00 PM:
---

'test' scope would not work for hbase-server since it exposes the jackson 
dependency in source code. Maybe we can move to fasterxml.jackson for 
hbase-server too?

Eventually we can backport HBASE-20587 to branch-1, but as part of this Jira, 
since we are moving to fasterxml.jackson for hbase-rest, maybe we can stick to 
it for hbase-server too. Let me give it a shot and see if everything goes well, 
including unpacking the tarball and bringing up HMaster.


was (Author: vjasani):
'test' would not work for hbase-server since it has exposure of jackson 
dependency in source code. May be we can move to fasterxml.jackson for 
hbase-server too and keep it at 'compile' scope(safer latest version)? 

Eventually we can backport HBASE-20587 to branch-1 but as part of this Jira, 
since we are moving to fasterxml.jackson for hbase-rest, may be we can stick to 
it for hbase-server too. Let me give it a shot and see if everything goes good 
including unpacking tarball and bringing up HMaster.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-13 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906585#comment-16906585
 ] 

Viraj Jasani commented on HBASE-22728:
--

With the current patch, this is what the compile-scope dependencies look like 
(every other module has test/provided scope):

 
{code:java}
[INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-common ---
[INFO] org.apache.hbase:hbase-common:jar:1.5.0-SNAPSHOT
[INFO] +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:provided
[INFO] |  \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:provided
[INFO] +- 
com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.9.9:compile
[INFO] |  +- com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.9.9:compile
[INFO] |  \- 
com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.9.9:compile
[INFO] +- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.9:compile
[INFO] +- com.fasterxml.jackson.core:jackson-core:jar:2.9.9:compile
[INFO] \- com.fasterxml.jackson.core:jackson-databind:jar:2.9.9.2:compile
[INFO] 
{code}
 
{code:java}
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-rest ---
[INFO] org.apache.hbase:hbase-rest:jar:1.5.0-SNAPSHOT
[INFO] +- 
com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.9.9:compile
[INFO] |  +- com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.9.9:compile
[INFO] |  \- 
com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.9.9:compile
[INFO] +- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.9:compile
[INFO] +- com.fasterxml.jackson.core:jackson-core:jar:2.9.9:compile
[INFO] +- com.fasterxml.jackson.core:jackson-databind:jar:2.9.9.2:compile
[INFO] \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:test
[INFO]\- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:test
[INFO] {code}
{code:java}
[INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-shell ---
[INFO] org.apache.hbase:hbase-shell:jar:1.5.0-SNAPSHOT
[INFO] +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-assembly ---
[INFO] org.apache.hbase:hbase-assembly:pom:1.5.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-rest:jar:1.5.0-SNAPSHOT:compile
[INFO] |  +- 
com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.9.9:compile
[INFO] |  |  +- com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.9.9:compile
[INFO] |  |  \- 
com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.9.9:compile
[INFO] |  +- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.9:compile
[INFO] |  +- com.fasterxml.jackson.core:jackson-core:jar:2.9.9:compile
[INFO] |  \- com.fasterxml.jackson.core:jackson-databind:jar:2.9.9.2:compile
[INFO] \- org.apache.hbase:hbase-shell:jar:1.5.0-SNAPSHOT:compile
[INFO]\- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
{code}
hbase-shell needs jackson-core-asl:1.9.13 until we upgrade JRuby, as per your 
recent suggestion. However, jackson-core-asl is not vulnerable; 
jackson-mapper-asl:1.9.13 is.

Everywhere else in the code, Jackson1 is replaced by Jackson2 (I think it is 
better we do this now). Tested HMaster start, REST start, and the shell from the 
tarball. Unit tests look good. Requesting your review on the 016 patch.
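
For reference, a rough sketch of the hbase-common dependency entries implied by 
the tree above (versions and layout in the real pom.xml are managed via 
properties and may differ; this is not the actual patch):
{code:xml}
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.9.9.2</version>
</dependency>
<dependency>
  <!-- kept at provided scope, matching the tree above -->
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.9.13</version>
  <scope>provided</scope>
</dependency>
{code}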

 

 

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-14 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22728:
-
Attachment: HBASE-22728.branch-1.18.patch

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-14 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907025#comment-16907025
 ] 

Viraj Jasani commented on HBASE-22728:
--

Thanks. Sure it should be good enough to call out in release note. With latest 
patch attached, we are including vulnerable mapper only in hbase-common and 
hbase-assembly.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-14 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907025#comment-16907025
 ] 

Viraj Jasani edited comment on HBASE-22728 at 8/14/19 8:42 AM:
---

Thanks. Sure it should be good enough to call out in release note. With latest 
patch attached, we are including vulnerable mapper in hbase-common and 
hbase-assembly.


was (Author: vjasani):
Thanks. Sure it should be good enough to call out in release note. With latest 
patch attached, we are including vulnerable mapper only in hbase-common and 
hbase-assembly.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907889#comment-16907889
 ] 

Viraj Jasani commented on HBASE-22728:
--

Oh yes, I just saw one project having a hbase-common dependency. Hence, 
hbase-common should have provided scope for Jackson1.

The only issue is that without including the dependencies at compile scope in 
hbase-common, they are not getting included as jars in the assembly:single 
tarball. Let me see what we can do here; maybe some changes in hbase-assembly 
could help.

Initially I tried including the Jackson1 mapper at compile scope only in 
hbase-assembly, but that didn't even get the jackson*.jar included in lib of 
the extracted tarball.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907889#comment-16907889
 ] 

Viraj Jasani edited comment on HBASE-22728 at 8/15/19 8:03 AM:
---

Oh yes, I just saw one project having hbase-common dependency. Hence, 
hbase-common should have provided scope for Jackson1.

The only issue is without including dependencies at compile scope in 
hbase-common, they are not getting included as jar with assembly:single tar. 
Let me see what we can do here, may be some changes in hbase-assembly could 
help.

Initially I tried including Jackson1 mapper as compile scope only in 
hbase-assembly(everywhere else had provided), but that didn't even include jar 
in lib of extracted tarball.


was (Author: vjasani):
Oh yes, I just saw one project having hbase-common dependency. Hence, 
hbase-common should have provided scope for Jackson1.

The only issue is without including dependencies at compile scope in 
hbase-common, they are not getting included as jar with assembly:single tar. 
Let me see what we can do here, may be some changes in hbase-assembly could 
help.

Initially I tried including Jackson1 mapper as compile scope only in 
hbase-assembly, but that didn't even have jackson*jar included in lib of 
extracted tarball.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907926#comment-16907926
 ] 

Viraj Jasani commented on HBASE-22728:
--

Just a small summary so far:
 # Replaced all vulnerable mapper dependency(jackson-mapper-asl) with Jackson2 
mapper(jackson-databind) in all modules.
 # Included Jackson2 at compile scope in hbase-rest.
 # hbase-shell requires dependency of jackson-core-asl. To tackle this, we 
might need to upgrade JRuby eventually. For now, it's fine to include 
jackson-core-asl(not vulnerable).
 # Since HBase code no longer needs jackson-mapper-asl( #1), we can live 
without it, but once we generate tar and extract it, we get these warnings 
since Hadoop requires this dependency: 
{code:java}
2019-08-13 16:32:34,147 WARN  [main] fs.FileSystem: Cannot load filesystem: 
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem could not be instantiated
2019-08-13 16:32:34,147 WARN  [main] fs.FileSystem: 
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper{code}

 # Without including jackson-mapper-asl / Jackson2 dependencies as 'compile' 
scope in hbase-common, we are not getting corresponding jars in lib folder of 
extracted tarball. Need to resolve this issue since we should not include 
jackson-mapper-asl with 'compile' scope in hbase-common/hbase-client/dependent 
hbase-* of client.

 

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907926#comment-16907926
 ] 

Viraj Jasani edited comment on HBASE-22728 at 8/15/19 10:49 AM:


Just a small summary so far:
 * Replaced all vulnerable mapper dependency(jackson-mapper-asl) with Jackson2 
mapper(jackson-databind) in all modules.
 * Included Jackson2 at compile scope in hbase-rest.
 * hbase-shell requires dependency of jackson-core-asl. To tackle this, we 
might need to upgrade JRuby eventually. For now, it's fine to include 
jackson-core-asl(not vulnerable).
 * Since HBase branch-1 no longer needs jackson-mapper-asl(as per #1), we can 
live without it, but once we generate tar and extract it, we get these warnings 
since Hadoop requires this dependency: 
{code:java}
2019-08-13 16:32:34,147 WARN  [main] fs.FileSystem: Cannot load filesystem: 
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem could not be instantiated
2019-08-13 16:32:34,147 WARN  [main] fs.FileSystem: 
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper{code}

 * Without including jackson-mapper-asl / Jackson2 dependencies as 'compile' 
scope in hbase-common, we are not getting corresponding jars in lib folder of 
extracted tarball. Need to resolve this issue since we should not include 
jackson-mapper-asl with 'compile' scope in hbase-common/hbase-client/dependent 
hbase-* of client.

 


was (Author: vjasani):
Just a small summary so far:
 # Replaced all vulnerable mapper dependency(jackson-mapper-asl) with Jackson2 
mapper(jackson-databind) in all modules.
 # Included Jackson2 at compile scope in hbase-rest.
 # hbase-shell requires dependency of jackson-core-asl. To tackle this, we 
might need to upgrade JRuby eventually. For now, it's fine to include 
jackson-core-asl(not vulnerable).
 # Since HBase code no longer needs jackson-mapper-asl( #1), we can live 
without it, but once we generate tar and extract it, we get these warnings 
since Hadoop requires this dependency: 
{code:java}
2019-08-13 16:32:34,147 WARN  [main] fs.FileSystem: Cannot load filesystem: 
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem could not be instantiated
2019-08-13 16:32:34,147 WARN  [main] fs.FileSystem: 
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper{code}

 # Without including jackson-mapper-asl / Jackson2 dependencies as 'compile' 
scope in hbase-common, we are not getting corresponding jars in lib folder of 
extracted tarball. Need to resolve this issue since we should not include 
jackson-mapper-asl with 'compile' scope in hbase-common/hbase-client/dependent 
hbase-* of client.

 

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22728:
-
Attachment: HBASE-22728.branch-1.19.patch

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908020#comment-16908020
 ] 

Viraj Jasani commented on HBASE-22728:
--

Ok, so we need to upgrade the maven-assembly-plugin too :)

With the older version, jars not included at 'compile' scope in hbase-common are 
not getting included in the tarball by the assembly:single goal.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908146#comment-16908146
 ] 

Viraj Jasani commented on HBASE-22728:
--

{quote}this is surprising. Can we not just include the needed library as a 
dependency of the hbase-assembly module?
{quote}
So far, this was not happening but after upgrading assembly plugin from 2.5 to 
3.1.1, now we can include the needed lib as hbase-assembly dependency.
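
To make that concrete, a rough sketch of the two pieces described in this 
thread, assuming the plugin version is declared for the assembly build and the 
mapper jar is pulled into lib/ by making it a direct hbase-assembly dependency; 
the actual patch may place these differently:
{code:xml}
<!-- plugin upgrade so dependencies of hbase-assembly itself land in the tarball -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>3.1.1</version>
</plugin>

<!-- direct dependency so jackson-mapper-asl ends up in lib/ for Hadoop's use -->
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.9.13</version>
</dependency>
{code}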

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908146#comment-16908146
 ] 

Viraj Jasani edited comment on HBASE-22728 at 8/15/19 3:02 PM:
---

{quote}this is surprising. Can we not just include the needed library as a 
dependency of the hbase-assembly module?
{quote}
So far, this was not happening but after upgrading assembly plugin from 2.5 to 
3.1.1, now we can include the needed lib as hbase-assembly dependency.

 

master branch has assembly plugin version 3.0.0


was (Author: vjasani):
{quote}this is surprising. Can we not just include the needed library as a 
dependency of the hbase-assembly module?
{quote}
So far, this was not happening but after upgrading assembly plugin from 2.5 to 
3.1.1, now we can include the needed lib as hbase-assembly dependency.

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-15 Thread Viraj Jasani (JIRA)
Viraj Jasani created HBASE-22863:


 Summary: Avoid Jackson versions and dependencies with known CVEs
 Key: HBASE-22863
 URL: https://issues.apache.org/jira/browse/HBASE-22863
 Project: HBase
  Issue Type: Bug
  Components: dependencies
Affects Versions: 3.0.0, 2.3.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not used in HBase code anymore; hence, we should include 
it at test scope if required, but definitely exclude it from the corresponding 
Hadoop dependencies.

Moreover, the fasterxml.jackson mapper is used only in hbase-rest tests but we 
pull it in with 'compile' scope. Maybe we can include it at 'test' scope only 
and clean up the Jackson dependencies.
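
A hypothetical sketch of the kind of cleanup being proposed (the exact Hadoop 
artifacts carrying the transitive jackson-mapper-asl vary by module, see the 
trees above, so the artifact chosen below is only illustrative):
{code:xml}
<!-- exclude the vulnerable Jackson1 mapper pulled in transitively by Hadoop -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- in hbase-rest, narrow the fasterxml mapper to test scope -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <scope>test</scope>
</dependency>
{code}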



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-15 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Description: 
Part of forwardport from branch-1 Jira: 

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being used in HBase code anymore and hence, we should 
include it at test scope if required but definitely exclude it from 
corresponding Hadoop dependencies.

Moreover, the fasterxml.jackson mapper is used only in hbase-rest tests but we pull 
it in with 'compile' scope. Maybe we can include it at 'test' scope only and 
clean up the Jackson dependencies.

  was:
Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:ja

[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-15 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Description: 
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being used in HBase code anymore and hence, we should 
include it at test scope if required but definitely exclude it from 
corresponding Hadoop dependencies.

Moreover, the fasterxml.jackson mapper is used only in hbase-rest tests but we pull 
it in with 'compile' scope. Maybe we can include it at 'test' scope only and 
clean up the Jackson dependencies.

  was:
Part of forwardport from branch-1 Jira: 

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.

[jira] [Commented] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-15 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908789#comment-16908789
 ] 

Viraj Jasani commented on HBASE-22728:
--

If there is a possibility of a 1.3 release, let me work on the backport to 1.3?

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5, 1.3.6
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22728:
-
Release Note: 
1. Stopped using Jackson1(org.codehaus.jackson*) in HBase code base. 
2. Upgraded to Jackson2(com.fasterxml.jackson*) instead. 
3. Stopped exposing vulnerable Jackson1 dependencies 
(org.codehaus.jackson:jackson-mapper-asl:1.9.13) so that downstreamers would 
not pull it in from HBase.
4. However, since Hadoop requires this dependency, put the vulnerable jackson at 
compile scope in the hbase-assembly module so that the generated tarball contains this 
mapper jar in lib (a sketch follows below this note). Still, downstream applications 
can't pull in Jackson1 from HBase.
5. Upgraded maven assembly plugin to 3.1.1.
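
A rough sketch of what point 4 above could look like in the hbase-assembly pom 
(illustrative only, assuming the usual assembly setup where compile-scope dependencies 
are copied into lib/; the committed change may differ):
{code:xml}
<!-- Sketch: declare the mapper directly in hbase-assembly so it ends up in lib/ of the
     tarball, while the other modules keep excluding it from their Hadoop dependencies -->
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.9.13</version>
  <scope>compile</scope>
</dependency>
{code}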

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5, 1.3.6
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-16 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908789#comment-16908789
 ] 

Viraj Jasani edited comment on HBASE-22728 at 8/16/19 7:11 AM:
---

If there is a possibility of a 1.3 release, let me work on the backport to branch-1.3?


was (Author: vjasani):
If there is a possibility of a 1.3 release, let me work on the backport to 1.3?

> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5, 1.3.6
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22728) Upgrade jackson dependencies in branch-1

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22728:
-
Release Note: 
1. Stopped using Jackson1(org.codehaus.jackson*) in HBase code base. 
2. Upgraded to Jackson2(com.fasterxml.jackson*) instead. 
3. Stopped exposing vulnerable Jackson1 dependencies 
(org.codehaus.jackson:jackson-mapper-asl:1.9.13) so that downstreamers would 
not pull it in from HBase.
4. However, since Hadoop requires this dependency, put the vulnerable jackson at 
compile scope in the hbase-assembly module so that the HBase tarball contains this 
mapper jar in lib. Still, downstream applications can't pull in Jackson1 from 
HBase.
5. Upgraded maven assembly plugin to 3.1.1.

  was:
1. Stopped using Jackson1(org.codehaus.jackson*) in HBase code base. 
2. Upgraded to Jackson2(com.fasterxml.jackson*) instead. 
3. Stopped exposing vulnerable Jackson1 dependencies 
(org.codehaus.jackson:jackson-mapper-asl:1.9.13) so that downstreamers would 
not pull it in from HBase.
4. However, since Hadoop requires this dependency, put the vulnerable jackson at 
compile scope in the hbase-assembly module so that the generated tarball contains this 
mapper jar in lib. Still, downstream applications can't pull in Jackson1 from 
HBase.
5. Upgraded maven assembly plugin to 3.1.1.


> Upgrade jackson dependencies in branch-1
> 
>
> Key: HBASE-22728
> URL: https://issues.apache.org/jira/browse/HBASE-22728
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.10, 1.3.5, 1.3.6
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 1.5.0, 1.4.11
>
> Attachments: HBASE-22728-addendum.patch, HBASE-22728-addendum.patch, 
> HBASE-22728.branch-1.01.patch, HBASE-22728.branch-1.02.patch, 
> HBASE-22728.branch-1.04.patch, HBASE-22728.branch-1.06.patch, 
> HBASE-22728.branch-1.10.patch, HBASE-22728.branch-1.11.patch, 
> HBASE-22728.branch-1.12.patch, HBASE-22728.branch-1.14.patch, 
> HBASE-22728.branch-1.15.patch, HBASE-22728.branch-1.16.patch, 
> HBASE-22728.branch-1.18.patch, HBASE-22728.branch-1.19.patch
>
>
> Avoid Jackson versions and dependencies with known CVEs



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22866) Multiple slf4j-log4j provider versions included in binary package (branch-1)

2019-08-16 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908810#comment-16908810
 ] 

Viraj Jasani commented on HBASE-22866:
--

slf4j-log4j12-1.7.10 is coming from hadoop-common:
{code:java}
[INFO] |  \- org.apache.hadoop:hadoop-common:jar:2.8.5:compile
[INFO] | \- org.slf4j:slf4j-log4j12:jar:1.7.10:compile
{code}
slf4j-log4j12-1.6.1 is coming from zookeeper:
{code:java}
[INFO] \- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
[INFO]\- org.slf4j:slf4j-log4j12:jar:1.6.1:compile
{code}
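
One possible way to end up with a single binding, shown only as a sketch (the actual 
fix may instead pin the version centrally): exclude the stale 1.6.1 binding that 
zookeeper drags in, so that only one slf4j-log4j12 jar remains on the classpath.
{code:xml}
<!-- Sketch: drop the transitive slf4j-log4j12 1.6.1 coming from zookeeper -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}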

> Multiple slf4j-log4j provider versions included in binary package (branch-1)
> 
>
> Key: HBASE-22866
> URL: https://issues.apache.org/jira/browse/HBASE-22866
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 1.5.0
>
>
> Examining binary assembly results there are multiple versions of slf4j-log4j 
> in lib/
> {noformat}
> slf4j-api-1.7.7.jar
> slf4j-log4j12-1.6.1.jar
> slf4j-log4j12-1.7.10.jar
> slf4j-log4j12-1.7.7.jar
> {noformat}
> We aren't managing slf4j-log4j12 dependency versions correctly, somehow. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Description: 
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being used in HBase code anymore and hence, we should 
include it at test scope if required but definitely exclude it from 
corresponding Hadoop dependencies.

 

  was:
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being used in HBase code anymore and hence, we should 
in

[jira] [Commented] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908839#comment-16908839
 ] 

Viraj Jasani commented on HBASE-22863:
--

Thanks [~Apache9]

Just saw that fasterxml.jackson is used in a few places in hbase-rest. Updated the 
description accordingly; this Jira will be all about removing codehaus.jackson 
dependencies from master and branch-2.
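
As an aside, a hedged sketch of how such a removal could be protected against 
regressions, using the standard maven-enforcer-plugin bannedDependencies rule (purely 
illustrative; not claiming this is part of the patch):
{code:xml}
<!-- Sketch: fail the build if any org.codehaus.jackson artifact creeps back in -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>banned-jackson1</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <exclude>org.codehaus.jackson:*</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}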

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency 
> from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> jackson-mapper-asl is not being used in HBase code anymore and hence, we 
> should include it at test scope if required but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Description: 
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in some vulnerable jackson dependencies (e.g. 
jackson-mapper-asl:1.9.13) from Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being used in HBase code anymore and hence, we should 
include it at test scope if required but definitely exclude it from 
corresponding Hadoop dependencies.

 

  was:
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in vulnerable jackson-mapper-asl:1.9.13 dependency from 
Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being used in HBase code anymore an

[jira] [Work started] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22863 started by Viraj Jasani.

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: HBASE-22863.master.000.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> jackson-mapper-asl is not being used in HBase code anymore and hence, we 
> should include it at test scope if required but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Fix Version/s: 2.3.0
   3.0.0
   Attachment: HBASE-22863.master.000.patch
   Status: Patch Available  (was: In Progress)

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.master.000.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> jackson-mapper-asl is not being used in HBase code anymore and hence, we 
> should include it at test scope if required but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908968#comment-16908968
 ] 

Viraj Jasani commented on HBASE-22863:
--

Uploaded the patch. Please review.

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.master.000.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> jackson-mapper-asl is not being used in HBase code anymore and hence, we 
> should include it at test scope if required but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Description: 
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in some vulnerable jackson dependencies (e.g. 
jackson-mapper-asl:1.9.13) from Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
Jackson1 is not being used in HBase code anymore and hence, we should include 
it only at test scope if required by Hadoop but definitely exclude it from 
corresponding Hadoop dependencies.

 

  was:
Partly forwardport from branch-1 Jira: HBASE-22728

Even though master and branch-2 have moved away from Jackson1 some time back, 
HBase is still pulling in some vulnerable jackson dependencies (e.g. 
jackson-mapper-asl:1.9.13) from Hadoop:

 
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
---
[INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
[INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
[INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
[INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
[INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
[INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
{code:java}
[INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
hbase-shaded-testing-util ---
[INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
[INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
[INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
[INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
{code:java}
[INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
[INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
[INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
[INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
[INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
[INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
[INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
{code}
jackson-mapper-asl is not being us

[jira] [Commented] (HBASE-22866) Multiple slf4j-log4j provider versions included in binary package (branch-1)

2019-08-16 Thread Viraj Jasani (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909582#comment-16909582
 ] 

Viraj Jasani commented on HBASE-22866:
--

Oh yes, the top-level parent pom has a common version defined for slf4j-api but 
not for slf4j-log4j12.

Also, let's take this opportunity to upgrade the slf4j version to 1.7.25 (similar to 
master and branch-2). It should not be a big change; let me test and upload the 
patch quickly.
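
A minimal sketch of that parent-pom change (assuming a shared slf4j.version property; 
the property name here is illustrative, not necessarily what the patch uses):
{code:xml}
<!-- Sketch: one slf4j version for both the API and the log4j binding -->
<properties>
  <slf4j.version>1.7.25</slf4j.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}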

> Multiple slf4j-log4j provider versions included in binary package (branch-1)
> 
>
> Key: HBASE-22866
> URL: https://issues.apache.org/jira/browse/HBASE-22866
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 1.5.0
>
>
> Examining binary assembly results there are multiple versions of slf4j-log4j 
> in lib/
> {noformat}
> slf4j-api-1.7.7.jar
> slf4j-log4j12-1.6.1.jar
> slf4j-log4j12-1.7.10.jar
> slf4j-log4j12-1.7.7.jar
> {noformat}
> We aren't managing slf4j-log4j12 dependency versions correctly, somehow. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22866) Multiple slf4j-log4j provider versions included in binary package (branch-1)

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22866:
-
Attachment: HBASE-22866.branch-1.000.patch

> Multiple slf4j-log4j provider versions included in binary package (branch-1)
> 
>
> Key: HBASE-22866
> URL: https://issues.apache.org/jira/browse/HBASE-22866
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-22866.branch-1.000.patch
>
>
> Examining binary assembly results there are multiple versions of slf4j-log4j 
> in lib/
> {noformat}
> slf4j-api-1.7.7.jar
> slf4j-log4j12-1.6.1.jar
> slf4j-log4j12-1.7.10.jar
> slf4j-log4j12-1.7.7.jar
> {noformat}
> We aren't managing slf4j-log4j12 dependency versions correctly, somehow. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22866) Multiple slf4j-log4j provider versions included in binary package (branch-1)

2019-08-16 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22866:
-
Status: Patch Available  (was: Open)

> Multiple slf4j-log4j provider versions included in binary package (branch-1)
> 
>
> Key: HBASE-22866
> URL: https://issues.apache.org/jira/browse/HBASE-22866
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-22866.branch-1.000.patch
>
>
> Examining binary assembly results there are multiple versions of slf4j-log4j 
> in lib/
> {noformat}
> slf4j-api-1.7.7.jar
> slf4j-log4j12-1.6.1.jar
> slf4j-log4j12-1.7.10.jar
> slf4j-log4j12-1.7.7.jar
> {noformat}
> We aren't managing slf4j-log4j12 dependency versions correctly, somehow. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22866) Multiple slf4j-log4j provider versions included in binary package (branch-1)

2019-08-17 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22866:
-
Attachment: HBASE-22866.branch-1.000.patch

> Multiple slf4j-log4j provider versions included in binary package (branch-1)
> 
>
> Key: HBASE-22866
> URL: https://issues.apache.org/jira/browse/HBASE-22866
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-22866.branch-1.000.patch, 
> HBASE-22866.branch-1.000.patch
>
>
> Examining binary assembly results there are multiple versions of slf4j-log4j 
> in lib/
> {noformat}
> slf4j-api-1.7.7.jar
> slf4j-log4j12-1.6.1.jar
> slf4j-log4j12-1.7.10.jar
> slf4j-log4j12-1.7.7.jar
> {noformat}
> We aren't managing slf4j-log4j12 dependency versions correctly, somehow. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work started] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-17 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22760 started by Viraj Jasani.

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)
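
For contrast with the proposed shell command, the existing static switch mentioned in 
the description above (from HBASE-22648) is an hbase-site.xml property along these 
lines, and it only takes effect after an HMaster restart (sketch of the property usage, 
not new behaviour):
{code:xml}
<!-- Sketch: statically disable the TTL-based snapshot auto-cleaner; needs an HMaster restart -->
<property>
  <name>hbase.master.cleaner.snapshot.disable</name>
  <value>true</value>
</property>
{code}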



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-17 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Fix Version/s: 1.4.11
   2.2.1
   2.3.0
   1.5.0
   3.0.0
   Status: Patch Available  (was: In Progress)

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-17 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Attachment: HBASE-22760.master.003.patch

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-18 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Attachment: HBASE-22760.master.004.patch

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-18 Thread Viraj Jasani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Attachment: HBASE-22760.master.005.patch

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Attachment: HBASE-22863.master.001.patch

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.master.000.patch, 
> HBASE-22863.master.001.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> Jackson1 is not being used in HBase code anymore and hence, we should include 
> it only at test scope if required by Hadoop but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (HBASE-22866) Multiple slf4j-log4j provider versions included in binary package (branch-1)

2019-08-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HBASE-22866:


Assignee: Viraj Jasani

> Multiple slf4j-log4j provider versions included in binary package (branch-1)
> 
>
> Key: HBASE-22866
> URL: https://issues.apache.org/jira/browse/HBASE-22866
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-22866.branch-1.000.patch, 
> HBASE-22866.branch-1.000.patch
>
>
> Examining binary assembly results there are multiple versions of slf4j-log4j 
> in lib/
> {noformat}
> slf4j-api-1.7.7.jar
> slf4j-log4j12-1.6.1.jar
> slf4j-log4j12-1.7.10.jar
> slf4j-log4j12-1.7.7.jar
> {noformat}
> We aren't managing slf4j-log4j12 dependency versions correctly, somehow. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22822) Re/Un-schedule balancer chore for balance_switch

2019-08-20 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911495#comment-16911495
 ] 

Viraj Jasani commented on HBASE-22822:
--

Based on the discussion on PR, closing the JIRA

> Re/Un-schedule balancer chore for balance_switch
> 
>
> Key: HBASE-22822
> URL: https://issues.apache.org/jira/browse/HBASE-22822
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer, master
>Affects Versions: 3.0.0, 1.5.0, 2.2.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.2
>
>
> balance_switch turns on/off balancer. When it is turned off, we don't remove 
> balancer chore from scheduled chores hence it keeps running only to 
> eventually find out that it is not supposed to perform any action(if balancer 
> was turned off). We can unschedule the chore to prevent the chore() execution 
> and reschedule it when it is turned on by balance_switch.
> This should also facilitate running balancer immediately after triggering 
> balance_switch true, and then chore would continue running as per duration 
> provided in hbase.balancer.period.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22822) Re/Un-schedule balancer chore for balance_switch

2019-08-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22822:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Re/Un-schedule balancer chore for balance_switch
> 
>
> Key: HBASE-22822
> URL: https://issues.apache.org/jira/browse/HBASE-22822
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer, master
>Affects Versions: 3.0.0, 1.5.0, 2.2.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.2
>
>
> balance_switch turns on/off balancer. When it is turned off, we don't remove 
> balancer chore from scheduled chores hence it keeps running only to 
> eventually find out that it is not supposed to perform any action(if balancer 
> was turned off). We can unschedule the chore to prevent the chore() execution 
> and reschedule it when it is turned on by balance_switch.
> This should also facilitate running balancer immediately after triggering 
> balance_switch true, and then chore would continue running as per duration 
> provided in hbase.balancer.period.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Attachment: HBASE-22863.branch-2.000.patch

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.branch-2.000.patch, 
> HBASE-22863.master.000.patch, HBASE-22863.master.001.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> Jackson1 is not being used in HBase code anymore and hence, we should include 
> it only at test scope if required by Hadoop but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-08-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912031#comment-16912031
 ] 

Viraj Jasani commented on HBASE-22002:
--

[~Apache9]

The alter_status command is still using admin.getAlterStatus(TableName): 
[https://github.com/apache/hbase/blob/master/hbase-shell/src/main/ruby/hbase/admin.rb#L653]

Should we provide a new method with Future.get, or bring back the old method?

Thanks
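
For illustration, a minimal sketch (not part of this patch) of what the "new method with Future.get" option could look like from client code, using the async Admin API; rebuilding the descriptor unchanged is purely hypothetical and stands in for whatever schema change the shell would apply:

{code:java}
import java.util.concurrent.Future;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class AlterAndWaitSketch {
  // Instead of polling getAlterStatus, block on the Future returned by the
  // async alter call until the schema change has been applied.
  public static void alterAndWait(Connection conn, TableName table) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      TableDescriptor current = admin.getDescriptor(table);
      // Rebuilding the descriptor unchanged here is only for illustration;
      // a real caller would apply its actual schema change.
      TableDescriptor altered = TableDescriptorBuilder.newBuilder(current).build();
      Future<Void> pending = admin.modifyTableAsync(altered);
      pending.get(); // completes once the alter has been applied
    }
  }

  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection()) {
      alterAndWait(conn, TableName.valueOf("t1"));
    }
  }
}
{code}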

> Remove the deprecated methods in Admin interface
> 
>
> Key: HBASE-22002
> URL: https://issues.apache.org/jira/browse/HBASE-22002
> Project: HBase
>  Issue Type: Task
>  Components: Admin, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22002-test.patch, HBASE-22002-test.patch, 
> HBASE-22002-v1.patch, HBASE-22002-v2.patch, HBASE-22002-v3.patch, 
> HBASE-22002-v4.patch, HBASE-22002-v5.patch, HBASE-22002.patch
>
>
> For API cleanup, and will make the work in HBASE-21718 a little easier.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912343#comment-16912343
 ] 

Viraj Jasani commented on HBASE-22760:
--

There are 2 shell commands to enable/disable/query snapshot auto cleanup 
activity:

1. snapshot_auto_cleanup_switch

2. snapshot_auto_cleanup_enabled

Please review the patch 005.
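
For reference, a minimal Java sketch of driving the same switch programmatically via Admin, assuming the client API backing these shell commands ends up with methods named snapshotCleanupSwitch and isSnapshotCleanupEnabled (names chosen here only to mirror the shell commands above; the final names and signatures depend on the patch):

{code:java}
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SnapshotCleanupSwitchSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Assumed Admin methods mirroring the shell commands above; the exact
      // names and signatures depend on the final patch.
      boolean previouslyEnabled = admin.snapshotCleanupSwitch(false, true); // disable, wait for it
      System.out.println("Auto-cleanup was previously enabled: " + previouslyEnabled);

      // ... run the scheduled snapshot/backup work while TTL-based cleanup is off ...

      admin.snapshotCleanupSwitch(true, true); // re-enable
      System.out.println("Auto-cleanup enabled again: " + admin.isSnapshotCleanupEnabled());
    }
  }
}
{code}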

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912351#comment-16912351
 ] 

Viraj Jasani commented on HBASE-22863:
--

Thanks for the review [~Apache9] [~reidchan]

For branch-2, the patch is attached. Could you please let me know which other 
2.* branches this should be committed to, so that I can create patches for them?

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.branch-2.000.patch, 
> HBASE-22863.master.000.patch, HBASE-22863.master.001.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> Jackson1 is not being used in HBase code anymore and hence, we should include 
> it only at test scope if required by Hadoop but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912362#comment-16912362
 ] 

Viraj Jasani edited comment on HBASE-22863 at 8/21/19 2:49 PM:
---

The only issue I observed was that while applying the master patch to branch-2, 
the exclusions in the parent pom.xml got jumbled up and went under some other 
dependency, e.g. instead of the intended dependency, the exclusions ended up on 
a different one.

Hence, I had to generate a separate patch for branch-2, but I believe branch-2.1 
and branch-2.2 should mostly be fine. Anyway, I will reverify.

Thanks


was (Author: vjasani):
The only issue I observed was that while applying the master patch to branch-2, 
the exclusions in the parent pom.xml got jumbled up and went under some other 
dependency, e.g. instead of the intended dependency, the exclusions ended up on 
a different one.

Hence, I had to generate a separate patch for branch-2, but I believe branch-2.1 
and branch-2.2 should mostly be fine. Anyway, I will reverify.

Thanks

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.branch-2.000.patch, 
> HBASE-22863.master.000.patch, HBASE-22863.master.001.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> Jackson1 is not being used in HBase code anymore and hence, we should include 
> it only at test scope if required by Hadoop but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912362#comment-16912362
 ] 

Viraj Jasani commented on HBASE-22863:
--

The only issue I observed was that while applying the master patch to branch-2, 
the exclusions in the parent pom.xml got jumbled up and went under some other 
dependency, e.g. instead of the intended dependency, the exclusions ended up on 
a different one.

Hence, I had to generate a separate patch for branch-2, but I believe branch-2.1 
and branch-2.2 should mostly be fine. Anyway, I will reverify.

Thanks
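
To make the mix-up above concrete, here is an illustrative pom.xml fragment (not taken from the actual patch) showing a correctly placed exclusions block under one of the Hadoop dependencies; the Jackson1 coordinates come from the dependency trees in the issue description, but which modules and dependencies the real patch edits may differ:

{code:xml}
<!-- Illustrative only. The point is that these exclusions must sit under the
     intended Hadoop dependency and not get re-applied to a neighbouring
     dependency when the patch is rebased onto branch-2. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-jaxrs</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-xc</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}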

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22863.branch-2.000.patch, 
> HBASE-22863.master.000.patch, HBASE-22863.master.001.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> Jackson1 is not being used in HBase code anymore and hence, we should include 
> it only at test scope if required by Hadoop but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22863) Avoid Jackson versions and dependencies with known CVEs

2019-08-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22863:
-
Release Note: 
1. Stopped exposing vulnerable Jackson1 dependencies so that downstream users 
no longer pull them in from HBase.
2. However, since Hadoop requires some Jackson1 dependencies, the vulnerable 
Jackson mapper is kept at test scope in some HBase modules; hence, the HBase 
tarball created by hbase-assembly still contains the Jackson1 mapper jar in lib. 
Even so, downstream applications can't pull in Jackson1 from HBase.

> Avoid Jackson versions and dependencies with known CVEs
> ---
>
> Key: HBASE-22863
> URL: https://issues.apache.org/jira/browse/HBASE-22863
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.6
>
> Attachments: HBASE-22863.branch-2.000.patch, 
> HBASE-22863.master.000.patch, HBASE-22863.master.001.patch
>
>
> Partly forwardport from branch-1 Jira: HBASE-22728
> Even though master and branch-2 have moved away from Jackson1 some time back, 
> HBase is still pulling in some vulnerable jackson dependencies (e.g. 
> jackson-mapper-asl:1.9.13) from Hadoop:
>  
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ hbase-mapreduce 
> ---
> [INFO] org.apache.hbase:hbase-mapreduce:jar:3.0.0-SNAPSHOT
> [INFO] +- org.apache.hbase:hbase-server:jar:3.0.0-SNAPSHOT:compile
> [INFO] |  \- org.apache.hbase:hbase-http:jar:3.0.0-SNAPSHOT:compile
> [INFO] | \- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] +- 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:2.8.5:test
> [INFO] |  \- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.8.5:compile
> [INFO]\- org.apache.hadoop:hadoop-yarn-common:jar:2.8.5:compile
> [INFO]   +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile{code}
> {code:java}
> [INFO] --- maven-dependency-plugin:3.1.1:tree (default-cli) @ 
> hbase-shaded-testing-util ---
> [INFO] org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:compile
> [INFO]+- com.sun.jersey:jersey-json:jar:1.9:compile
> [INFO]|  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:compile
> [INFO]|  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
> [INFO]+- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]\- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile{code}
> {code:java}
> [INFO] org.apache.hbase:hbase-shaded-testing-util-tester:jar:3.0.0-SNAPSHOT
> [INFO] \- org.apache.hbase:hbase-shaded-testing-util:jar:3.0.0-SNAPSHOT:test
> [INFO]\- org.apache.hadoop:hadoop-common:test-jar:tests:2.8.5:test
> [INFO]   +- com.sun.jersey:jersey-json:jar:1.9:test
> [INFO]   |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:test
> [INFO]   |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:test
> [INFO]   +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO]   \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> {code}
> Jackson1 is not being used in HBase code anymore and hence, we should include 
> it only at test scope if required by Hadoop but definitely exclude it from 
> corresponding Hadoop dependencies.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912343#comment-16912343
 ] 

Viraj Jasani edited comment on HBASE-22760 at 8/21/19 5:23 PM:
---

The patch includes 2 shell commands to enable/disable/query snapshot auto 
cleanup activity:

1. snapshot_auto_cleanup_switch

2. snapshot_auto_cleanup_enabled

Please review the patch 005.


was (Author: vjasani):
There are 2 shell commands to enable/disable/query snapshot auto cleanup 
activity:

1. snapshot_auto_cleanup_switch

2. snapshot_auto_cleanup_enabled

Please review the patch 005.

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)
> hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Description: 
For any scheduled snapshot backup activity, we would like to disable 
auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
config to disable snapshot auto-cleaner: hbase.master.cleaner.snapshot.disable, 
which would take effect only upon HMaster restart just similar to any other 
hbase-site configs.

For any running cluster, we should be able to stop/resume auto-cleanup activity 
for snapshot based on shell command. Something similar to below command should 
be able to stop/start cleanup chore:

hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable auto-cleaner)

hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)

  was:
For any scheduled snapshot backup activity, we would like to disable 
auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
config to disable snapshot auto-cleaner: hbase.master.cleaner.snapshot.disable, 
which would take effect only upon HMaster restart just similar to any other 
hbase-site configs.

For any running cluster, we should be able to stop/resume auto-cleanup activity 
for snapshot based on shell command. Something similar to below command should 
be able to stop/start cleanup chore:

hbase(main):001:0> auto_snapshot_cleaner false    (disable auto-cleaner)

hbase(main):001:0> auto_snapshot_cleaner true      (enable auto-cleaner)


> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable 
> auto-cleaner)
> hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22648) Snapshot TTL

2019-08-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913296#comment-16913296
 ] 

Viraj Jasani commented on HBASE-22648:
--

shell triggered stopping/resuming snapshot auto-cleanup activity for running 
cluster: HBASE-22760

> Snapshot TTL
> 
>
> Key: HBASE-22648
> URL: https://issues.apache.org/jira/browse/HBASE-22648
> Project: HBase
>  Issue Type: New Feature
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22648-branch-1.patch, HBASE-22648-branch-1.patch, 
> HBASE-22648-branch-2.patch, HBASE-22648-master-v2.patch, 
> HBASE-22648-master-v3.patch, HBASE-22648-master-v4.patch, 
> HBASE-22648-master-v5.patch, HBASE-22648-master-v6.patch, 
> HBASE-22648-master-v8.patch, HBASE-22648-master.patch, Screen Shot 2019-07-10 
> at 8.49.13 PM.png, Screen Shot 2019-07-10 at 8.52.30 PM.png, Screen Shot 
> 2019-07-10 at 9.06.36 PM.png, Screen Shot 2019-07-16 at 11.06.03 AM.png
>
>
> Snapshots have a lifecycle that is independent from the table from which they 
> are created. Although data in a table may be stored with TTL the data files 
> containing them become frozen by the snapshot. Space consumed by expired 
> cells will not be reclaimed by normal table housekeeping like compaction. 
> While this is expected it can be inconvenient at scale. When many snapshots 
> are under management and the data in various tables is expired by TTL some 
> notion of optional TTL (and optional default TTL) for snapshots could be 
> useful. It will help prevent the accumulation of junk files by automatically 
> dropping the snapshot after the assigned TTL, making their data files 
> eligible for cleaning. More comprehensive snapshot lifecycle management may 
> be considered in the future but this one case is expected to be immediately 
> useful given TTls on data are commonly applied for similar convenience. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22648) Snapshot TTL

2019-08-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913296#comment-16913296
 ] 

Viraj Jasani edited comment on HBASE-22648 at 8/22/19 1:08 PM:
---

shell triggered stopping/resuming/querying snapshot auto-cleanup activity for 
running cluster: HBASE-22760


was (Author: vjasani):
shell triggered stopping/resuming snapshot auto-cleanup activity for running 
cluster: HBASE-22760

> Snapshot TTL
> 
>
> Key: HBASE-22648
> URL: https://issues.apache.org/jira/browse/HBASE-22648
> Project: HBase
>  Issue Type: New Feature
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22648-branch-1.patch, HBASE-22648-branch-1.patch, 
> HBASE-22648-branch-2.patch, HBASE-22648-master-v2.patch, 
> HBASE-22648-master-v3.patch, HBASE-22648-master-v4.patch, 
> HBASE-22648-master-v5.patch, HBASE-22648-master-v6.patch, 
> HBASE-22648-master-v8.patch, HBASE-22648-master.patch, Screen Shot 2019-07-10 
> at 8.49.13 PM.png, Screen Shot 2019-07-10 at 8.52.30 PM.png, Screen Shot 
> 2019-07-10 at 9.06.36 PM.png, Screen Shot 2019-07-16 at 11.06.03 AM.png
>
>
> Snapshots have a lifecycle that is independent from the table from which they 
> are created. Although data in a table may be stored with TTL the data files 
> containing them become frozen by the snapshot. Space consumed by expired 
> cells will not be reclaimed by normal table housekeeping like compaction. 
> While this is expected it can be inconvenient at scale. When many snapshots 
> are under management and the data in various tables is expired by TTL some 
> notion of optional TTL (and optional default TTL) for snapshots could be 
> useful. It will help prevent the accumulation of junk files by automatically 
> dropping the snapshot after the assigned TTL, making their data files 
> eligible for cleaning. More comprehensive snapshot lifecycle management may 
> be considered in the future but this one case is expected to be immediately 
> useful given TTls on data are commonly applied for similar convenience. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913428#comment-16913428
 ] 

Viraj Jasani commented on HBASE-22460:
--

Can I please take this, unless someone is actively working on it? We can then 
discuss further details such as the default refCount threshold.

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914058#comment-16914058
 ] 

Viraj Jasani commented on HBASE-22460:
--

In my view, there are a couple of approaches to reopening a region with a very 
high refCount (a rough sketch of approach #1 is included below):
 # We can have a RegionServer background thread look at the refCount of all 
regions hosted on that server and, if something looks abnormal (configurable 
threshold), the RegionServer itself should close the region and open it again 
immediately. This way HMaster, AssignmentManager and the reopen-region procedure 
don't get involved, since it is an immediate reopen of the region using a close 
followed by an open.
 # We can have an HMaster thread look at the refCount of all regions through each 
server's metrics when they are reported to HMaster by the individual 
RegionServers (regionServerReport: either within its current scope or via a new 
report), and let HMaster take care of reopening any region with an abnormal 
refCount. In this case, we can reuse part of ReopenTableRegionsProcedure, and 
AssignmentManager will get involved for the entire state management. This might 
not be as quick as the RS doing it itself, but might be preferred due to the 
state management (RS → Metrics → HMaster → ReopenRegion using a procedure).

I believe the 1st approach might be better since the RegionServer can take care 
of the regions it hosts itself, it is fast, and no region movement is involved, 
but the 2nd has the advantage of state management?

Requesting your opinions; please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi]  @Watchers
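
To make approach #1 concrete, a rough and heavily simplified Java sketch follows. The RegionServerView interface, getStoreRefCount() and reopenLocally() are hypothetical placeholders standing in for RegionServer internals, not existing APIs; a real implementation would go through the RegionServer's normal close/open path and a configurable threshold.

{code:java}
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

/**
 * Rough sketch of approach #1: a RegionServer-side chore that watches store
 * reader refCounts and reopens a region locally when a leak is suspected.
 * RegionServerView, getStoreRefCount() and reopenLocally() are hypothetical
 * placeholders, not existing APIs.
 */
public class StoreRefCountLeakChore extends ScheduledChore {

  /** Hypothetical narrow view over the RegionServer internals. */
  public interface RegionServerView {
    Iterable<String> onlineRegions();
    int getStoreRefCount(String encodedRegionName);
    void reopenLocally(String encodedRegionName);
  }

  private final RegionServerView server;
  private final int refCountThreshold; // the configurable "ridiculous" threshold

  public StoreRefCountLeakChore(Stoppable stopper, RegionServerView server,
      int period, int refCountThreshold) {
    super("StoreRefCountLeakChore", stopper, period); // period in the chore service's default unit
    this.server = server;
    this.refCountThreshold = refCountThreshold;
  }

  @Override
  protected void chore() {
    for (String region : server.onlineRegions()) {
      int refCount = server.getStoreRefCount(region); // hypothetical accessor
      if (refCount > refCountThreshold) {
        // Fat WARN, then an immediate local close followed by open of the region.
        System.err.println("WARN possible store reader leak, refCount=" + refCount
            + " for region " + region + "; reopening locally");
        server.reopenLocally(region); // hypothetical helper
      }
    }
  }
}
{code}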

 

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HBASE-22460:


Assignee: Viraj Jasani

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914058#comment-16914058
 ] 

Viraj Jasani edited comment on HBASE-22460 at 8/23/19 8:50 AM:
---

According to me, a couple of approaches to achieve reopen of a region with very 
high refCount:
 # We can have RegionServer background thread looking into refCount of all 
regions hosted on that server and if something looks abnormal(configurable), 
RegionServer itself should close the region and open it immediately. This way 
HMaster, AssignmentManager and Reopen region procedure don't get involved since 
it is quite immediate reopen of a region using close followed by open of region.
 # We can have HMaster thread looking into refCount of all regions through each 
server metrics when it is reported to HMaster by individual 
RegionServer(regionServerReport: within the scope of this or create new report 
may be) and let HMaster take care of region reopen for region with abnormal 
refCount. In this case, we can reuse some part of ReopenTableRegionsProcedure 
and AssignmentManager will get involved for the entire state management. This 
might not be as quick as RS doing it but might be preferred due to state 
management? (RS → Metrics → HMaster → ReopenRegion using procedure).

I believe 1st approach might be better since it is RegionServer who can take 
care of regions hosted on itself and it is fast and no movement of region 
involved, but 2nd might have advantage of state management?

Requesting your opinions and please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi] [~reidchan] @Watchers

 


was (Author: vjasani):
According to me, a couple of approaches to achieve reopen of a region with very 
high refCount:
 # We can have RegionServer background thread looking into refCount of all 
regions hosted on that server and if something looks abnormal(configurable), 
RegionServer itself should close the region and open it immediately. This way 
HMaster, AssignmentManager and Reopen region procedure don't get involved since 
it is quite immediate reopen of a region using close followed by open of region.
 # We can have HMaster thread looking into refCount of all regions through each 
server metrics when it is reported to HMaster by individual 
RegionServer(regionServerReport: within the scope of this or create new report 
may be) and let HMaster take care of region reopen for region with abnormal 
refCount. In this case, we can reuse some part of ReopenTableRegionsProcedure 
and AssignmentManager will get involved for the entire state management. This 
might not be as quick as RS doing it but might be preferred due to state 
management? (RS → Metrics → HMaster → ReopenRegion using procedure).

I believe 1st approach might be better since it is RegionServer who can take 
care of regions hosted on itself and it is fast and no movement of region 
involved, but 2nd might have advantage of state management?

Requesting your opinions and please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi]  @Watchers

 

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (HBASE-22903) alter_status command is broken

2019-08-23 Thread Viraj Jasani (Jira)
Viraj Jasani created HBASE-22903:


 Summary: alter_status command is broken
 Key: HBASE-22903
 URL: https://issues.apache.org/jira/browse/HBASE-22903
 Project: HBase
  Issue Type: Bug
  Components: Admin, shell
Affects Versions: 3.0.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


This is applicable to master branch only:
{code:java}
> alter_status 't1'

ERROR: undefined method `getAlterStatus' for 
#

{code}
 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914058#comment-16914058
 ] 

Viraj Jasani edited comment on HBASE-22460 at 8/23/19 11:59 AM:


According to me, a couple of approaches to achieve reopen of a region with very 
high refCount:
 # We can have RegionServer background thread looking into refCount of all 
regions hosted on that server and if something looks abnormal(configurable), 
RegionServer itself should close the region and open it immediately. This way 
HMaster, AssignmentManager and Reopen region procedure don't get involved since 
it is quite immediate reopen of a region using close followed by open of 
region. (CloseRegionHandler & OpenRegionHandler)
 # We can have HMaster thread looking into refCount of all regions through each 
server metrics when it is reported to HMaster by individual 
RegionServer(regionServerReport: within the scope of this or create new report 
may be) and let HMaster take care of region reopen for region with abnormal 
refCount. In this case, we can reuse some part of ReopenTableRegionsProcedure 
and AssignmentManager will get involved for the entire state management. This 
might not be as quick as RS doing it but might be preferred due to state 
management? (RS → Metrics → HMaster → ReopenRegion using procedure).

I believe 1st approach might be better since it is RegionServer who can take 
care of regions hosted on itself and it is fast and no movement of region 
involved, but 2nd might have advantage of state management?

Requesting your opinions and please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi] [~reidchan] @Watchers

 


was (Author: vjasani):
According to me, a couple of approaches to achieve reopen of a region with very 
high refCount:
 # We can have RegionServer background thread looking into refCount of all 
regions hosted on that server and if something looks abnormal(configurable), 
RegionServer itself should close the region and open it immediately. This way 
HMaster, AssignmentManager and Reopen region procedure don't get involved since 
it is quite immediate reopen of a region using close followed by open of region.
 # We can have HMaster thread looking into refCount of all regions through each 
server metrics when it is reported to HMaster by individual 
RegionServer(regionServerReport: within the scope of this or create new report 
may be) and let HMaster take care of region reopen for region with abnormal 
refCount. In this case, we can reuse some part of ReopenTableRegionsProcedure 
and AssignmentManager will get involved for the entire state management. This 
might not be as quick as RS doing it but might be preferred due to state 
management? (RS → Metrics → HMaster → ReopenRegion using procedure).

I believe 1st approach might be better since it is RegionServer who can take 
care of regions hosted on itself and it is fast and no movement of region 
involved, but 2nd might have advantage of state management?

Requesting your opinions and please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi] [~reidchan] @Watchers

 

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914058#comment-16914058
 ] 

Viraj Jasani edited comment on HBASE-22460 at 8/23/19 4:58 PM:
---

According to me, a couple of approaches to achieve reopen of a region with very 
high refCount:
 # We can have RegionServer background thread looking into refCount of all 
regions hosted on that server and if something looks abnormal(configurable), 
RegionServer itself should close the region and open it immediately. This way 
HMaster, AssignmentManager and Reopen region procedure don't get involved since 
it is quite immediate reopen of a region using close followed by open of 
region. (CloseRegionHandler & OpenRegionHandler)
 # We can have HMaster thread looking into refCount of all regions through each 
server metrics when it is reported to HMaster by individual 
RegionServer(regionServerReport) and let HMaster take care of region reopen for 
region with abnormal refCount. In this case, we can reuse some part of 
ReopenTableRegionsProcedure and AssignmentManager will get involved for the 
entire state management. This might not be as quick as RS doing it but might be 
preferred due to state management? (RS → Metrics → HMaster → ReopenRegion using 
procedure).

I believe 1st approach might be better since it is RegionServer who can take 
care of regions hosted on itself and it is fast and no movement of region 
involved, but 2nd might have advantage of state management?

Requesting your opinions and please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi] [~reidchan] @Watchers

 


was (Author: vjasani):
According to me, a couple of approaches to achieve reopen of a region with very 
high refCount:
 # We can have RegionServer background thread looking into refCount of all 
regions hosted on that server and if something looks abnormal(configurable), 
RegionServer itself should close the region and open it immediately. This way 
HMaster, AssignmentManager and Reopen region procedure don't get involved since 
it is quite immediate reopen of a region using close followed by open of 
region. (CloseRegionHandler & OpenRegionHandler)
 # We can have HMaster thread looking into refCount of all regions through each 
server metrics when it is reported to HMaster by individual 
RegionServer(regionServerReport: within the scope of this or create new report 
may be) and let HMaster take care of region reopen for region with abnormal 
refCount. In this case, we can reuse some part of ReopenTableRegionsProcedure 
and AssignmentManager will get involved for the entire state management. This 
might not be as quick as RS doing it but might be preferred due to state 
management? (RS → Metrics → HMaster → ReopenRegion using procedure).

I believe 1st approach might be better since it is RegionServer who can take 
care of regions hosted on itself and it is fast and no movement of region 
involved, but 2nd might have advantage of state management?

Requesting your opinions and please let me know if I am missing something. 

[~apurtell] [~busbey] [~Apache9] [~anoop.hbase] [~openinx]  [~stack] 
[~psomogyi] [~reidchan] @Watchers

 

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912343#comment-16912343
 ] 

Viraj Jasani edited comment on HBASE-22760 at 8/23/19 6:20 PM:
---

The patch includes 2 shell commands, one to enable/disable and one to query the 
status of snapshot auto-cleanup activity:
 # snapshot_auto_cleanup_switch
 # snapshot_auto_cleanup_enabled

Please review the patch 005.
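
For reference, a rough client-side sketch of how the same switch could be driven 
from Java once the Admin API mirrors these shell commands. The 
snapshotCleanupSwitch/isSnapshotCleanupEnabled method names are assumptions here 
(pending the patch); the connection plumbing is the standard client API.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SnapshotAutoCleanupToggle {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Equivalent of: snapshot_auto_cleanup_switch false (pause TTL-based cleanup during backups)
      boolean previous = admin.snapshotCleanupSwitch(false, true /* synchronous */); // assumed API
      System.out.println("Auto-cleanup was previously " + (previous ? "enabled" : "disabled"));

      // Equivalent of: snapshot_auto_cleanup_enabled (query the current state)
      System.out.println("Auto-cleanup enabled now: " + admin.isSnapshotCleanupEnabled()); // assumed API

      // Equivalent of: snapshot_auto_cleanup_switch true (resume cleanup after the backup window)
      admin.snapshotCleanupSwitch(true, true); // assumed API
    }
  }
}
{code}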


was (Author: vjasani):
The patch includes 2 shell commands to enable/disable/query snapshot auto 
cleanup activity:

1. snapshot_auto_cleanup_switch

2. snapshot_auto_cleanup_enabled

Please review the patch 005.

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable 
> auto-cleaner)
> hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914843#comment-16914843
 ] 

Viraj Jasani commented on HBASE-22460:
--

{quote}Doing #1 violates Master being in charge of assign. Doing in Master is 
also the more frugal choice. #1 requires every RS running a monitoring thread. 
Instead we can run one task in Master for whole cluster to look at refcounts 
and it then does the reopen and no chance of it being surprised by 
self-ordained RS reopen.
{quote}
Sure, if HMaster is preferred as the initiator, good to go with #2.

 
{quote}The master can still be notified the region has closed, and then 
notified again when it has reopened. There may need to be master side changes 
to accommodate this, true.
{quote}
In this case, probably good to go with #2? Although I think the number of RPC 
calls might remain the same in both cases (#1: each RS->HM and #2: HM->each RS), 
since #2 involves AM and state machine procedures (with rollback support etc.), 
it may be better to pursue #2?

 
{quote}I suppose a hack an operator can do is watch ref count metrics and if 
judged to be indicative of a leak, could alter the table schema
{quote}
The only catch is it will reopen all regions of the table, not the specific one.
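
As a concrete illustration of that hack, a sketch against the standard 2.x client 
API (the attribute key below is arbitrary; it only exists to force a schema 
change, which makes the Master reopen every region of the table):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class ForceTableReopen {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf(args[0]);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor current = admin.getDescriptor(table);
      // Bump a harmless table attribute so a schema change is applied and every region reopens.
      TableDescriptor bumped = TableDescriptorBuilder.newBuilder(current)
          .setValue("FORCE_REOPEN_TS", String.valueOf(System.currentTimeMillis()))
          .build();
      admin.modifyTable(bumped); // reopens all regions of the table, not just the leaky one
    }
  }
}
{code}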

 

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-08-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914843#comment-16914843
 ] 

Viraj Jasani edited comment on HBASE-22460 at 8/24/19 10:52 AM:


{quote}Doing #1 violates Master being in charge of assign. Doing in Master is 
also the more frugal choice. #1 requires every RS running a monitoring thread. 
Instead we can run one task in Master for whole cluster to look at refcounts 
and it then does the reopen and no chance of it being surprised by 
self-ordained RS reopen.
{quote}
Sure, if HMaster is preferred as the initiator, good to go with #2.

 
{quote}The master can still be notified the region has closed, and then 
notified again when it has reopened. There may need to be master side changes 
to accommodate this, true.
{quote}
In this case, probably good to go with #2? Although I think the number of RPC 
calls might remain the same in both cases (#1: each RS->HM and #2: HM->each RS), 
since #2 involves AM and state machine procedures (with rollback support etc.), 
it may be better to pursue #2?

 
{quote}I suppose a hack an operator can do is watch ref count metrics and if 
judged to be indicative of a leak, could alter the table schema
{quote}
The only catch is it will reopen all regions of the table, not just the desired 
one.

 


was (Author: vjasani):
{quote}Doing #1 violates Master being in charge of assign. Doing in Master is 
also the more frugal choice. #1 requires every RS running a monitoring thread. 
Instead we can run one task in Master for whole cluster to look at refcounts 
and it then does the reopen and no chance of it being surprised by 
self-ordained RS reopen.
{quote}
Sure if HMaster is preferred as initiator, good to go with #2.

 
{quote}The master can still be notified the region has closed, and then 
notified again when it has reopened. There may need to be master side changes 
to accommodate this, true.
{quote}
In this case, probably good to go with #2? Although I think no of RPC calls 
might remain same in both cases(#1: each RS->HM and #2: HM->each RS) but since 
#2 involves AM and State machine procedures (with rollback feature etc), may be 
good to pursue #2?

 
{quote}I suppose a hack an operator can do is watch ref count metrics and if 
judged to be indicative of a leak, could alter the table schema
{quote}
The only catch is it will reopen all regions of the table, not the specific one.

 

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Viraj Jasani
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS) This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Attachment: HBASE-22760.master.007.patch

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch, 
> HBASE-22760.master.007.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable 
> auto-cleaner)
> hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Attachment: (was: HBASE-22760.master.007.patch)

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable 
> auto-cleaner)
> hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-08-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22760:
-
Attachment: HBASE-22760.master.008.patch

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 1.4.11
>
> Attachments: HBASE-22760.master.003.patch, 
> HBASE-22760.master.004.patch, HBASE-22760.master.005.patch, 
> HBASE-22760.master.008.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable 
> auto-cleaner)
> hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22911) fewer concurrent github PR builds

2019-08-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914978#comment-16914978
 ] 

Viraj Jasani commented on HBASE-22911:
--

disableConcurrentBuilds() is not going to have any impact on the precommit build 
jobs: [https://builds.apache.org/job/PreCommit-HBASE-Build/], correct? If 
multiple JIRAs have patches uploaded around the same time, their builds can run 
concurrently, if I am not mistaken?

> fewer concurrent github PR builds
> -
>
> Key: HBASE-22911
> URL: https://issues.apache.org/jira/browse/HBASE-22911
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 2.1.6, 1.3.6, 1.4.11, 2.0.7
>
> Attachments: HBASE-22911.0.patch
>
>
> we've been regularly getting 4-5 concurrent builds of PRs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22911) fewer concurrent github PR builds

2019-08-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914978#comment-16914978
 ] 

Viraj Jasani edited comment on HBASE-22911 at 8/24/19 3:07 PM:
---

disableConcurrentBuilds() is not going to have any impact on the precommit build 
jobs: [https://builds.apache.org/job/PreCommit-HBASE-Build/], correct? If 
multiple JIRAs have patches uploaded around the same time, their builds can run 
concurrently, if I am not mistaken?

Only multiple executions of a single PR will not run in parallel.


was (Author: vjasani):
disableConcurrentBuilds() is not going to have any impact on precommit build 
jobs: [https://builds.apache.org/job/PreCommit-HBASE-Build/] correct? If 
multiple JIRAs have patches uploaded around same time, they can have their 
builds run concurrently if I am not mistaken?

> fewer concurrent github PR builds
> -
>
> Key: HBASE-22911
> URL: https://issues.apache.org/jira/browse/HBASE-22911
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 2.1.6, 1.3.6, 1.4.11, 2.0.7
>
> Attachments: HBASE-22911.0.patch
>
>
> we've been regularly getting 4-5 concurrent builds of PRs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22911) fewer concurrent github PR builds

2019-08-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914978#comment-16914978
 ] 

Viraj Jasani edited comment on HBASE-22911 at 8/24/19 3:14 PM:
---

disableConcurrentBuilds() is not going to have any impact on the precommit build 
jobs: [https://builds.apache.org/job/PreCommit-HBASE-Build/], correct? If 
multiple JIRAs have patches uploaded around the same time, their builds can run 
concurrently, if I am not mistaken?

Only concurrent executions of PRs will not happen.


was (Author: vjasani):
disableConcurrentBuilds() is not going to have any impact on precommit build 
jobs: [https://builds.apache.org/job/PreCommit-HBASE-Build/] correct? If 
multiple JIRAs have patches uploaded around same time, they can have their 
builds run concurrently if I am not mistaken?

Only single PR's multiple executions will not happen in parallel.

> fewer concurrent github PR builds
> -
>
> Key: HBASE-22911
> URL: https://issues.apache.org/jira/browse/HBASE-22911
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.3.0, 2.2.1, 2.1.6, 1.3.6, 1.4.11, 2.0.7
>
> Attachments: HBASE-22911.0.patch
>
>
> we've been regularly getting 4-5 concurrent builds of PRs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22903) alter_status command is broken

2019-08-25 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22903:
-
Fix Version/s: 3.0.0
   Status: Patch Available  (was: In Progress)

> alter_status command is broken
> --
>
> Key: HBASE-22903
> URL: https://issues.apache.org/jira/browse/HBASE-22903
> Project: HBase
>  Issue Type: Bug
>  Components: Admin, shell
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22903.master.000.patch
>
>
> This is applicable to master branch only:
> {code:java}
> > alter_status 't1'
> ERROR: undefined method `getAlterStatus' for 
> #
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22903) alter_status command is broken

2019-08-25 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22903:
-
Attachment: HBASE-22903.master.000.patch

> alter_status command is broken
> --
>
> Key: HBASE-22903
> URL: https://issues.apache.org/jira/browse/HBASE-22903
> Project: HBase
>  Issue Type: Bug
>  Components: Admin, shell
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: HBASE-22903.master.000.patch
>
>
> This is applicable to master branch only:
> {code:java}
> > alter_status 't1'
> ERROR: undefined method `getAlterStatus' for 
> #
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

