[jira] [Commented] (HBASE-15616) CheckAndMutate will encounter NPE if qualifier to check is null

2017-05-17 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015219#comment-16015219
 ] 

Guanghao Zhang commented on HBASE-15616:


Attached a v3 patch. I added a unit test to make sure a null qualifier is allowed 
by the increment/append/checkAnd* operations. For put, a cell is created for each 
column and value, and the qualifier is copied from the byte array; because the 
qualifier length is 0, a null qualifier ends up as a new byte[0]. The v3 patch 
uses new byte[0] for increment/checkAnd*'s null qualifier, too.
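The normalization described above can be sketched as a tiny stand-alone helper; `normalizeQualifier` is a hypothetical name for illustration, not the actual method in the patch:

```java
public class NullQualifierSketch {
    // Hypothetical helper (not the actual patch code): protobuf's ByteString
    // cannot wrap a null array, so a null qualifier is normalized to an empty
    // byte[0] before the request is serialized.
    static byte[] normalizeQualifier(byte[] qualifier) {
        return (qualifier == null) ? new byte[0] : qualifier;
    }

    public static void main(String[] args) {
        System.out.println(normalizeQualifier(null).length);           // prints 0
        System.out.println(normalizeQualifier(new byte[] {1}).length); // prints 1
    }
}
```

With this in place, a null qualifier serializes as a zero-length ByteString instead of triggering the NPE below.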

> CheckAndMutate will encounter NPE if qualifier to check is null
> --
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Guanghao Zhang
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch, 
> HBASE-15616-v3.patch
>
>
> If the qualifier to check is null, checkAndMutate/checkAndPut/checkAndDelete 
> will encounter an NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is {{LiteralByteString.size()}} will throw NPE if the wrapped byte 
> array is null. It is possible to invoke {{put}} and {{checkAndMutate}} on the 
> same column; because a null qualifier is allowed for {{Put}}, users may be 
> confused if a null qualifier is not allowed for {{checkAndMutate}}. We can also 
> convert a null qualifier to an empty byte array for {{checkAndMutate}} on the 
> client side. Discussions and suggestions are welcomed.

[jira] [Assigned] (HBASE-15616) CheckAndMutate will encounter NPE if qualifier to check is null

2017-05-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-15616:
--

Assignee: Guanghao Zhang  (was: Jianwei Cui)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15616) CheckAndMutate will encounter NPE if qualifier to check is null

2017-05-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-15616:
---
Attachment: HBASE-15616-v3.patch






[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015159#comment-16015159
 ] 

Chia-Ping Tsai commented on HBASE-18019:


bq. consider using "close" instead of clear since we close all the memstore 
scanners.
Good point. Will commit patch with this suggestion.

> Clear redundant memstore scanners
> -
>
> Key: HBASE-18019
> URL: https://issues.apache.org/jira/browse/HBASE-18019
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18019.v0.patch, HBASE-18019.v1.patch
>
>
> HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now 
> returns multiple KeyValueScanners, which consist of the active, snapshot, and 
> pipeline segments. But StoreScanner only removes one memstore scanner when 
> refreshing the current scanners.
> {code}
> for (int i = 0; i < currentScanners.size(); i++) {
>   if (!currentScanners.get(i).isFileScanner()) {
> currentScanners.remove(i);
> break;
>   }
> }
> {code}
> The older scanners kept in the StoreScanner will hinder GC from releasing 
> memory and lead to multiple scans on the same data.
>  
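As a rough sketch of the direction discussed here (close and drop every memstore scanner, not just the first), with `KeyValueScanner` reduced to a minimal stand-in interface rather than the real HBase class:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class CloseMemScannersSketch {
    // Minimal stand-in for HBase's KeyValueScanner, just enough for the loop.
    interface KeyValueScanner {
        boolean isFileScanner();
        void close();
    }

    // Instead of removing a single memstore scanner and breaking out of the
    // loop, close and remove every non-file scanner via the iterator.
    static void closeMemstoreScanners(List<KeyValueScanner> currentScanners) {
        for (Iterator<KeyValueScanner> it = currentScanners.iterator(); it.hasNext(); ) {
            KeyValueScanner scanner = it.next();
            if (!scanner.isFileScanner()) {
                scanner.close(); // release memstore references so GC can reclaim them
                it.remove();
            }
        }
    }

    static KeyValueScanner scanner(boolean isFile) {
        return new KeyValueScanner() {
            public boolean isFileScanner() { return isFile; }
            public void close() { /* no-op for the sketch */ }
        };
    }

    public static void main(String[] args) {
        List<KeyValueScanner> scanners = new ArrayList<>(
            Arrays.asList(scanner(false), scanner(true), scanner(false)));
        closeMemstoreScanners(scanners);
        System.out.println(scanners.size()); // prints 1: only the file scanner remains
    }
}
```

Using the iterator's remove avoids the index-skipping pitfall of removing from a list while looping over it by position.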





[jira] [Commented] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi and TestAsyncRegionAdminApi

2017-05-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015116#comment-16015116
 ] 

Zheng Hu commented on HBASE-18003:
--

For the flaky TestAsyncTableAdminApi, the 
log (https://builds.apache.org/job/HBASE-Flaky-Tests/16193/consoleText) says: 
{code}
Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 118.89 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.client.TestAsyncTableAdminApi
testListTables(org.apache.hadoop.hbase.client.TestAsyncTableAdminApi)  Time 
elapsed: 7.507 sec  <<< ERROR!
org.apache.hadoop.hbase.TableExistsException: testListTables3
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:232)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:90)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:54)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:153)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:741)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1334)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1130)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:76)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1596)
{code}

It seems that we created the testListTables3 table twice, but I have not found 
out why yet.
FYI [~zghaobac].

> Fix flaky test TestAsyncTableAdminApi and TestAsyncRegionAdminApi
> -
>
> Key: HBASE-18003
> URL: https://issues.apache.org/jira/browse/HBASE-18003
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
>
> See 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html





[jira] [Commented] (HBASE-18058) Zookeeper retry sleep time should have an upper limit

2017-05-17 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015106#comment-16015106
 ] 

Allan Yang commented on HBASE-18058:


Sure, I will add zookeeper.recovery.retry.maxsleeptime to hbase-defaults.xml 
and add some description,  Thanks, [~apurtell] and [~carp84].

> Zookeeper retry sleep time should have an upper limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058.patch
>
>
> Now, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it doesn't have any upper limit. This directly leads to a 
> very long recovery time after ZooKeeper goes down for a while and comes back.
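The capped backoff being proposed can be sketched as follows; the base and cap values here are illustrative, not the actual defaults behind `zookeeper.recovery.retry.maxsleeptime`:

```java
public class CappedBackoffSketch {
    // Illustrative values, not the actual HBase defaults for
    // zookeeper.recovery.retry.maxsleeptime.
    static final long BASE_SLEEP_MS = 100;
    static final long MAX_SLEEP_MS = 60_000;

    // Exponential backoff that stops growing at the cap, so a long ZooKeeper
    // outage no longer inflates the next retry's sleep time unboundedly.
    static long sleepForRetry(int retryCount) {
        long sleep = BASE_SLEEP_MS << Math.min(retryCount, 30); // clamp shift to avoid overflow
        return Math.min(sleep, MAX_SLEEP_MS);
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 12; i += 4) {
            System.out.println("retry " + i + " -> " + sleepForRetry(i) + " ms");
        }
    }
}
```

With the cap, a client that has been retrying for hours still waits at most `MAX_SLEEP_MS` before its next attempt once ZooKeeper comes back.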





[jira] [Updated] (HBASE-17907) [C++] ScanCallerBuilder

2017-05-17 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17907:
--
Attachment: hbase-scan-v0.patch

This is a more complete end-to-end scan patch. The plan was to bring this in 
pieces, but I ended up with a whole patch. Let me see whether I can chop this 
up now. 

The design is based on the async client scanner from async Java implementation.

The scan patch may need a couple more unit tests, but it is working end-to-end. 
 

> [C++] ScanCallerBuilder
> ---
>
> Key: HBASE-17907
> URL: https://issues.apache.org/jira/browse/HBASE-17907
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Fix For: HBASE-14850
>
> Attachments: hbase-17907-v1.patch, hbase-scan-v0.patch
>
>
> ScanCallerBuilder, related classes and changes needed for scan support. 





[jira] [Commented] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi and TestAsyncRegionAdminApi

2017-05-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015093#comment-16015093
 ] 

Zheng Hu commented on HBASE-18003:
--

TestAsyncRegionAdminApi is a flaky test too.






[jira] [Updated] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi and TestAsyncRegionAdminApi

2017-05-17 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18003:
-
Summary: Fix flaky test TestAsyncTableAdminApi and TestAsyncRegionAdminApi  
(was: Fix flaky test TestAsyncTableAdminApi)






[jira] [Commented] (HBASE-18001) Extend the "count" shell command to support specified conditions

2017-05-17 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015066#comment-16015066
 ] 

Guangxu Cheng commented on HBASE-18001:
---

UT looks good. [~chia7712] [~jmspaggi] Any other concerns? Thanks.

> Extend the "count" shell command to support specified conditions
> 
>
> Key: HBASE-18001
> URL: https://issues.apache.org/jira/browse/HBASE-18001
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-18001-v1.patch, HBASE-18001-v2.patch, 
> HBASE-18001-v3.patch
>
>
> The shell command "count" can only count the total number of rows in a table; 
> it cannot count rows that match specified conditions.
> Can we allow users to specify conditions, like the "scan" command does?
> In that case, we could count the number of rows under any conditions.





[jira] [Comment Edited] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015060#comment-16015060
 ] 

Zheng Hu edited comment on HBASE-18068 at 5/18/17 1:48 AM:
---

[~appy],  TestAsyncSnapshotAdminApi was written by me, and after checking the 
code I found the problem: a snapshot restore may still be in progress even 
after async admin.restoreSnapshot(..).get() returns, because the call just 
submits a procedure and returns the procedure id (the unit test can pass on a 
local desktop because the restore is so quick).

was (Author: openinx):
[~appy],  TestAsyncSnapshotAdminApi is written by me, and after checking the 
code, I found the problem:   a snapshot restore may be still in progress even 
though async admin.restoreSnapshot(..).get(),  because it just submit a 
procedure and get the procedure Id returned (ut can pass in local desktop  
because restore is so quick).  So , let me fix the bug. 
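The gap described above (the future completes when the procedure is submitted, not when it finishes) suggests waiting on the procedure's completion before asserting. A generic polling sketch, with all names hypothetical rather than the real AsyncAdmin internals:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class AwaitProcedureSketch {
    // Block until a completion check reports done or the timeout expires.
    // In the real fix this check would ask the master whether the restore
    // procedure id has finished; here it is an arbitrary BooleanSupplier.
    static boolean awaitDone(BooleanSupplier isDone, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (!isDone.getAsBoolean()) {
            if (System.nanoTime() >= deadline) {
                return false;
            }
            Thread.sleep(pollMs);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated "procedure" that completes after roughly 50 ms.
        boolean done = awaitDone(() -> System.currentTimeMillis() - start >= 50, 1_000, 10);
        System.out.println(done); // prints true
    }
}
```

The same polling idea explains why the test passes on a fast local machine: the restore finishes before the next assertion runs, so the race never shows up.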

> Fix flaky test TestAsyncSnapshotAdminApi
> 
>
> Key: HBASE-18068
> URL: https://issues.apache.org/jira/browse/HBASE-18068
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
>
> Test failures
> {noformat}
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testRestoreSnapshot
> 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: Restore already in 
> progress on the table=testRestoreSnapshot
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:854)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:818)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreOrCloneSnapshot(SnapshotManager.java:780)
>  at org.apache.hadoop.hbase.master.HMaster$14.run(HMaster.java:2324)
>  at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
>  at org.apache.hadoop.hbase.master.HMaster.restoreSnapshot(HMaster.java:2320)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.restoreSnapshot(MasterRpcServices.java:1224)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testDeleteSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: Rejected taking { 
> ss=snapshotName1 table=testDeleteSnapshots type=FLUSH } because we are 
> already running another snapshot on the same table { ss=snapshotName1 
> table=testDeleteSnapshots type=FLUSH }
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.prepareToTakeSnapshot(SnapshotManager.java:440)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:497)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:598)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1299)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testListSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
> 'snapshotName2' doesn't exist on the filesystem
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:289)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:461)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> {noformat}
> https://builds.apache.org/job/HBASE-Flaky-Tests/16152/

[jira] [Commented] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015064#comment-16015064
 ] 

Zheng Hu commented on HBASE-18068:
--

If you have a fix for it, then please go ahead :) 



[jira] [Commented] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015060#comment-16015060
 ] 

Zheng Hu commented on HBASE-18068:
--

[~appy],  TestAsyncSnapshotAdminApi was written by me, and after checking the 
code I found the problem: a snapshot restore may still be in progress even 
after async admin.restoreSnapshot(..).get() returns, because the call only 
submits a procedure and returns the procedure id (the ut can pass on a local 
desktop because the restore is so quick). So, let me fix the bug.
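Since the future may complete once the procedure is merely submitted, a test can poll for the expected state until a deadline instead of asserting immediately after the call. The following is a minimal, self-contained sketch of such a wait helper; the class and method names are illustrative, not the actual HBase test utilities:

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {
  // Polls `condition` every `intervalMs` until it is true or `timeoutMs` elapses.
  public static boolean waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(intervalMs);
    }
    return condition.getAsBoolean(); // final check at the deadline
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Condition becomes true after ~50 ms, simulating a procedure completing.
    boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50, 1000, 10);
    System.out.println(ok); // prints "true"
  }
}
```

A test would then call waitFor(..) with a condition that checks the restored table's state, rather than asserting right after restoreSnapshot(..).get().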

> Fix flaky test TestAsyncSnapshotAdminApi
> 
>
> Key: HBASE-18068
> URL: https://issues.apache.org/jira/browse/HBASE-18068
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
>
> Test failures
> {noformat}
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testRestoreSnapshot
> 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: Restore already in 
> progress on the table=testRestoreSnapshot
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:854)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:818)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreOrCloneSnapshot(SnapshotManager.java:780)
>  at org.apache.hadoop.hbase.master.HMaster$14.run(HMaster.java:2324)
>  at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
>  at org.apache.hadoop.hbase.master.HMaster.restoreSnapshot(HMaster.java:2320)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.restoreSnapshot(MasterRpcServices.java:1224)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testDeleteSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: Rejected taking { 
> ss=snapshotName1 table=testDeleteSnapshots type=FLUSH } because we are 
> already running another snapshot on the same table { ss=snapshotName1 
> table=testDeleteSnapshots type=FLUSH }
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.prepareToTakeSnapshot(SnapshotManager.java:440)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:497)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:598)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1299)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testListSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
> 'snapshotName2' doesn't exist on the filesystem
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:289)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:461)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> {noformat}
> 

[jira] [Commented] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-05-17 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015036#comment-16015036
 ] 

Guanghao Zhang commented on HBASE-14247:


[~aertoria] I already have a patch for master and our 0.98 branch, but it 
needs some rebase work. I will upload the patch later. Thanks.

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory of oldWALs. In big clusters, because of long WAL TTLs or disabled 
> replications, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which can crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome~ Thanks
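As a rough illustration of the proposed layout, archived WALs could land under a per-regionserver subdirectory derived from the server-name prefix of the WAL file name. This helper is an assumption for illustration only, not the actual HBase code:

```java
public class OldWalLayout {
  // A WAL file name typically starts with the server name
  // (host,port,startcode), followed by a trailing ".timestamp".
  // This hypothetical helper extracts the server-name prefix and builds a
  // per-server archive path under the oldWALs directory.
  public static String archivePath(String oldWalsDir, String walFileName) {
    int cut = walFileName.lastIndexOf('.');
    String serverName = cut > 0 ? walFileName.substring(0, cut) : walFileName;
    return oldWalsDir + "/" + serverName + "/" + walFileName;
  }

  public static void main(String[] args) {
    System.out.println(archivePath("/hbase/oldWALs",
        "host1.example.com,16020,1495000000000.1495000001234"));
  }
}
```

With this layout, each regionserver's files stay well under the HDFS max-directory-items limit because they are spread across one subdirectory per server.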





[jira] [Updated] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-05-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-14247:
---
Status: In Progress  (was: Patch Available)

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory of oldWALs. In big clusters, because of long WAL TTLs or disabled 
> replications, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which can crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome~ Thanks





[jira] [Assigned] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-05-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-14247:
--

Assignee: Guanghao Zhang  (was: Liu Shaohui)

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory of oldWALs. In big clusters, because of long WAL TTLs or disabled 
> replications, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which can crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome~ Thanks





[jira] [Commented] (HBASE-17959) Canary timeout should be configurable on a per-table basis

2017-05-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014968#comment-16014968
 ] 

Andrew Purtell commented on HBASE-17959:


No strong opinion, thanks [~ckulkarni]. Please proceed. Thanks for considering 
the logging changes. 

> Canary timeout should be configurable on a per-table basis
> --
>
> Key: HBASE-17959
> URL: https://issues.apache.org/jira/browse/HBASE-17959
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Minor
> Attachments: HBASE-17959.patch
>
>
> The Canary read and write timeouts should be configurable on a per-table 
> basis, for cases where different tables have different latency SLAs. 





[jira] [Commented] (HBASE-17959) Canary timeout should be configurable on a per-table basis

2017-05-17 Thread Chinmay Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014961#comment-16014961
 ] 

Chinmay Kulkarni commented on HBASE-17959:
--

[~apurtell]
Thanks for your comments! I have a follow-up question:
 
I chose to use a _HashMap_ instead of a _ConcurrentHashMap_ since we don't 
really need hashmap bucket-level locking: we only concurrently modify the 
values corresponding to keys in the hashmap.
Using a _ConcurrentHashMap_ could lead to unnecessarily coarse locking, since 
multiple tables (the String keys) could land in the same bucket. Synchronizing 
access to the whole map itself would lead to even more lock contention.
What are your views on this?

I will change the logging message for the actual and configured timeouts.
Thanks.
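For illustration, the pattern described above looks roughly like the sketch below: the map is built once (one entry per table) and never structurally modified afterwards, so a plain HashMap is safe for concurrent reads, and only the thread-safe value holders are mutated. The class name and fields are hypothetical, not the actual Canary code:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class PerTableTimeouts {
  private final Map<String, AtomicLong> timeouts = new HashMap<>();

  public PerTableTimeouts(Iterable<String> tables, long defaultTimeoutMs) {
    for (String t : tables) {
      timeouts.put(t, new AtomicLong(defaultTimeoutMs));
    }
    // After construction the key set is frozen; readers never see a resize.
  }

  public long get(String table) {
    return timeouts.get(table).get();
  }

  public void set(String table, long timeoutMs) {
    timeouts.get(table).set(timeoutMs); // mutates the value, not the map
  }

  public static void main(String[] args) {
    PerTableTimeouts t = new PerTableTimeouts(Arrays.asList("t1", "t2"), 10000);
    t.set("t1", 30000);
    System.out.println(t.get("t1") + " " + t.get("t2")); // prints "30000 10000"
  }
}
```

This only holds as long as the key set really is fixed after construction; any structural modification after publication would require a concurrent map or external synchronization.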

> Canary timeout should be configurable on a per-table basis
> --
>
> Key: HBASE-17959
> URL: https://issues.apache.org/jira/browse/HBASE-17959
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Minor
> Attachments: HBASE-17959.patch
>
>
> The Canary read and write timeouts should be configurable on a per-table 
> basis, for cases where different tables have different latency SLAs. 





[jira] [Commented] (HBASE-18058) Zookeeper retry sleep time should have a up limit

2017-05-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014953#comment-16014953
 ] 

Andrew Purtell commented on HBASE-18058:


Mostly lgtm, but please add zookeeper.recovery.retry.maxsleeptime to 
hbase-default.xml and/or update the book to document the setting's 
availability and default. 


> Zookeeper retry sleep time should have a up limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058.patch
>
>
> Now, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it doesn't have an upper limit. This directly leads to a 
> very long recovery time after ZooKeeper goes down for a while and comes back.
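The fix described above amounts to capped exponential backoff: the sleep time doubles with each retry but is clamped to a configurable maximum. A minimal sketch, with illustrative names (the actual setting name and defaults belong to the patch, not this snippet):

```java
public class RetryBackoff {
  // base * 2^retryCount, clamped to maxMs (the configurable upper limit).
  public static long sleepMillis(int retryCount, long baseMs, long maxMs) {
    double raw = baseMs * Math.pow(2, retryCount);
    return (long) Math.min(raw, (double) maxMs);
  }

  public static void main(String[] args) {
    for (int i = 0; i < 8; i++) {
      System.out.println(sleepMillis(i, 1000, 60000));
    }
    // Grows 1000, 2000, 4000, ... and flattens at 60000.
  }
}
```

Without the clamp, a client that retried through an hour-long ZooKeeper outage would end up sleeping for enormous intervals even after ZooKeeper came back.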





[jira] [Commented] (HBASE-18054) log when we add/remove failed servers in client

2017-05-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014950#comment-16014950
 ] 

Andrew Purtell commented on HBASE-18054:


Is all logging that mentions the failed server list at DEBUG? If not then INFO 
would be better perhaps. 

> log when we add/remove failed servers in client
> ---
>
> Key: HBASE-18054
> URL: https://issues.apache.org/jira/browse/HBASE-18054
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Operability
>Affects Versions: 2.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0
>Reporter: Sean Busbey
>
> Currently we log if a server is in the failed server list when we go to 
> connect to it, but we don't log anything about when the server got into the 
> list.
> This means we have to search the log for errors involving the same server 
> name that (hopefully) managed to get into the log within 
> {{FAILED_SERVER_EXPIRY_KEY}} milliseconds earlier (default 2 seconds).
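The request above can be sketched as logging at the moment a server enters the failed-server list, so later "server is in failed list" messages can be correlated without searching for the original error. The class and method names below are illustrative, not the actual HBase client code:

```java
import java.util.concurrent.ConcurrentHashMap;

public class FailedServers {
  private final ConcurrentHashMap<String, Long> failedUntil = new ConcurrentHashMap<>();
  private final long expiryMs;

  public FailedServers(long expiryMs) {
    this.expiryMs = expiryMs;
  }

  public void addToFailedServers(String server, Throwable cause) {
    failedUntil.put(server, System.currentTimeMillis() + expiryMs);
    // The log line the JIRA asks for: note the server and why it was added.
    System.out.println("Added " + server + " to failed servers list, cause: " + cause);
  }

  public boolean isFailedServer(String server) {
    Long expiry = failedUntil.get(server);
    if (expiry == null) {
      return false;
    }
    if (System.currentTimeMillis() >= expiry) {
      failedUntil.remove(server); // expired; also a natural place to log removal
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    FailedServers fs = new FailedServers(2000);
    fs.addToFailedServers("host1:16020", new RuntimeException("connection refused"));
    System.out.println(fs.isFailedServer("host1:16020")); // prints "true"
  }
}
```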





[jira] [Created] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-17 Thread Appy (JIRA)
Appy created HBASE-18068:


 Summary: Fix flaky test TestAsyncSnapshotAdminApi
 Key: HBASE-18068
 URL: https://issues.apache.org/jira/browse/HBASE-18068
 Project: HBase
  Issue Type: Sub-task
Reporter: Appy
Assignee: Appy


Test failures
{noformat}
org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testRestoreSnapshot

org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: 
org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: Restore already in 
progress on the table=testRestoreSnapshot
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:854)
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:818)
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreOrCloneSnapshot(SnapshotManager.java:780)
 at org.apache.hadoop.hbase.master.HMaster$14.run(HMaster.java:2324)
 at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
 at org.apache.hadoop.hbase.master.HMaster.restoreSnapshot(HMaster.java:2320)
 at 
org.apache.hadoop.hbase.master.MasterRpcServices.restoreSnapshot(MasterRpcServices.java:1224)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)



org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testDeleteSnapshots

org.apache.hadoop.hbase.snapshot.SnapshotCreationException: 
org.apache.hadoop.hbase.snapshot.SnapshotCreationException: Rejected taking { 
ss=snapshotName1 table=testDeleteSnapshots type=FLUSH } because we are already 
running another snapshot on the same table { ss=snapshotName1 
table=testDeleteSnapshots type=FLUSH }
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.prepareToTakeSnapshot(SnapshotManager.java:440)
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:497)
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:598)
 at 
org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1299)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)



org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testListSnapshots

org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
'snapshotName2' doesn't exist on the filesystem
 at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:289)
 at 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:461)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)

{noformat}
https://builds.apache.org/job/HBASE-Flaky-Tests/16152/






[jira] [Commented] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi

2017-05-17 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014927#comment-16014927
 ] 

Appy commented on HBASE-18003:
--

Limiting this jira to just TestAsyncTableAdminApi. Creating a new jira for 
TestAsyncSnapshotAdminApi since I have a fix for it.

> Fix flaky test TestAsyncTableAdminApi
> -
>
> Key: HBASE-18003
> URL: https://issues.apache.org/jira/browse/HBASE-18003
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
>
> See 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html





[jira] [Updated] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi

2017-05-17 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-18003:
-
Summary: Fix flaky test TestAsyncTableAdminApi  (was: Fix flaky test 
TestAsyncTableAdminApi and TestAsyncSnapshotAdminApi)

> Fix flaky test TestAsyncTableAdminApi
> -
>
> Key: HBASE-18003
> URL: https://issues.apache.org/jira/browse/HBASE-18003
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
>
> See 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html





[jira] [Created] (HBASE-18067) Support a default converter for data read shell commands

2017-05-17 Thread Josh Elser (JIRA)
Josh Elser created HBASE-18067:
--

 Summary: Support a default converter for data read shell commands
 Key: HBASE-18067
 URL: https://issues.apache.org/jira/browse/HBASE-18067
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Minor
 Fix For: 2.0.0


The {{get}} and {{scan}} shell commands have the ability to specify some 
complicated syntax on how to encode the bytes read from HBase on a per-column 
basis. By default, bytes falling outside of a limited range of ASCII are just 
printed as hex.

It seems like the intent of these converters was to support conversion of 
certain numeric columns into a readable string (e.g. 1234).

However, if non-ASCII bytes are stored in the table (e.g. UTF-8 encoded 
bytes), we may want to treat all data we read as UTF-8 instead (e.g. if 
row+column+value are in Chinese). It would be onerous to require users to 
enumerate every column they're reading to parse as UTF-8 instead of the limited 
ASCII range. We can provide an option to apply a converter to all values 
retrieved by the command.
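For illustration, the difference between the default hex rendering and a UTF-8 converter looks roughly like the sketch below. This mimics the behavior described above; it is not the actual HBase shell code:

```java
import java.nio.charset.StandardCharsets;

public class ValueFormat {
  // Default-style formatting: printable ASCII as-is, everything else as \xNN.
  public static String toAsciiOrHex(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) {
      int v = b & 0xFF;
      if (v >= 0x20 && v < 0x7F) {
        sb.append((char) v);
      } else {
        sb.append(String.format("\\x%02X", v));
      }
    }
    return sb.toString();
  }

  // UTF-8 converter: decode the raw bytes as readable text instead.
  public static String toUtf8(byte[] bytes) {
    return new String(bytes, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    byte[] chinese = "中".getBytes(StandardCharsets.UTF_8);
    System.out.println(toAsciiOrHex(chinese)); // e.g. \xE4\xB8\xAD
    System.out.println(toUtf8(chinese));       // readable text
  }
}
```

A command-level default would simply route every row, column, and value through the second path instead of the first.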





[jira] [Commented] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-05-17 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014560#comment-16014560
 ] 

Ethan Wang commented on HBASE-14247:


[~zghaobac] Are you planning to work on this change? 

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory of oldWALs. In big clusters, because of long WAL TTLs or disabled 
> replications, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which can crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome~ Thanks





[jira] [Commented] (HBASE-18016) Implement abort for TruncateTableProcedure

2017-05-17 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014428#comment-16014428
 ] 

Umesh Agashe commented on HBASE-18016:
--

Thanks [~stack] for reviewing and committing the patch.

> Implement abort for TruncateTableProcedure
> --
>
> Key: HBASE-18016
> URL: https://issues.apache.org/jira/browse/HBASE-18016
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18016.master.001.patch
>
>
> TruncateTableProcedure can not be aborted as abort is not implemented.





[jira] [Commented] (HBASE-18018) Support abort for all procedures by default

2017-05-17 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014427#comment-16014427
 ] 

Umesh Agashe commented on HBASE-18018:
--

Thanks [~stack] for reviewing and committing the patch. I have added a release 
note.

> Support abort for all procedures by default
> ---
>
> Key: HBASE-18018
> URL: https://issues.apache.org/jira/browse/HBASE-18018
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18018.001.patch, HBASE-18018.master.001.patch, 
> HBASE-18018.master.002.patch, HBASE-18018.master.003.patch
>
>
> Changes the default behavior of StateMachineProcedure to support aborting 
> all procedures even if rollback is not supported. On abort, the procedure is 
> treated as failed and rollback is called, but for procedures which cannot be 
> rolled back, abort is currently ignored. This sometimes causes a procedure 
> to get stuck in the waiting state forever. The user should have an option to 
> abort any stuck procedure and clean up manually. Please refer to HBASE-18016 
> and the discussion there.





[jira] [Updated] (HBASE-18018) Support abort for all procedures by default

2017-05-17 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18018:
-
Release Note: The default behavior of the abort() method of the 
StateMachineProcedure class is changed to support aborting all procedures, 
irrespective of whether the procedure supports rollback or not.
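The essence of the change can be sketched as an abort method that always accepts the request and records it, rather than being a no-op for procedures without rollback. The class below is a simplified illustration, not the real proc-v2 classes:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ProcedureAbortSketch {
  private final AtomicBoolean aborted = new AtomicBoolean(false);

  // Old behavior (per the JIRA): procedures without rollback returned false,
  // so an abort request was ignored and the procedure could wait forever.
  // New default: every procedure accepts the abort request; the executor then
  // treats it as failed and attempts rollback.
  public boolean abort() {
    aborted.set(true);
    return true;
  }

  public boolean isAborted() {
    return aborted.get();
  }

  public static void main(String[] args) {
    ProcedureAbortSketch p = new ProcedureAbortSketch();
    System.out.println(p.abort() + " " + p.isAborted()); // prints "true true"
  }
}
```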

> Support abort for all procedures by default
> ---
>
> Key: HBASE-18018
> URL: https://issues.apache.org/jira/browse/HBASE-18018
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18018.001.patch, HBASE-18018.master.001.patch, 
> HBASE-18018.master.002.patch, HBASE-18018.master.003.patch
>
>
> Changes the default behavior of StateMachineProcedure to support aborting 
> all procedures even if rollback is not supported. On abort, the procedure is 
> treated as failed and rollback is called, but for procedures which cannot be 
> rolled back, abort is currently ignored. This sometimes causes a procedure 
> to get stuck in the waiting state forever. The user should have an option to 
> abort any stuck procedure and clean up manually. Please refer to HBASE-18016 
> and the discussion there.





[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions

2017-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014384#comment-16014384
 ] 

Hadoop QA commented on HBASE-11013:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 37s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 32m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
37s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 48s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 32m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 19s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 185m 13s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 47s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 58s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
46s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 376m 9s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Commented] (HBASE-18001) Extend the "count" shell command to support specified conditions

2017-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014283#comment-16014283
 ] 

Hadoop QA commented on HBASE-18001:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 11s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868543/HBASE-18001-v3.patch |
| JIRA Issue | HBASE-18001 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux a3ad9ca0b078 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 62d7323 |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6825/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6825/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Extend the "count" shell command to support specified conditions
> 
>
> Key: HBASE-18001
> URL: https://issues.apache.org/jira/browse/HBASE-18001
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-18001-v1.patch, HBASE-18001-v2.patch, 
> HBASE-18001-v3.patch
>
>
> The shell command "count" can only count the total number of rows in a table;
> it cannot count only the rows that match specified conditions.
> Can we allow users to specify conditions, as the "scan" command does?
> That way, we could count the number of rows under any conditions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014277#comment-16014277
 ] 

Hadoop QA commented on HBASE-18019:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 36s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.TestZooKeeper |
|   | org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable |
|   | org.apache.hadoop.hbase.TestAcidGuarantees |
|   | org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868522/HBASE-18019.v1.patch |
| JIRA Issue | HBASE-18019 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 80f8af90cf83 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 62d7323 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6824/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6824/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6824/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6824/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HBASE-18001) Extend the "count" shell command to support specified conditions

2017-05-17 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014162#comment-16014162
 ] 

Guangxu Cheng commented on HBASE-18001:
---

v3: Add a unit test.

> Extend the "count" shell command to support specified conditions
> 
>
> Key: HBASE-18001
> URL: https://issues.apache.org/jira/browse/HBASE-18001
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-18001-v1.patch, HBASE-18001-v2.patch, 
> HBASE-18001-v3.patch
>
>
> The shell command "count" can only count the total number of rows in a table;
> it cannot count only the rows that match specified conditions.
> Can we allow users to specify conditions, as the "scan" command does?
> That way, we could count the number of rows under any conditions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18001) Extend the "count" shell command to support specified conditions

2017-05-17 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-18001:
--
Attachment: HBASE-18001-v3.patch

> Extend the "count" shell command to support specified conditions
> 
>
> Key: HBASE-18001
> URL: https://issues.apache.org/jira/browse/HBASE-18001
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-18001-v1.patch, HBASE-18001-v2.patch, 
> HBASE-18001-v3.patch
>
>
> The shell command "count" can only count the total number of rows in a table;
> it cannot count only the rows that match specified conditions.
> Can we allow users to specify conditions, as the "scan" command does?
> That way, we could count the number of rows under any conditions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014129#comment-16014129
 ] 

Ted Yu commented on HBASE-18019:


Suggestion on the subject of this JIRA: consider using "close" instead of 
"clear", since we close all the memstore scanners.

> Clear redundant memstore scanners
> -
>
> Key: HBASE-18019
> URL: https://issues.apache.org/jira/browse/HBASE-18019
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18019.v0.patch, HBASE-18019.v1.patch
>
>
> HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now 
> returns multiple KeyValueScanners, covering the active segment, snapshot, and 
> pipeline. But StoreScanner only removes one memstore scanner when refreshing 
> its current scanners.
> {code}
> for (int i = 0; i < currentScanners.size(); i++) {
>   if (!currentScanners.get(i).isFileScanner()) {
> currentScanners.remove(i);
> break;
>   }
> }
> {code}
> The older scanners kept in the StoreScanner will hinder GC from releasing 
> memory and lead to multiple scans on the same data.
>  
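The loop quoted above stops after removing the first non-file scanner, so when there are several memstore scanners the rest stay in the list. A minimal Java sketch of this behavior, using a plain list of booleans as a stand-in for `isFileScanner()` rather than the real HBase scanner classes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ScannerRemovalSketch {
    // true = file scanner, false = memstore scanner (simplified stand-in).
    static List<Boolean> buggyRemoval(List<Boolean> scanners) {
        List<Boolean> current = new ArrayList<>(scanners);
        // Mirrors the quoted loop: removes only the FIRST memstore scanner.
        for (int i = 0; i < current.size(); i++) {
            if (!current.get(i)) {
                current.remove(i);
                break;
            }
        }
        return current;
    }

    static List<Boolean> fixedRemoval(List<Boolean> scanners) {
        List<Boolean> current = new ArrayList<>(scanners);
        current.removeIf(isFile -> !isFile); // drop ALL memstore scanners
        return current;
    }

    public static void main(String[] args) {
        // Two file scanners plus three memstore scanners (active, snapshot, pipeline).
        List<Boolean> scanners = Arrays.asList(true, false, false, true, false);
        System.out.println(buggyRemoval(scanners)); // [true, false, true, false] - stale scanners remain
        System.out.println(fixedRemoval(scanners)); // [true, true]
    }
}
```

With three memstore scanners in play, the single-removal loop leaves two of them behind, which is exactly the redundancy this issue describes.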



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) CheckAndMutate will encouter NPE if qualifier to check is null

2017-05-17 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014084#comment-16014084
 ] 

Chia-Ping Tsai commented on HBASE-15616:


bq. The increment should support null qualifier too
+1. I had raised an issue, HBASE-16346, about the null qualifier in Increment, 
but I closed it as "Won't Fix"... I would like to make all ops 
(Put/Get/Scan/Append/Increment) get rid of this weird restriction.

> CheckAndMutate will encouter NPE if qualifier to check is null
> --
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch
>
>
> If qualifier to check is null, the checkAndMutate/checkAndPut/checkAndDelete 
> will encounter NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} will throw an NPE if the 
> wrapped byte array is null. It is possible to invoke {{put}} and 
> {{checkAndMutate}} on the same column, and because a null qualifier is 
> allowed for {{Put}}, users may be confused if a null qualifier is not allowed 
> for {{checkAndMutate}}. We could also convert a null qualifier to an empty 
> byte array for {{checkAndMutate}} on the client side. Discussions and 
> suggestions are welcomed.
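The client-side conversion suggested above can be sketched as a small helper. This is a hypothetical illustration, not the actual patch code; the helper name is made up:

```java
public class QualifierNormalizer {
    /**
     * Protobuf's LiteralByteString.size() throws an NPE when the wrapped byte
     * array is null, so map a null qualifier to a zero-length array, which is
     * effectively what Put stores for a null qualifier anyway.
     */
    static byte[] normalizeQualifier(byte[] qualifier) {
        return qualifier == null ? new byte[0] : qualifier;
    }

    public static void main(String[] args) {
        byte[] q = normalizeQualifier(null);
        System.out.println(q.length); // 0 - safe to serialize
        byte[] passthrough = normalizeQualifier(new byte[] {1, 2});
        System.out.println(passthrough.length); // 2 - non-null input unchanged
    }
}
```

Applying this before the qualifier reaches the protobuf layer keeps checkAndMutate consistent with Put's handling of a null qualifier.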

[jira] [Updated] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18019:
---
Status: Patch Available  (was: Open)

> Clear redundant memstore scanners
> -
>
> Key: HBASE-18019
> URL: https://issues.apache.org/jira/browse/HBASE-18019
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18019.v0.patch, HBASE-18019.v1.patch
>
>
> HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now 
> returns multiple KeyValueScanners, covering the active segment, snapshot, and 
> pipeline. But StoreScanner only removes one memstore scanner when refreshing 
> its current scanners.
> {code}
> for (int i = 0; i < currentScanners.size(); i++) {
>   if (!currentScanners.get(i).isFileScanner()) {
> currentScanners.remove(i);
> break;
>   }
> }
> {code}
> The older scanners kept in the StoreScanner will hinder GC from releasing 
> memory and lead to multiple scans on the same data.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014036#comment-16014036
 ] 

Chia-Ping Tsai commented on HBASE-18019:


Will commit it if the QA is fine

> Clear redundant memstore scanners
> -
>
> Key: HBASE-18019
> URL: https://issues.apache.org/jira/browse/HBASE-18019
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18019.v0.patch, HBASE-18019.v1.patch
>
>
> HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now 
> returns multiple KeyValueScanners, covering the active segment, snapshot, and 
> pipeline. But StoreScanner only removes one memstore scanner when refreshing 
> its current scanners.
> {code}
> for (int i = 0; i < currentScanners.size(); i++) {
>   if (!currentScanners.get(i).isFileScanner()) {
> currentScanners.remove(i);
> break;
>   }
> }
> {code}
> The older scanners kept in the StoreScanner will hinder GC from releasing 
> memory and lead to multiple scans on the same data.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18019:
---
Attachment: HBASE-18019.v1.patch

v1 --
# change the initial capacity of memStoreScannersAfterFlush from 2 to 3

> Clear redundant memstore scanners
> -
>
> Key: HBASE-18019
> URL: https://issues.apache.org/jira/browse/HBASE-18019
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18019.v0.patch, HBASE-18019.v1.patch
>
>
> HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now 
> returns multiple KeyValueScanners, covering the active segment, snapshot, and 
> pipeline. But StoreScanner only removes one memstore scanner when refreshing 
> its current scanners.
> {code}
> for (int i = 0; i < currentScanners.size(); i++) {
>   if (!currentScanners.get(i).isFileScanner()) {
> currentScanners.remove(i);
> break;
>   }
> }
> {code}
> The older scanners kept in the StoreScanner will hinder GC from releasing 
> memory and lead to multiple scans on the same data.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18019) Clear redundant memstore scanners

2017-05-17 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18019:
---
Status: Open  (was: Patch Available)

> Clear redundant memstore scanners
> -
>
> Key: HBASE-18019
> URL: https://issues.apache.org/jira/browse/HBASE-18019
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-18019.v0.patch
>
>
> HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now 
> returns multiple KeyValueScanners, covering the active segment, snapshot, and 
> pipeline. But StoreScanner only removes one memstore scanner when refreshing 
> its current scanners.
> {code}
> for (int i = 0; i < currentScanners.size(); i++) {
>   if (!currentScanners.get(i).isFileScanner()) {
> currentScanners.remove(i);
> break;
>   }
> }
> {code}
> The older scanners kept in the StoreScanner will hinder GC from releasing 
> memory and lead to multiple scans on the same data.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18059) The scanner order for memstore scanners are wrong

2017-05-17 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013950#comment-16013950
 ] 

Jingyun Tian commented on HBASE-18059:
--

I will try to fix this and add a UT to it. 

> The scanner order for memstore scanners are wrong
> -
>
> Key: HBASE-18059
> URL: https://issues.apache.org/jira/browse/HBASE-18059
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
> Fix For: 2.0.0
>
>
> This is comments for KeyValueScanner.getScannerOrder
> {code:title=KeyValueScanner.java}
>   /**
>* Get the order of this KeyValueScanner. This is only relevant for 
> StoreFileScanners and
>* MemStoreScanners (other scanners simply return 0). This is required for 
> comparing multiple
>* files to find out which one has the latest data. StoreFileScanners are 
> ordered from 0
>* (oldest) to newest in increasing order. MemStoreScanner gets LONG.max 
> since it always
>* contains freshest data.
>*/
>   long getScannerOrder();
> {code}
> Now that we may have multiple memstore scanners, I think the right way to 
> assign scanner order for memstore scanners is to count down from 
> Long.MAX_VALUE in decreasing order.
> But in CompactingMemStore and DefaultMemStore, the scanner order for memstore 
> scanners also starts from 0, which gets mixed up with the StoreFileScanners.
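The proposed convention can be sketched with two order-assignment functions. This is an illustrative model with made-up names, not the HBase implementation: store file scanners take orders 0, 1, 2, ... (larger means newer), while memstore scanners count down from Long.MAX_VALUE so that every memstore scanner compares as newer than any file scanner:

```java
public class ScannerOrderSketch {
    // File scanners: 0 (oldest) upward, one per store file.
    static long fileScannerOrder(int index) {
        return index;
    }

    // Memstore scanners: Long.MAX_VALUE downward; index 0 = freshest (active segment).
    static long memstoreScannerOrder(int index) {
        return Long.MAX_VALUE - index;
    }

    public static void main(String[] args) {
        // Even the deepest pipeline segment must outrank every store file,
        // no matter how many files the store has accumulated.
        long oldestMem = memstoreScannerOrder(2);
        long newestFile = fileScannerOrder(1000);
        System.out.println(oldestMem > newestFile); // true
    }
}
```

Starting memstore scanner orders at 0 instead, as the description notes, would collide with the store file range and break the "higher order wins" comparison.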



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-18059) The scanner order for memstore scanners are wrong

2017-05-17 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian reassigned HBASE-18059:


Assignee: Jingyun Tian

> The scanner order for memstore scanners are wrong
> -
>
> Key: HBASE-18059
> URL: https://issues.apache.org/jira/browse/HBASE-18059
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
> Fix For: 2.0.0
>
>
> This is comments for KeyValueScanner.getScannerOrder
> {code:title=KeyValueScanner.java}
>   /**
>* Get the order of this KeyValueScanner. This is only relevant for 
> StoreFileScanners and
>* MemStoreScanners (other scanners simply return 0). This is required for 
> comparing multiple
>* files to find out which one has the latest data. StoreFileScanners are 
> ordered from 0
>* (oldest) to newest in increasing order. MemStoreScanner gets LONG.max 
> since it always
>* contains freshest data.
>*/
>   long getScannerOrder();
> {code}
> Now that we may have multiple memstore scanners, I think the right way to 
> assign scanner order for memstore scanners is to count down from 
> Long.MAX_VALUE in decreasing order.
> But in CompactingMemStore and DefaultMemStore, the scanner order for memstore 
> scanners also starts from 0, which gets mixed up with the StoreFileScanners.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18056) Change CompactingMemStore in BASIC mode to merge multiple segments in pipeline

2017-05-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013855#comment-16013855
 ] 

Anoop Sam John commented on HBASE-18056:


bq. It reduces GC because the segments memory is garbage collected gradually 
and not all together in time of flush. 
Sorry, I am not getting this point. As per the proposed change, two or more 
segments in the pipeline get compacted together into a larger one, right? So 
two or more segments go away, but a new, bigger one is still there, right? 
What memory is getting GCed here? I may be missing something.

> Change CompactingMemStore in BASIC mode to merge multiple segments in pipeline
> --
>
> Key: HBASE-18056
> URL: https://issues.apache.org/jira/browse/HBASE-18056
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>
> Under HBASE-16417 it was decided that CompactingMemStore in BASIC mode should 
> merge multiple ImmutableSegments in CompactionPipeline. Basic+Merge actually 
> demonstrated reduction in GC, alongside improvement in other metrics.
> However, the limit on the number of segments in pipeline is still set to 30. 
> Under this JIRA it should be changed to 1, as it was tested under HBASE-16417.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions

2017-05-17 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-11013:
-
Attachment: HBASE-11013.v5.patch

Patch v5 for master branch:
1. Add javadoc for ShadedAccessControlUtil & the shaded AccessControl.proto.
2. Move the static restoreSnapshotAcl() method to the RestoreSnapshotHelper class.
3. Introduce new enum states CLONE_SNAPSHOT_RESTORE_ACL/RESTORE_SNAPSHOT_RESTORE_ACL 
for the clone_snapshot/restore_snapshot procedure state machines.

> Clone Snapshots on Secure Cluster Should provide option to apply Retained 
> User Permissions
> --
>
> Key: HBASE-11013
> URL: https://issues.apache.org/jira/browse/HBASE-11013
> Project: HBase
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Ted Yu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-11013.branch-1.1.v1.patch, 
> HBASE-11013.branch-1.2.v1.patch, HBASE-11013.branch-1.3.v1.patch, 
> HBASE-11013.branch-1.v1.patch, HBASE-11013.master.addendum.patch, 
> HBASE-11013.v1.patch, HBASE-11013.v2.patch, HBASE-11013.v3.patch, 
> HBASE-11013.v4.patch, HBASE-11013.v5.patch
>
>
> Currently,
> {code}
> sudo su - test_user
> create 't1', 'f1'
> sudo su - hbase
> snapshot 't1', 'snap_one'
> clone_snapshot 'snap_one', 't2'
> {code}
> In this scenario, the user test_user would not have permissions for the 
> cloned table t2.
> We need to add an improvement such that the permissions of the original 
> table are recorded in the snapshot metadata and an option is provided for 
> applying them to the new table as part of the clone process.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18056) Change CompactingMemStore in BASIC mode to merge multiple segments in pipeline

2017-05-17 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013797#comment-16013797
 ] 

Anastasia Braginsky commented on HBASE-18056:
-

[~anoop.hbase], we have already done performance tests and we saw that the 
merge makes the situation better. What is the JIRA number where you explain 
the problems you see? Why do you think the merge will make the situation worse 
there? The merge is of CellArrayMaps only and is not related to MSLAB at all.

bq. Reduction in GC when there are duplicated data right? PE kind of work load 
where each of the data is diff key, how come this merge will reduce GC?

No, without duplicates. It reduces GC because the segments' memory is garbage 
collected gradually and not all together at flush time. But the performance 
numbers speak for themselves.

> Change CompactingMemStore in BASIC mode to merge multiple segments in pipeline
> --
>
> Key: HBASE-18056
> URL: https://issues.apache.org/jira/browse/HBASE-18056
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>
> Under HBASE-16417 it was decided that CompactingMemStore in BASIC mode should 
> merge multiple ImmutableSegments in CompactionPipeline. Basic+Merge actually 
> demonstrated reduction in GC, alongside improvement in other metrics.
> However, the limit on the number of segments in pipeline is still set to 30. 
> Under this JIRA it should be changed to 1, as it was tested under HBASE-16417.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) CheckAndMutate will encouter NPE if qualifier to check is null

2017-05-17 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013768#comment-16013768
 ] 

Guanghao Zhang commented on HBASE-15616:


Yes. The increment should support a null qualifier too. [~cuijianwei] Do you 
want to prepare a patch for this? If you don't have time, I can help do it. :)

> CheckAndMutate will encouter NPE if qualifier to check is null
> --
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch
>
>
> If qualifier to check is null, the checkAndMutate/checkAndPut/checkAndDelete 
> will encounter NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} throws an NPE if the wrapped byte 
> array is null. It is possible to invoke {{put}} and {{checkAndMutate}} on the 
> same column; because a null qualifier is allowed for {{Put}}, users may be 
> confused if a null qualifier is not allowed for {{checkAndMutate}}. We could also 
> convert a null qualifier to an empty byte array for {{checkAndMutate}} on the 
> client side. Discussions and suggestions are welcome. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17850) Backup system restore /repair utility

2017-05-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013749#comment-16013749
 ] 

Ted Yu commented on HBASE-17850:


From the QA run, TestFullBackupWithFailures was shown taking 26 seconds.
Considering we now test all failure stages, the duration seems short.

Please run the new tests locally and post the results here.

> Backup system restore /repair utility
> -
>
> Key: HBASE-17850
> URL: https://issues.apache.org/jira/browse/HBASE-17850
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-17850-v2.patch, HBASE-17850-v3.patch
>
>
> The backup repair tool restores the integrity of the backup system table and 
> removes artifacts of a failed backup session from the file system(s).
> This is a command-line tool. To run it:
> {code}
> hbase backup repair
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) CheckAndMutate will encouter NPE if qualifier to check is null

2017-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013743#comment-16013743
 ] 

Hadoop QA commented on HBASE-15616:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} 
| {color:red} HBASE-15616 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12797905/HBASE-15616-v2.patch |
| JIRA Issue | HBASE-15616 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6822/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> CheckAndMutate will encouter NPE if qualifier to check is null
> --
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch
>
>
> If qualifier to check is null, the checkAndMutate/checkAndPut/checkAndDelete 
> will encounter NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}

[jira] [Commented] (HBASE-18038) Rename StoreFile to HStoreFile and add a StoreFile interface for CP

2017-05-17 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013730#comment-16013730
 ] 

Duo Zhang commented on HBASE-18038:
---

Any comments?
Thanks.

> Rename StoreFile to HStoreFile and add a StoreFile interface for CP
> ---
>
> Key: HBASE-18038
> URL: https://issues.apache.org/jira/browse/HBASE-18038
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-18038.patch, HBASE-18038-v1.patch, 
> HBASE-18038-v1.patch, HBASE-18038-v2.patch, HBASE-18038-v3.patch, 
> HBASE-18038-v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) CheckAndMutate will encouter NPE if qualifier to check is null

2017-05-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013727#comment-16013727
 ] 

Anoop Sam John commented on HBASE-15616:


So here the fix is only for CheckAndMutate. Still, in the comments we mention 
increment, where there is one way to pass null as the qualifier that will result 
in this issue; for the other two ways, the client side itself throws an 
exception. So would it be good to change the behavior of increment (and others) 
as well, to bring them in sync with the Put APIs? As such, +1 for this issue.

> CheckAndMutate will encouter NPE if qualifier to check is null
> --
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch
>
>
> If qualifier to check is null, the checkAndMutate/checkAndPut/checkAndDelete 
> will encounter NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}

[jira] [Commented] (HBASE-18053) AsyncTableResultScanner will hang when scan wrong column family

2017-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013713#comment-16013713
 ] 

Hudson commented on HBASE-18053:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3025 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3025/])
HBASE-18053 AsyncTableResultScanner will hang when scan wrong column (zghao: 
rev 62d73230234a524d437b84f3446944fd183cc2cb)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/AbstractTestAsyncTableScan.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncTableResultScanner.java


> AsyncTableResultScanner will hang when scan wrong column family
> ---
>
> Key: HBASE-18053
> URL: https://issues.apache.org/jira/browse/HBASE-18053
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18053.patch, HBASE-18053-v1.patch
>
>
> AsyncTableResultScanner did not call notify(), so next() will hang on wait(). 
> It is easy to fix, and I will add a unit test for this.
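The hang can be illustrated with a toy model (hypothetical class, not the actual AsyncTableResultScanner code): a consumer blocks in wait() until results or an error arrive, so every state change, including the error path, must call notifyAll(). The bug was that the error path did not.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy scanner modeling the wait/notify contract the fix restores.
public class ToyScanner {
    private final Queue<String> results = new ArrayDeque<>();
    private Throwable error;

    public synchronized void onNext(String row) {
        results.add(row);
        notifyAll();                 // wake any thread blocked in next()
    }

    public synchronized void onError(Throwable t) {
        error = t;
        notifyAll();                 // the missing call caused the hang
    }

    public synchronized String next() {
        while (results.isEmpty() && error == null) {
            try {
                wait();              // blocks forever without notifyAll()
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        if (error != null) {
            throw new RuntimeException(error);
        }
        return results.poll();
    }
}
```

With the notifyAll() in onError, a scan against a wrong column family surfaces as an exception from next() instead of an indefinite hang.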



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18016) Implement abort for TruncateTableProcedure

2017-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013712#comment-16013712
 ] 

Hudson commented on HBASE-18016:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3025 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3025/])
HBASE-18016 Changes to inherit default behavior of abort from (stack: rev 
c1b45a2c45f3bfeb2ec43e395cc2722975bfe39c)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java


> Implement abort for TruncateTableProcedure
> --
>
> Key: HBASE-18016
> URL: https://issues.apache.org/jira/browse/HBASE-18016
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18016.master.001.patch
>
>
> TruncateTableProcedure can not be aborted as abort is not implemented.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18018) Support abort for all procedures by default

2017-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013711#comment-16013711
 ] 

Hudson commented on HBASE-18018:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3025 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3025/])
HBASE-18018 Changes to support abort for all procedures by default (stack: rev 
5eb1b7b96c6b870959669636057aa16e93646fa6)
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestStateMachineProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java


> Support abort for all procedures by default
> ---
>
> Key: HBASE-18018
> URL: https://issues.apache.org/jira/browse/HBASE-18018
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18018.001.patch, HBASE-18018.master.001.patch, 
> HBASE-18018.master.002.patch, HBASE-18018.master.003.patch
>
>
> Changes the default behavior of StateMachineProcedure to support aborting all 
> procedures even if rollback is not supported. On abort, procedure is treated 
> as failed and rollback is called but for procedures which cannot be rolled 
> back abort is ignored currently. This sometime causes procedure to get stuck 
> in waiting state forever. User should have an option to abort any stuck 
> procedure and clean up manually. Please refer to HBASE-18016 and discussion 
> there.
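The behavior change described above can be sketched with a toy model (hypothetical class, not the hbase-procedure API): an abort request marks the procedure as failed and drives rollback, even when the procedure has no custom rollback logic, instead of being silently ignored.

```java
// Toy procedure modeling "abort always honored" semantics.
public class ToyProcedure {
    enum State { RUNNING, ROLLING_BACK, FAILED, SUCCESS }

    private State state = State.RUNNING;
    private volatile boolean abortRequested;

    void abort() {
        abortRequested = true;       // user asks to abort a stuck procedure
    }

    State executeStep() {
        if (abortRequested && state == State.RUNNING) {
            state = State.ROLLING_BACK;  // abort is honored, never ignored
            rollback();
            state = State.FAILED;        // surfaces as a failed procedure
        } else if (state == State.RUNNING) {
            state = State.SUCCESS;
        }
        return state;
    }

    void rollback() {
        // default no-op rollback, so procedures without rollback support
        // can still be aborted and cleaned up manually
    }
}
```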



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2017-05-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013702#comment-16013702
 ] 

Ted Yu commented on HBASE-17992:


If possible, please add a test showing that the issue caused by 
TimeoutException has been fixed.

> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>
> The problem is that when the snapshot hits a TimeoutException or another 
> exception, /hbase/.hbase-snapshot/tmp is not deleted correctly, which causes 
> the cleanerChore to fail to complete the archive correctly.
> Modifying the configuration parameter (hbase.snapshot.master.timeout.millis = 
> 60) only reduces the probability of the problem occurring.
> So the solution is: when worker threads hit exceptions or TimeoutExceptions, 
> the main thread must wait until all the tasks are finished or canceled before 
> it clears /hbase/.hbase-snapshot/tmp/snapshotName. Otherwise a task is likely 
> to still write /hbase/.hbase-snapshot/tmp/snapshotName/region-manifest.
> The problem exists in both disabledTableSnapshot and enabledTableSnapshot; 
> because I'm currently using disabledTableSnapshot, I provide the patch for 
> disabledTableSnapshot.
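The required ordering can be sketched as follows. This is a hedged illustration with hypothetical names (not the actual snapshot code): on timeout, cancel the per-region tasks and wait for the pool to drain before deleting the tmp directory, so no straggler can recreate the region manifest afterwards.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: wait for all snapshot tasks before clearing the tmp directory.
public class SnapshotCleanup {
    static void runWithCleanup(ExecutorService pool,
                               List<Callable<Void>> regionTasks,
                               long timeoutMs,
                               Runnable deleteTmpDir) {
        try {
            // invokeAll cancels tasks that do not finish within the timeout
            pool.invokeAll(regionTasks, timeoutMs, TimeUnit.MILLISECONDS);
            pool.shutdown();
            // wait until every task thread has actually stopped writing
            pool.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // only now is it safe: no task can still write region-manifest
        deleteTmpDir.run();
    }
}
```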



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2017-05-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013699#comment-16013699
 ] 

Ted Yu commented on HBASE-17992:


Please check your formatter - indentation should be two spaces.

Can you create review board request ?

Please fix unit test failure.

Thanks

> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) CheckAndMutate will encouter NPE if qualifier to check is null

2017-05-17 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013693#comment-16013693
 ] 

Guanghao Zhang commented on HBASE-15616:


bq. null qualifier is allowed for Put/Get/Scan/Append, users may have used null 
qualifier in these operations, so also need to allow null qualifier for 
checkAndMutate and increment
+1 for this. I met this problem recently. For the put operation, if you pass a 
null qualifier, it sets a new byte[0] in the protobuf. So we should keep the 
same behavior for the checkAndMutate operation, too. [~anoop.hbase] [~stack] 
Any more concerns? Thanks.

> CheckAndMutate will encouter NPE if qualifier to check is null
> --
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch
>
>
> If qualifier to check is null, the checkAndMutate/checkAndPut/checkAndDelete 
> will encounter NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}

[jira] [Commented] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2017-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013691#comment-16013691
 ] 

Hadoop QA commented on HBASE-17992:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 43 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch 18 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 21s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
11s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.procedure.TestProcedure |
|   | hadoop.hbase.procedure.TestProcedureCoordinator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868476/hbase-17992-master.patch
 |
| JIRA Issue | HBASE-17992 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 7198c37b4d55 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 62d7323 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6821/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6821/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6821/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6821/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-18038) Rename StoreFile to HStoreFile and add a StoreFile interface for CP

2017-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013655#comment-16013655
 ] 

Hadoop QA commented on HBASE-18038:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 38 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
56m 52s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 203m 4s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 287m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 
|
|   | hadoop.hbase.client.TestAsyncBalancerAdminApi |
|   | hadoop.hbase.client.TestAsyncTableAdminApi |
|   | hadoop.hbase.client.TestAsyncSnapshotAdminApi |
| Timed out junit tests | 
org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868450/HBASE-18038-v3.patch |
| JIRA Issue | HBASE-18038 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux eb7a39525f48 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c1b45a2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6820/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6820/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6820/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6820/console |

[jira] [Commented] (HBASE-16757) Integrate functionality currently done up as Coprocessor Endpoints into core.

2017-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013642#comment-16013642
 ] 

Jerry He commented on HBASE-16757:
--

Agree.
I misunderstood your previous comment.

> Integrate functionality currently done up as Coprocessor Endpoints into core.
> -
>
> Key: HBASE-16757
> URL: https://issues.apache.org/jira/browse/HBASE-16757
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>
> As part of the work over in HBASE-15638, "Shade Protobufs", I could not help 
> but notice that of the seven or eight Coprocessor Endpoints bundled with 
> hbase, half should have been converted to core a long time ago. In fact, 
> some of these core CPEPs are no longer viable as CPEPs, if they ever were, 
> given how intertwined with core they are.
> For example, MultiRowMutation, the nice CPEP that allows us to do cross-row 
> transactions, used natively when amending hbase:meta, has much of its 
> facility baked into core, without which it could not run. In an exercise, I 
> was able to convert this one over without having to alter public APIs in 
> Table or Admin.
> Auth, as pointed out by [~mbertozzi], is not a Coprocessor Endpoint, though 
> it is cast as one invoked natively by RPC.
> VisibilityLabels is a CPEP, but core types -- Query and Mutation -- actually 
> depend on VisibilityLabel-related classes.
> SecureBulkLoad is not in any violation being a CPEP, since it was provided 
> to add API ahead of time, is properly deprecated, and is already integrated 
> into core, but I mention it here for completeness' sake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2017-05-17 Thread Bo Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013630#comment-16013630
 ] 

Bo Cui commented on HBASE-17992:


[~hadoopqa] 
I have updated the patches based on GitHub, including hbase-0.98, hbase-1.3, 
and hbase-master.

> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>
> The problem is that when a snapshot hits a TimeoutException or another 
> exception, /hbase/.hbase-snapshot/tmp is not deleted correctly, which causes 
> the cleanerChore to fail to complete the archive correctly.
> Modifying the configuration parameter (hbase.snapshot.master.timeout.millis = 
> 60) only reduces the probability of the problem occurring.
> So the solution is: on multi-threaded exceptions or TimeoutExceptions, the 
> main thread must wait until all the tasks are finished or canceled before it 
> clears /hbase/.hbase-snapshot/tmp/snapshotName. Otherwise a task is likely to 
> still write /hbase/.hbase-snapshot/tmp/snapshotName/region-manifest.
> The problem exists in both disabledTableSnapshot and enabledTableSnapshot; 
> because I'm currently using disabledTableSnapshot, I provide the patch for 
> disabledTableSnapshot.
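The ordering the description argues for (cancel the outstanding snapshot tasks, wait for them to actually stop, and only then remove the tmp directory) can be sketched with a plain executor. This is an illustrative sketch, not the attached patch; the class and variable names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: after a snapshot timeout, cancel the workers and WAIT
// for them to stop before touching the tmp directory.
public class SnapshotCleanupSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<?>> tasks = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            // Stands in for a worker writing a region manifest under
            // /hbase/.hbase-snapshot/tmp/<snapshotName>/.
            tasks.add(pool.submit(() -> {
                try {
                    Thread.sleep(10_000);
                } catch (InterruptedException e) {
                    // Cancelled: stop writing and exit promptly.
                }
            }));
        }

        // The snapshot timed out: cancel every outstanding task...
        for (Future<?> task : tasks) {
            task.cancel(true);
        }
        pool.shutdown();
        // ...and block until they have really finished. Deleting the tmp
        // directory before this point races with workers still writing to it.
        boolean quiet = pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(quiet ? "safe to delete tmp" : "writers still running");
    }
}
```

The key point is that `cancel(true)` only requests interruption; `awaitTermination` is what guarantees no straggler recreates files under tmp after the cleanup deletes it.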





[jira] [Commented] (HBASE-16757) Integrate functionality currently done up as Coprocessor Endpoints into core.

2017-05-17 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013626#comment-16013626
 ] 

Duo Zhang commented on HBASE-16757:
---

It would be great if they could be done at the same time. But for things like 
AccessController, the CP side is much more complicated than the EP side, so I 
think we can do the EP change first if we do not have enough time for the CP 
change.

> Integrate functionality currently done up as Coprocessor Endpoints into core.
> -
>
> Key: HBASE-16757
> URL: https://issues.apache.org/jira/browse/HBASE-16757
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>
> As part of the work over in HBASE-15638, "Shade Protobufs", I could not help 
> but notice that of the seven or eight Coprocessor Endpoints bundled with 
> hbase, half should have been converted to core a long time ago. In fact, 
> some of these core CPEPs are no longer viable as CPEPs, if they ever were, 
> given how intertwined with core they are.
> For example, MultiRowMutation, the nice CPEP that allows us to do cross-row 
> transactions, used natively when amending hbase:meta, has much of its 
> facility baked into core, without which it could not run. In an exercise, I 
> was able to convert this one over without having to alter public APIs in 
> Table or Admin.
> Auth, as pointed out by [~mbertozzi], is not a Coprocessor Endpoint, though 
> it is cast as one invoked natively by RPC.
> VisibilityLabels is a CPEP, but core types -- Query and Mutation -- actually 
> depend on VisibilityLabel-related classes.
> SecureBulkLoad is not in any violation being a CPEP, since it was provided 
> to add API ahead of time, is properly deprecated, and is already integrated 
> into core, but I mention it here for completeness' sake.





[jira] [Updated] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2017-05-17 Thread Bo Cui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bo Cui updated HBASE-17992:
---
Attachment: hbase-17992-master.patch
hbase-17992-1.3.patch
hbase-17992-0.98.patch

> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>
> The problem is that when a snapshot hits a TimeoutException or another 
> exception, /hbase/.hbase-snapshot/tmp is not deleted correctly, which causes 
> the cleanerChore to fail to complete the archive correctly.
> Modifying the configuration parameter (hbase.snapshot.master.timeout.millis = 
> 60) only reduces the probability of the problem occurring.
> So the solution is: on multi-threaded exceptions or TimeoutExceptions, the 
> main thread must wait until all the tasks are finished or canceled before it 
> clears /hbase/.hbase-snapshot/tmp/snapshotName. Otherwise a task is likely to 
> still write /hbase/.hbase-snapshot/tmp/snapshotName/region-manifest.
> The problem exists in both disabledTableSnapshot and enabledTableSnapshot; 
> because I'm currently using disabledTableSnapshot, I provide the patch for 
> disabledTableSnapshot.





[jira] [Commented] (HBASE-16757) Integrate functionality currently done up as Coprocessor Endpoints into core.

2017-05-17 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013625#comment-16013625
 ] 

Jerry He commented on HBASE-16757:
--

Do you mean providing a new client API to access the services, or the 
server-side changes?
Can it not be done at the same time?
I agree we should do it sooner rather than later for 2.0, to avoid 
compatibility dragging into another major release.

> Integrate functionality currently done up as Coprocessor Endpoints into core.
> -
>
> Key: HBASE-16757
> URL: https://issues.apache.org/jira/browse/HBASE-16757
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>
> As part of the work over in HBASE-15638, "Shade Protobufs", I could not help 
> but notice that of the seven or eight Coprocessor Endpoints bundled with 
> hbase, half should have been converted to core a long time ago. In fact, 
> some of these core CPEPs are no longer viable as CPEPs, if they ever were, 
> given how intertwined with core they are.
> For example, MultiRowMutation, the nice CPEP that allows us to do cross-row 
> transactions, used natively when amending hbase:meta, has much of its 
> facility baked into core, without which it could not run. In an exercise, I 
> was able to convert this one over without having to alter public APIs in 
> Table or Admin.
> Auth, as pointed out by [~mbertozzi], is not a Coprocessor Endpoint, though 
> it is cast as one invoked natively by RPC.
> VisibilityLabels is a CPEP, but core types -- Query and Mutation -- actually 
> depend on VisibilityLabel-related classes.
> SecureBulkLoad is not in any violation being a CPEP, since it was provided 
> to add API ahead of time, is properly deprecated, and is already integrated 
> into core, but I mention it here for completeness' sake.





[jira] [Commented] (HBASE-18058) Zookeeper retry sleep time should have a up limit

2017-05-17 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013607#comment-16013607
 ] 

Yu Li commented on HBASE-18058:
---

I see, interesting case, thanks for sharing. Maybe briefly describing the 
story in the JIRA description would be a good idea? Thanks.

> Zookeeper retry sleep time should have a up limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058.patch
>
>
> Now, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it doesn't have an upper limit. This directly leads to a 
> very long recovery time after ZooKeeper has been down for a while and comes 
> back.
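The capped backoff being proposed can be illustrated with a small standalone sketch. The class name, method name, and the 60s cap here are assumptions for illustration, not the actual RecoverableZooKeeper code or the attached patch.

```java
// Minimal sketch of exponential backoff with an upper cap. Names and values
// are illustrative; this is not the actual RecoverableZooKeeper code.
public final class RetryBackoff {

    /** Sleep time in ms for a retry attempt: base * 2^attempt, capped at maxSleepMs. */
    public static long sleepTimeMs(long baseSleepMs, int attempt, long maxSleepMs) {
        // Clamp the shift so the multiplication cannot overflow a long.
        long uncapped = baseSleepMs * (1L << Math.min(attempt, 30));
        return Math.min(uncapped, maxSleepMs);
    }

    public static void main(String[] args) {
        // Without the cap, attempt 6 would already sleep 64s and keep doubling.
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println(sleepTimeMs(1000, attempt, 60_000));
        }
        // prints 1000, 2000, 4000, 8000, 16000, 32000, 60000, 60000
    }
}
```

With a cap like this, a long ZooKeeper outage costs each client at most one extra maxSleepMs interval before reconnect attempts resume, instead of the hours-long sleeps unbounded doubling can produce.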


