[jira] [Updated] (HBASE-18078) [C++] Implement RPC timeout

2017-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-18078:
--
Summary: [C++] Implement RPC timeout  (was: [C++] implement RPC timeout)

> [C++] Implement RPC timeout
> ---
>
> Key: HBASE-18078
> URL: https://issues.apache.org/jira/browse/HBASE-18078
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> The RPC layer should handle various connection abnormalities (e.g. a 
> server-aborted connection) through timeouts. Ideally, exceptions should be 
> raised and propagated through the handlers of the client pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18078) [C++] implement RPC timeout

2017-05-18 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HBASE-18078:
-

 Summary: [C++] implement RPC timeout
 Key: HBASE-18078
 URL: https://issues.apache.org/jira/browse/HBASE-18078
 Project: HBase
  Issue Type: Sub-task
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


The RPC layer should handle various connection abnormalities (e.g. a server-aborted 
connection) through timeouts. Ideally, exceptions should be raised and 
propagated through the handlers of the client pipeline.
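
As a rough illustration of the intended behavior, here is a hypothetical Java-style 
sketch (sendAsync(), Request, GetResponse and rpcTimeoutMs are placeholders, not 
the C++ client API): a bounded wait on the RPC future raises a timeout exception 
that is then propagated to the caller-side handlers.

{code}
// Hypothetical sketch only; the real work belongs in the C++ client pipeline.
GetResponse callWithTimeout(Request request, long rpcTimeoutMs) throws Exception {
  Future<GetResponse> future = sendAsync(request);
  try {
    // Bounded wait: a dead or aborted connection surfaces as a TimeoutException
    // instead of hanging the caller forever.
    return future.get(rpcTimeoutMs, TimeUnit.MILLISECONDS);
  } catch (TimeoutException e) {
    future.cancel(true);
    // Propagate the failure so upstream handlers in the pipeline can react.
    throw new IOException("RPC timed out after " + rpcTimeoutMs + " ms", e);
  }
}
{code}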




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016917#comment-16016917
 ] 

Hadoop QA commented on HBASE-18060:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} @author {color} | {color:red} 0m 0s 
{color} | {color:red} The patch appears to contain 1 @author tags which the 
community has agreed to not allow in code contributions. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 2s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 8s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
54s {color} | {color:green} branch-1.3 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-assembly . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s 
{color} | {color:red} hbase-server in branch-1.3 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 0s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 55s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 19s {color} 
| {color:red} root-jdk1.8.0_131 with JDK v1.8.0_131 generated 8 new + 15 
unchanged - 0 fixed = 23 total (was 15) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 9s {color} 
| {color:red} hbase-metrics-jdk1.8.0_131 with JDK v1.8.0_131 generated 8 new + 
0 unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 19s {color} 
| {color:red} root-jdk1.7.0_80 with JDK v1.7.0_80 generated 8 new + 15 
unchanged - 0 fixed = 23 total (was 15) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 10s {color} 
| {color:red} hbase-metrics-jdk1.7.0_80 with JDK v1.7.0_80 generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 39s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 
59s {color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle . hbase-assembly {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
49s {color} 

[jira] [Commented] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016876#comment-16016876
 ] 

Chia-Ping Tsai commented on HBASE-18076:


Nice job. But I don't see anything under the Trends column in my browser (Edge). 
Am I missing something? Thanks.

> Flaky dashboard improvement: Add status markers to show trends of 
> failure/success
> -
>
> Key: HBASE-18076
> URL: https://issues.apache.org/jira/browse/HBASE-18076
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: dashboard.html, HBASE-18076.master.001.patch, 
> screenshot.png
>
>
> Adds those colored status markers:
> !screenshot.png|width=800px!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18035) Meta replica does not give any primaryOperationTimeout to primary meta region

2017-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016871#comment-16016871
 ] 

Hudson commented on HBASE-18035:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3036 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3036/])
HBASE-18035 Meta replica does not give any primaryOperationTimeout to (tedyu: 
rev 958cd2d1b7b1239925912ce148589eeb8a8dd2bc)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionConfiguration.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java


> Meta replica does not give any primaryOperationTimeout to primary meta region
> -
>
> Key: HBASE-18035
> URL: https://issues.apache.org/jira/browse/HBASE-18035
> Project: HBase
>  Issue Type: Bug
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 18035-unittest.patch, HBASE-18035-master-v001.patch, 
> HBASE-18035-master-v001.patch, HBASE-18035-master-v002.patch
>
>
> I was working on my unit test and it failed with TableNotFoundException. I 
> debugged a bit and found out that for a meta scan, we do not give any 
> primaryOperationTimeout to the primary meta region. This is an issue because a 
> meta replica may contain stale data, and it is possible for the meta replica 
> to return before the primary does.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java#L823
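
For illustration, a hypothetical sketch (plain java.util.concurrent, not the actual 
ConnectionImplementation code) of what giving the primary a head start looks like: 
the primary call goes out first, and the replicas are only fanned out after 
primaryOperationTimeout expires without a primary answer.

{code}
// Hypothetical sketch; scanMeta(), replicaIds and the timeout are placeholders.
Result readMetaWithPrimaryPreference(long primaryOperationTimeoutMs) throws Exception {
  CompletionService<Result> cs =
      new ExecutorCompletionService<>(Executors.newCachedThreadPool());
  cs.submit(() -> scanMeta(0));                       // primary replica goes first
  Future<Result> first = cs.poll(primaryOperationTimeoutMs, TimeUnit.MILLISECONDS);
  if (first == null) {
    // Primary did not answer in time; only now ask the (possibly stale) replicas
    // and accept whichever answers first.
    for (int replicaId : replicaIds) {
      cs.submit(() -> scanMeta(replicaId));
    }
    first = cs.take();
  }
  return first.get();
}
{code}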



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) Allow null qualifier for all table operations

2017-05-18 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016868#comment-16016868
 ] 

Chia-Ping Tsai commented on HBASE-15616:


LGTM. +1

> Allow null qualifier for all table operations
> -
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch, 
> HBASE-15616-v3.patch, HBASE-15616-v4.patch, HBASE-15616-v5.patch
>
>
> If the qualifier to check is null, checkAndMutate/checkAndPut/checkAndDelete 
> will encounter an NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} throws an NPE if the wrapped 
> byte array is null. It is possible to invoke {{put}} and {{checkAndMutate}} on 
> the same column, and because a null qualifier is allowed for {{Put}}, users may 
> be confused if a null qualifier is not allowed for {{checkAndMutate}}. We could 
> also convert a null qualifier to an empty byte array for {{checkAndMutate}} on 
> the client side. Discussions and suggestions are welcome.
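
A minimal sketch of the client-side workaround mentioned above (the surrounding 
call is hypothetical; HConstants.EMPTY_BYTE_ARRAY is the real constant):

{code}
// Sketch: normalize a null qualifier to an empty byte array before it reaches
// the protobuf Condition, since LiteralByteString.size() NPEs on a null array.
byte[] safeQualifier = (qualifier == null) ? HConstants.EMPTY_BYTE_ARRAY : qualifier;
table.checkAndPut(row, family, safeQualifier, value, put);
{code}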



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15616) Allow null qualifier for all table operations

2017-05-18 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15616:
---
Fix Version/s: 2.0.0

> Allow null qualifier for all table operations
> -
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch, 
> HBASE-15616-v3.patch, HBASE-15616-v4.patch, HBASE-15616-v5.patch
>
>
> If the qualifier to check is null, checkAndMutate/checkAndPut/checkAndDelete 
> will encounter an NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} throws an NPE if the wrapped 
> byte array is null. It is possible to invoke {{put}} and {{checkAndMutate}} on 
> the same column, and because a null qualifier is allowed for {{Put}}, users may 
> be confused if a null qualifier is not allowed for {{checkAndMutate}}. We could 
> also convert a null qualifier to an empty byte array for {{checkAndMutate}} on 
> the client side. Discussions and suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18058) Zookeeper retry sleep time should have an upper limit

2017-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016870#comment-16016870
 ] 

Hudson commented on HBASE-18058:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3036 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3036/])
HBASE-18058 Zookeeper retry sleep time should have an upper limit (Allan 
(tedyu: rev d137991ccc876988ae8832c316457e525f6bf387)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* (edit) hbase-common/src/main/resources/hbase-default.xml


> Zookeeper retry sleep time should have an upper limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058-branch-1.v3.patch, 
> HBASE-18058.patch, HBASE-18058.v2.patch
>
>
> Currently, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it has no upper limit. This directly leads to a very long 
> recovery time after ZooKeeper goes down for a while and comes back.
> A case of damage done by a high sleep time:
> If the server hosting ZooKeeper has a full disk, the ZooKeeper quorum doesn't 
> really go down but rejects all write requests. So on the HBase side, new ZK 
> write requests hit exceptions and retry, but the connection remains, so the 
> session won't time out. Once the disk-full situation has been resolved, the 
> ZooKeeper quorum can work normally again, but the very high sleep time means 
> some modules of the RegionServer/HMaster (for example, the balancer) will 
> still sleep for a long time before resuming work.
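
A minimal sketch of the capped exponential backoff the issue asks for (variable 
names and the cap value are illustrative, not taken from the patch):

{code}
// Sketch: exponential backoff with an upper bound, so a retry never sleeps
// longer than maxSleepMs no matter how many attempts have already failed.
long baseSleepMs = 100;
long maxSleepMs = 60000;  // illustrative cap, e.g. one minute
long sleepMs = Math.min(maxSleepMs, baseSleepMs * (1L << Math.min(retryCount, 30)));
Thread.sleep(sleepMs);
{code}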



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-18 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016860#comment-16016860
 ] 

Zheng Hu commented on HBASE-18068:
--

Please go ahead and fix this issue, and I'll fix HBASE-18003. Thanks. 

> Fix flaky test TestAsyncSnapshotAdminApi
> 
>
> Key: HBASE-18068
> URL: https://issues.apache.org/jira/browse/HBASE-18068
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
>
> Test failures
> {noformat}
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testRestoreSnapshot
> 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: Restore already in 
> progress on the table=testRestoreSnapshot
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:854)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:818)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreOrCloneSnapshot(SnapshotManager.java:780)
>  at org.apache.hadoop.hbase.master.HMaster$14.run(HMaster.java:2324)
>  at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
>  at org.apache.hadoop.hbase.master.HMaster.restoreSnapshot(HMaster.java:2320)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.restoreSnapshot(MasterRpcServices.java:1224)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testDeleteSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: Rejected taking { 
> ss=snapshotName1 table=testDeleteSnapshots type=FLUSH } because we are 
> already running another snapshot on the same table { ss=snapshotName1 
> table=testDeleteSnapshots type=FLUSH }
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.prepareToTakeSnapshot(SnapshotManager.java:440)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:497)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:598)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1299)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testListSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
> 'snapshotName2' doesn't exist on the filesystem
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:289)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:461)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> {noformat}
> https://builds.apache.org/job/HBASE-Flaky-Tests/16152/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) Allow null qualifier for all table operations

2017-05-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016858#comment-16016858
 ] 

Anoop Sam John commented on HBASE-15616:


You have it already  :-)
bq. Looks good. Pls change the title and desc of the jira accordingly.


> Allow null qualifier for all table operations
> -
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Guanghao Zhang
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch, 
> HBASE-15616-v3.patch, HBASE-15616-v4.patch, HBASE-15616-v5.patch
>
>
> If the qualifier to check is null, checkAndMutate/checkAndPut/checkAndDelete 
> will encounter an NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} throws an NPE if the wrapped 
> byte array is null. It is possible to invoke {{put}} and {{checkAndMutate}} on 
> the same column, and because a null qualifier is allowed for {{Put}}, users may 
> be confused if a null qualifier is not allowed for {{checkAndMutate}}. We could 
> also convert a null qualifier to an empty byte array for {{checkAndMutate}} on 
> the client side. Discussions and suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016831#comment-16016831
 ] 

Allan Yang commented on HBASE-18074:


HBASE-17924 is another improvement for lock efficiency. 

> HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation 
> rather than one per batch
> 
>
> Key: HBASE-18074
> URL: https://issues.apache.org/jira/browse/HBASE-18074
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>
> HBASE-12751 did this:
> {code}
> ...
>  // If we haven't got any rows in our batch, we should block to
>  // get the next one.
> -boolean shouldBlock = numReadyToWrite == 0;
>  RowLock rowLock = null;
>  try {
> -  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
> +  rowLock = getRowLock(mutation.getRow(), true);
>  } catch (IOException ioe) {
>LOG.warn("Failed getting lock in batch put, row="
>  + Bytes.toStringBinary(mutation.getRow()), ioe);
>  }
>  if (rowLock == null) {
>// We failed to grab another lock
> ..
> {code}
> In the old codebase, passing true to getRowLock meant "do not wait on the row 
> lock". In the HBASE-12751 codebase, the flag means read/write instead. So we 
> take a read lock on every mutation in the batch; with ten mutations per batch 
> on average, that is 10x the number of locks.
> I'm in here because of an interesting case where increments and batches going 
> into the same row seem to back up and stall trying to get locks. It looks like 
> this, where every handler is in one of the two states below:
> {code}
> "RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
> prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
> [0x7fbb4ca49000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007c6001b38> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
> ...
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon 
> prio=5 os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
> [0x7fbb4d453000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000354976c00> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
> ...
> {code}
> It gets so bad that it looks like deadlock, but if you give it a while, we move 
> on (I put it down to the safepoint giving a misleading view of what is 
> happening).
> Let me put the optimization back.
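
For illustration, a sketch of the optimization being restored, using plain 
java.util.concurrent names rather than the HRegion internals: block for a row 
lock only when the batch holds no rows yet, otherwise try without waiting.

{code}
// Illustrative only; the rowLock lookup and numReadyToWrite bookkeeping are assumed.
boolean tryAcquireRowLock(ReentrantReadWriteLock rowLock, int numReadyToWrite) {
  if (numReadyToWrite == 0) {
    rowLock.readLock().lock();           // first row: wait so the batch makes progress
    return true;
  }
  return rowLock.readLock().tryLock();   // later rows: never wait on a busy row
}
{code}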



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17850) Backup system restore /repair utility

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016828#comment-16016828
 ] 

Hadoop QA commented on HBASE-17850:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 118m 1s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 161m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868849/HBASE-17850-v4.patch |
| JIRA Issue | HBASE-17850 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux d87213bfbfe1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 958cd2d |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6837/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6837/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Backup system restore /repair utility
> -
>
> Key: HBASE-17850
> URL: https://issues.apache.org/jira/browse/HBASE-17850
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-17850-v2.patch, HBASE-17850-v3.patch, 
> HBASE-17850-v4.patch
>
>
> 

[jira] [Updated] (HBASE-18058) Zookeeper retry sleep time should have an upper limit

2017-05-18 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-18058:
---
Attachment: HBASE-18058-branch-1.v3.patch

> Zookeeper retry sleep time should have an upper limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058-branch-1.v3.patch, 
> HBASE-18058.patch, HBASE-18058.v2.patch
>
>
> Currently, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it has no upper limit. This directly leads to a very long 
> recovery time after ZooKeeper goes down for a while and comes back.
> A case of damage done by a high sleep time:
> If the server hosting ZooKeeper has a full disk, the ZooKeeper quorum doesn't 
> really go down but rejects all write requests. So on the HBase side, new ZK 
> write requests hit exceptions and retry, but the connection remains, so the 
> session won't time out. Once the disk-full situation has been resolved, the 
> ZooKeeper quorum can work normally again, but the very high sleep time means 
> some modules of the RegionServer/HMaster (for example, the balancer) will 
> still sleep for a long time before resuming work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-18076:
-
Description: 
Adds those colored status markers:
!screenshot.png|width=800px!

> Flaky dashboard improvement: Add status markers to show trends of 
> failure/success
> -
>
> Key: HBASE-18076
> URL: https://issues.apache.org/jira/browse/HBASE-18076
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: dashboard.html, HBASE-18076.master.001.patch, 
> screenshot.png
>
>
> Adds those colored status markers:
> !screenshot.png|width=800px!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-18076:
-
Attachment: screenshot.png

> Flaky dashboard improvement: Add status markers to show trends of 
> failure/success
> -
>
> Key: HBASE-18076
> URL: https://issues.apache.org/jira/browse/HBASE-18076
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: dashboard.html, HBASE-18076.master.001.patch, 
> screenshot.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18075) Support namespaces and tables with non-latin alphabetical characters

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016806#comment-16016806
 ] 

Hadoop QA commented on HBASE-18075:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 29s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 124m 44s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
58s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 185m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868840/HBASE-18075.001.patch 
|
| JIRA Issue | HBASE-18075 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux f1c4db8bc107 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 958cd2d |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835/testReport/ |
| modules | C: hbase-common hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835/console |
| 

[jira] [Resolved] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy resolved HBASE-18076.
--
Resolution: Fixed

> Flaky dashboard improvement: Add status markers to show trends of 
> failure/success
> -
>
> Key: HBASE-18076
> URL: https://issues.apache.org/jira/browse/HBASE-18076
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: dashboard.html, HBASE-18076.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16196) Update jruby to a newer version.

2017-05-18 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016804#comment-16016804
 ] 

Mike Drob commented on HBASE-16196:
---

We should do the fix for HBASE-18077 before we apply this one since that one 
will need to go to multiple branches and applying it first will make for fewer 
merge conflicts.

> Update jruby to a newer version.
> 
>
> Key: HBASE-16196
> URL: https://issues.apache.org/jira/browse/HBASE-16196
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, shell
>Reporter: Elliott Clark
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: 0001-Update-to-JRuby-9.1.2.0-and-JLine-2.12.patch, 
> hbase-16196.branch-1.patch, hbase-16196.v2.branch-1.patch, 
> hbase-16196.v3.branch-1.patch, hbase-16196.v4.branch-1.patch, 
> HBASE-16196.v5.patch, HBASE-16196.v6.patch, HBASE-16196.v7.patch
>
>
> Ruby 1.8.7 is no longer maintained.
> The TTY library in the old jruby is bad. The newer one is less bad.
> Since this is only a dependency on the hbase-shell module and not on 
> hbase-client or hbase-server this should be a pretty simple thing that 
> doesn't have any backwards compat issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18077) Update JUnit license from EPL to CPL

2017-05-18 Thread Mike Drob (JIRA)
Mike Drob created HBASE-18077:
-

 Summary: Update JUnit license from EPL to CPL
 Key: HBASE-18077
 URL: https://issues.apache.org/jira/browse/HBASE-18077
 Project: HBase
  Issue Type: Bug
  Components: build, community
Reporter: Mike Drob
Priority: Blocker
 Fix For: 2.0.0


JUnit is listed as using the CPL, but it actually uses the EPL. We need to 
update our LICENSE file in the shaded jars.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-18077) Update JUnit license from EPL to CPL

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob reassigned HBASE-18077:
-

Assignee: Mike Drob

> Update JUnit license from EPL to CPL
> 
>
> Key: HBASE-18077
> URL: https://issues.apache.org/jira/browse/HBASE-18077
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
>
> JUnit is listed as using the CPL, but it actually uses the EPL. We need to 
> update our LICENSE file in the shaded jars.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18069) Fix flaky test TestReplicationAdminWithClusters#testDisableAndEnableReplication

2017-05-18 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016798#comment-16016798
 ] 

Guanghao Zhang commented on HBASE-18069:


Take a look at HBASE-13057. enableTableReplication propagates the table schema 
change to the peer cluster, but disableTableReplication doesn't propagate it to 
the peer cluster. So the mistake is only in the unit test. 

> Fix flaky test 
> TestReplicationAdminWithClusters#testDisableAndEnableReplication
> ---
>
> Key: HBASE-18069
> URL: https://issues.apache.org/jira/browse/HBASE-18069
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Chia-Ping Tsai
>Priority: Trivial
>  Labels: beginner
> Fix For: 2.0.0, 1.4.0
>
>
> If we run testDisableAndEnableReplication, we will get the following error 
> message.
> {code}
> testDisableAndEnableReplication(org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters)
>   Time elapsed: 2.046 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters.testDisableAndEnableReplication(TestReplicationAdminWithClusters.java:160)
> {code}
> The critical code is shown below.
> {code}
> admin1.disableTableReplication(tableName);
> HTableDescriptor table = admin1.getTableDescriptor(tableName);
> for (HColumnDescriptor fam : table.getColumnFamilies()) {
>   assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
> }
> table = admin2.getTableDescriptor(tableName);
> for (HColumnDescriptor fam : table.getColumnFamilies()) {
>   assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
> }
> {code}
> Is the HTD obtained from admin2 affected by admin1? I don't think so. We should 
> remove the related assertion.
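
For illustration, a sketch of the test with the questionable admin2 assertion 
dropped, as suggested above (whether that is the right fix is still open for 
discussion):

{code}
admin1.disableTableReplication(tableName);
HTableDescriptor table = admin1.getTableDescriptor(tableName);
for (HColumnDescriptor fam : table.getColumnFamilies()) {
  assertEquals(HConstants.REPLICATION_SCOPE_LOCAL, fam.getScope());
}
// The admin2 check is gone: disableTableReplication does not propagate the
// schema change to the peer cluster, so asserting on admin2's descriptor is
// what makes the test flaky.
{code}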



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16196) Update jruby to a newer version.

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-16196:
--
Attachment: HBASE-16196.v7.patch

v7:
* Use JRuby 9.1.9.0, which has compatible dependencies.
* Update LICENSE and NOTICE for the actual libs.

This still isn't complete because the NOTICE doesn't include the transitive 
NOTICE files from the jruby-complete dependencies. I tried looking at how we do 
it now in the hbase-assembly pom, and I can't come up with any way that we 
would do this without making it extremely hacky.

> Update jruby to a newer version.
> 
>
> Key: HBASE-16196
> URL: https://issues.apache.org/jira/browse/HBASE-16196
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, shell
>Reporter: Elliott Clark
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: 0001-Update-to-JRuby-9.1.2.0-and-JLine-2.12.patch, 
> hbase-16196.branch-1.patch, hbase-16196.v2.branch-1.patch, 
> hbase-16196.v3.branch-1.patch, hbase-16196.v4.branch-1.patch, 
> HBASE-16196.v5.patch, HBASE-16196.v6.patch, HBASE-16196.v7.patch
>
>
> Ruby 1.8.7 is no longer maintained.
> The TTY library in the old jruby is bad. The newer one is less bad.
> Since this is only a dependency on the hbase-shell module and not on 
> hbase-client or hbase-server this should be a pretty simple thing that 
> doesn't have any backwards compat issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-18076:
-
Attachment: dashboard.html

> Flaky dashboard improvement: Add status markers to show trends of 
> failure/success
> -
>
> Key: HBASE-18076
> URL: https://issues.apache.org/jira/browse/HBASE-18076
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: dashboard.html, HBASE-18076.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-18076:
-
Attachment: HBASE-18076.master.001.patch

> Flaky dashboard improvement: Add status markers to show trends of 
> failure/success
> -
>
> Key: HBASE-18076
> URL: https://issues.apache.org/jira/browse/HBASE-18076
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-18076.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18076) Flaky dashboard improvement: Add status markers to show trends of failure/success

2017-05-18 Thread Appy (JIRA)
Appy created HBASE-18076:


 Summary: Flaky dashboard improvement: Add status markers to show 
trends of failure/success
 Key: HBASE-18076
 URL: https://issues.apache.org/jira/browse/HBASE-18076
 Project: HBase
  Issue Type: Improvement
Reporter: Appy
Assignee: Appy
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15616) Allow null qualifier for all table operations

2017-05-18 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016759#comment-16016759
 ] 

Guanghao Zhang commented on HBASE-15616:


Hadoop QA passed.  [~anoop.hbase] [~chia7712] Can I get a +1 for v5 patch? 
Thanks.

> Allow null qualifier for all table operations
> -
>
> Key: HBASE-15616
> URL: https://issues.apache.org/jira/browse/HBASE-15616
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Guanghao Zhang
> Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch, 
> HBASE-15616-v3.patch, HBASE-15616-v4.patch, HBASE-15616-v5.patch
>
>
> If qualifier to check is null, the checkAndMutate/checkAndPut/checkAndDelete 
> will encounter NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0), new 
> Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>   at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>   at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>   ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>   at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>   ... 4 more
> Caused by: java.lang.NullPointerException
>   at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>   at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>   at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>   at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} throws an NPE if the wrapped byte 
> array is null. Because a null qualifier is allowed for {{Put}}, it is possible to 
> invoke {{put}} and {{checkAndMutate}} on the same column, so users may be confused 
> if a null qualifier is not allowed for {{checkAndMutate}}. We could also convert a 
> null qualifier to an empty byte array for {{checkAndMutate}} on the client side. 
> Discussion and suggestions are welcome.
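
For illustration, a minimal sketch of the client-side normalization idea above (the helper class name is illustrative; this is not the attached patch):

{code}
import org.apache.hadoop.hbase.HConstants;

public final class QualifierNormalizer {
  private QualifierNormalizer() {}

  // Treat a null qualifier as an empty qualifier so that protobuf serialization
  // (LiteralByteString.size()) never sees a null byte array.
  public static byte[] normalize(byte[] qualifier) {
    return qualifier == null ? HConstants.EMPTY_BYTE_ARRAY : qualifier;
  }
}
{code}

A caller would then pass {{QualifierNormalizer.normalize(qualifier)}} into checkAndPut/checkAndMutate instead of the raw, possibly-null qualifier.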



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Attachment: HBASE-18060.branch-1.3.v1.patch

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.3.v1.patch, 
> HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Attachment: (was: HBASE-18060.branch-1.3.v1.patch)

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Attachment: HBASE-18060.branch-1.3.v1.patch

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.3.v1.patch, 
> HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Status: Patch Available  (was: Open)

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016734#comment-16016734
 ] 

Vincent Poon commented on HBASE-18060:
--

Put up a review at https://reviews.apache.org/r/59385/

Summary:
Changed Java 8 syntax to the Java 7 equivalent (e.g. lambdas to anonymous inner 
classes)
Wrote simple implementations of Java 8 library features (instead of Map's 
'compute', use "putIfAbsent" and AtomicLong)
Copied in the LongAdder/Striped64 code
We don't have HADOOP-10839 in branch-1, so I used reflection to unregister Hadoop 
metric sources. Added a unit test for that.
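
For illustration, a minimal self-contained sketch of the kind of Java 7 rewrite described above (a hypothetical class, not the actual patch): a lambda becomes an anonymous inner class, and Map's 'compute' becomes putIfAbsent plus AtomicLong.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class CounterRegistry {
  private final ConcurrentMap<String, AtomicLong> counters =
      new ConcurrentHashMap<String, AtomicLong>();

  // Java 7 replacement for counters.compute(name, (k, v) -> ...): publish the
  // counter with putIfAbsent and increment whichever instance won the race.
  public long increment(String name) {
    AtomicLong counter = counters.get(name);
    if (counter == null) {
      AtomicLong created = new AtomicLong();
      AtomicLong existing = counters.putIfAbsent(name, created);
      counter = (existing != null) ? existing : created;
    }
    return counter.incrementAndGet();
  }

  // Java 7 replacement for a Runnable lambda: an anonymous inner class.
  public Runnable incrementTask(final String name) {
    return new Runnable() {
      @Override
      public void run() {
        increment(name);
      }
    };
  }
}
{code}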

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016724#comment-16016724
 ] 

Hadoop QA commented on HBASE-17286:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hbase-assembly in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hbase-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868846/HBASE-17286.v2.patch |
| JIRA Issue | HBASE-17286 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 2cad1f07dd5b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 958cd2d |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836/testReport/ |
| modules | C: hbase-assembly hbase-shaded U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch, HBASE-17286.v2.patch
>
>
> Noticed this 

[jira] [Commented] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016722#comment-16016722
 ] 

stack commented on HBASE-18074:
---

After more study, the 4-5x row-lock claim is not true either. The total count is 
the same in both cases. In the old code we could bail out quicker if the same 
thread already held the lock. The new locking takes about 1/3rd longer in total 
time spent getting locks (but this seems a small cost for fairness and read/write 
semantics).

> HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation 
> rather than one per batch
> 
>
> Key: HBASE-18074
> URL: https://issues.apache.org/jira/browse/HBASE-18074
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>
> HBASE-12751 did this:
> {code}
> ...
>  // If we haven't got any rows in our batch, we should block to
>  // get the next one.
> -boolean shouldBlock = numReadyToWrite == 0;
>  RowLock rowLock = null;
>  try {
> -  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
> +  rowLock = getRowLock(mutation.getRow(), true);
>  } catch (IOException ioe) {
>LOG.warn("Failed getting lock in batch put, row="
>  + Bytes.toStringBinary(mutation.getRow()), ioe);
>  }
>  if (rowLock == null) {
>// We failed to grab another lock
> ..
> {code}
> In the old codebase, getRowLock with a true meant do not wait on the row lock. In 
> the HBASE-12751 codebase, the flag means read/write instead. So we take a read 
> lock on every mutation in the batch. With ten mutations in a batch on average, we 
> take 10x the number of locks.
> I'm in here because interesting case where increments and batch going into 
> same row seem to backup and stall trying to get locks. Looks like this where 
> all handlers are one of either of the below:
> {code}
> "RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
> prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
> [0x7fbb4ca49000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007c6001b38> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
> ...
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon 
> prio=5 os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
> [0x7fbb4d453000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000354976c00> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
> ...
> {code}
> It gets so bad it looks like a deadlock, but if you give it a while we move on 
> (I put it down to the safepoint giving a misleading view of what is happening).
> Let me put back the optimization.
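
For illustration, here is a simplified, self-contained sketch of the batching behavior being restored, using plain java.util.concurrent locks rather than HRegion's internals: block only for the first row lock, then take the remaining locks without waiting so a busy row ends the mini-batch instead of stalling the handler.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;

public class MiniBatchLocking {
  // Acquire as many row locks as possible for one mini-batch. Wait for the
  // first lock so we always make progress; use tryLock() for the rest so a
  // contended row ends the batch rather than blocking once per mutation.
  public static List<Lock> acquireBatch(List<Lock> rowLocks) {
    List<Lock> acquired = new ArrayList<Lock>();
    for (Lock lock : rowLocks) {
      if (acquired.isEmpty()) {
        lock.lock();
      } else if (!lock.tryLock()) {
        break;  // leave the remaining rows for the next mini-batch
      }
      acquired.add(lock);
    }
    return acquired;
  }
}
{code}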



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17850) Backup system restore /repair utility

2017-05-18 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17850:
--
Attachment: HBASE-17850-v4.patch

v4 addressed comments on RB

> Backup system restore /repair utility
> -
>
> Key: HBASE-17850
> URL: https://issues.apache.org/jira/browse/HBASE-17850
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-17850-v2.patch, HBASE-17850-v3.patch, 
> HBASE-17850-v4.patch
>
>
> The backup repair tool restores the integrity of the backup system table and 
> removes artefacts of a failed backup session from the file system(s).
> This is a command-line tool. To run the tool:
> {code}
> hbase backup repair
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18035) Meta replica does not give any primaryOperationTimeout to primary meta region

2017-05-18 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016676#comment-16016676
 ] 

huaxiang sun commented on HBASE-18035:
--

Thanks [~te...@apache.org], I will work on a branch-1 patch.

> Meta replica does not give any primaryOperationTimeout to primary meta region
> -
>
> Key: HBASE-18035
> URL: https://issues.apache.org/jira/browse/HBASE-18035
> Project: HBase
>  Issue Type: Bug
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 18035-unittest.patch, HBASE-18035-master-v001.patch, 
> HBASE-18035-master-v001.patch, HBASE-18035-master-v002.patch
>
>
> I was working on my unit test and it failed with TableNotFoundException. I 
> debugged a bit and found out that for a meta scan, no primaryOperationTimeout is 
> given to the primary meta region. This will be an issue because the meta replica 
> may contain stale data, and it is possible that the meta replica will return 
> before the primary.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java#L823



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-17286:
--
Status: Patch Available  (was: Open)

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch, HBASE-17286.v2.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-17286:
--
Attachment: HBASE-17286.v2.patch

v2:
- Explicitly invoke the process goal.
- Update the execution id to be more descriptive than "default"
- Also run for the shaded artifacts.

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch, HBASE-17286.v2.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-17286:
--
Status: Open  (was: Patch Available)

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17447) Automatically delete quota when table is deleted

2017-05-18 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016648#comment-16016648
 ] 

Enis Soztutar commented on HBASE-17447:
---

Sorry to come in late, but this should be a "core" feature rather than 
implemented via a MasterObserver. Everything in the space quotas work is already 
core, so I don't think there is any point in making this an external thing to 
be enabled manually by users. Ideally, every new feature should come 
with a single config option to enable it; once that is enabled, everything else is 
configured based on it. 
We expect every user of space quotas to enable this by default. So 
even if we keep the implementation as a coprocessor, maybe we should just add it 
to the list of coprocessors programmatically at HMaster start, without the user 
having to configure it. 
Anyway, this can be done as an easy follow-up patch. 
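
For illustration, a minimal sketch of the programmatic registration idea (CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY is the existing "hbase.coprocessor.master.classes" key; the observer class name below is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

public class QuotaObserverBootstrap {
  // Append an observer to the master coprocessor list so users do not have to
  // configure it by hand.
  public static void addMasterObserver(Configuration conf, String observerClass) {
    String current = conf.get(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY);
    if (current == null || current.isEmpty()) {
      conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY, observerClass);
    } else if (!current.contains(observerClass)) {
      conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY, current + "," + observerClass);
    }
  }

  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Illustrative class name for the quota-cleanup observer.
    addMasterObserver(conf, "org.apache.hadoop.hbase.quotas.MasterSpaceQuotaObserver");
  }
}
{code}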

> Automatically delete quota when table is deleted
> 
>
> Key: HBASE-17447
> URL: https://issues.apache.org/jira/browse/HBASE-17447
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: HBASE-16961
>
> Attachments: HBASE-17447.001.HBASE-16961.patch, 
> HBASE-17447.002.HBASE-16961.patch
>
>
> If a table has a space quota defined on it, we can delete that quota when the 
> table is deleted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17286:


If they all end up the same, then I'm +1 on the version that sets the goal, but we
should give it a descriptive id, since "default" was chosen to match the "no
ID given in apache pom 12" case.

On May 18, 2017 4:15 PM, "Mike Drob (JIRA)"  wrote:


[ https://issues.apache.org/jira/browse/HBASE-17286?page=
com.atlassian.jira.plugin.system.issuetabpanels:comment-
tabpanel=16016633#comment-16016633 ]

Mike Drob commented on HBASE-17286:
---

The shaded-* artifacts bundle dependencies, so we should check those as
well.



> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18075) Support namespaces and tables with non-latin alphabetical characters

2017-05-18 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18075:
---
Status: Patch Available  (was: Open)

> Support namespaces and tables with non-latin alphabetical characters
> 
>
> Key: HBASE-18075
> URL: https://issues.apache.org/jira/browse/HBASE-18075
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-18075.001.patch
>
>
> On the heels of HBASE-18067, it would be nice to support namespaces and 
> tables with names that fall outside of Latin alphabetical characters and 
> numbers.
> Our current regex for allowable characters is approximately 
> {{\[a-zA-Z0-9\]+}}.
> It would be nice to replace {{a-zA-Z}} with Java's {{\p\{IsAlphabetic\}}} 
> which will naturally restrict the unicode character space down to just those 
> that are part of the alphabet for each script (e.g. latin, cyrillic, greek).
> Technically, our possible scope of allowable characters is, best as I can 
> tell, only limited by the limitations of ZooKeeper itself 
> https://zookeeper.apache.org/doc/r3.4.10/zookeeperProgrammers.html#ch_zkDataModel
>  (as both table and namespace are created as znodes).
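
For illustration, a small sketch of the proposed relaxation (illustrative only, not the attached patch):

{code}
import java.util.regex.Pattern;

public class TableNameCheck {
  // Approximate current rule versus the proposed Unicode-aware rule.
  private static final Pattern LATIN_ONLY = Pattern.compile("[a-zA-Z0-9]+");
  private static final Pattern ALPHABETIC = Pattern.compile("[\\p{IsAlphabetic}0-9]+");

  public static void main(String[] args) {
    String cyrillic = "таблица1";
    System.out.println(LATIN_ONLY.matcher(cyrillic).matches());   // false
    System.out.println(ALPHABETIC.matcher(cyrillic).matches());   // true
  }
}
{code}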



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18075) Support namespaces and tables with non-latin alphabetical characters

2017-05-18 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18075:
---
Attachment: HBASE-18075.001.patch

.001 Expand out the allowable table name and namespace characters. Some simple 
tests have worked, will let hadoopqa chew for some more rigor.

> Support namespaces and tables with non-latin alphabetical characters
> 
>
> Key: HBASE-18075
> URL: https://issues.apache.org/jira/browse/HBASE-18075
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-18075.001.patch
>
>
> On the heels of HBASE-18067, it would be nice to support namespaces and 
> tables with names that fall outside of Latin alphabetical characters and 
> numbers.
> Our current regex for allowable characters is approximately 
> {{\[a-zA-Z0-9\]+}}.
> It would be nice to replace {{a-zA-Z}} with Java's {{\p\{IsAlphabetic\}}} 
> which will naturally restrict the unicode character space down to just those 
> that are part of the alphabet for each script (e.g. latin, cyrillic, greek).
> Technically, our possible scope of allowable characters is, best as I can 
> tell, only limited by the limitations of ZooKeeper itself 
> https://zookeeper.apache.org/doc/r3.4.10/zookeeperProgrammers.html#ch_zkDataModel
>  (as both table and namespace are created as znodes).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016633#comment-16016633
 ] 

Mike Drob commented on HBASE-17286:
---

The shaded-* artifacts bundle dependencies, so we should check those as well.

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016631#comment-16016631
 ] 

Josh Elser commented on HBASE-17286:


{quote}
bq. Does this still suppress the run of the execution from the parent pom?

 I didn't realize there was the goal of also precluding the execution from the 
parent from running at all
{quote}

LICENSE.txt and NOTICE.txt in the bin-tarball are equivalent with both 
approaches (id value and specifying a goal). Are there other LICENSE/NOTICE files 
we need to verify?


> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016616#comment-16016616
 ] 

Josh Elser commented on HBASE-17286:


bq. Does this still suppress the run of the execution from the parent pom?

(Assuming you mean my suggestion.) Ahh, that would explain why it was that way. 
I didn't realize the goal was also to preclude the execution from the parent pom 
from running at all :)

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-18 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016614#comment-16016614
 ] 

Appy commented on HBASE-18068:
--

If you do take it, assign it to yourself so I know that you're working on this 
and I don't end up doing redundant work. [~openinx]
I'll ping here again when I actually start working on this (so feel free to 
take it anytime before that if you want).

> Fix flaky test TestAsyncSnapshotAdminApi
> 
>
> Key: HBASE-18068
> URL: https://issues.apache.org/jira/browse/HBASE-18068
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
>
> Test failures
> {noformat}
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testRestoreSnapshot
> 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: Restore already in 
> progress on the table=testRestoreSnapshot
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:854)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:818)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreOrCloneSnapshot(SnapshotManager.java:780)
>  at org.apache.hadoop.hbase.master.HMaster$14.run(HMaster.java:2324)
>  at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
>  at org.apache.hadoop.hbase.master.HMaster.restoreSnapshot(HMaster.java:2320)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.restoreSnapshot(MasterRpcServices.java:1224)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testDeleteSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: Rejected taking { 
> ss=snapshotName1 table=testDeleteSnapshots type=FLUSH } because we are 
> already running another snapshot on the same table { ss=snapshotName1 
> table=testDeleteSnapshots type=FLUSH }
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.prepareToTakeSnapshot(SnapshotManager.java:440)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:497)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:598)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1299)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testListSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
> 'snapshotName2' doesn't exist on the filesystem
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:289)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:461)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> {noformat}
> https://builds.apache.org/job/HBASE-Flaky-Tests/16152/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18035) Meta replica does not give any primaryOperationTimeout to primary meta region

2017-05-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18035:
---
Fix Version/s: 2.0.0

The patch doesn't apply to branch-1.

Do you mind producing a branch-1 patch?

Thanks

> Meta replica does not give any primaryOperationTimeout to primary meta region
> -
>
> Key: HBASE-18035
> URL: https://issues.apache.org/jira/browse/HBASE-18035
> Project: HBase
>  Issue Type: Bug
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 18035-unittest.patch, HBASE-18035-master-v001.patch, 
> HBASE-18035-master-v001.patch, HBASE-18035-master-v002.patch
>
>
> I was working on my unit test and it failed with TableNotFoundException. I 
> debugged a bit and found out that for a meta scan, no primaryOperationTimeout is 
> given to the primary meta region. This will be an issue because the meta replica 
> may contain stale data, and it is possible that the meta replica will return 
> before the primary.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java#L823



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016603#comment-16016603
 ] 

Hadoop QA commented on HBASE-17286:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hbase-assembly in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
5s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868830/HBASE-17286.patch |
| JIRA Issue | HBASE-17286 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 8fbccc1db1a2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6dc4190c |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6834/testReport/ |
| modules | C: hbase-assembly U: hbase-assembly |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6834/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which 

[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Attachment: HBASE-18060.branch-1.v1.patch

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Attachment: HBASE-18026.branch-1.v1.patch

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-05-18 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-18060:
-
Attachment: (was: HBASE-18026.branch-1.v1.patch)

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: HBASE-18060.branch-1.v1.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17286:


Does this still suppress the run of the execution from the parent pom?




> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18058) Zookeeper retry sleep time should have an upper limit

2017-05-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016596#comment-16016596
 ] 

Ted Yu commented on HBASE-18058:


Mind updating the branch-1 patch with the addition to hbase-default.xml?

> Zookeeper retry sleep time should have an upper limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058.patch, HBASE-18058.v2.patch
>
>
> Now, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it doesn't have any upper limit. This leads directly to a very 
> long recovery time after ZooKeeper goes down for a while and comes back.
> A case of damage done by the high sleep time:
> If the server hosting ZooKeeper has a full disk, the ZooKeeper quorum won't 
> really go down, but it will reject all write requests. So on the HBase side, new 
> zk write requests will suffer exceptions and retry. But the connection remains, 
> so the session won't time out. When the disk-full situation has been resolved, 
> the ZooKeeper quorum can work normally again. But the very high sleep time 
> causes some modules of the RegionServer/HMaster (for example, the balancer) to 
> keep sleeping for a long time before working.
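
For illustration, a minimal sketch of a capped, jittered backoff (illustrative constants, not the actual {{RecoverableZooKeeper}} change):

{code}
import java.util.concurrent.ThreadLocalRandom;

public class CappedBackoff {
  // Exponential backoff with an upper bound, so retries after a long ZooKeeper
  // outage never sleep for an unbounded amount of time.
  public static long sleepMillis(long baseSleepMs, int retryCount, long maxSleepMs) {
    long sleep = baseSleepMs * (1L << Math.min(retryCount, 30));  // cap the shift to avoid overflow
    sleep = Math.min(sleep, maxSleepMs);
    // Add a little jitter so many clients do not retry in lock step.
    return sleep + ThreadLocalRandom.current().nextLong(baseSleepMs);
  }

  public static void main(String[] args) {
    for (int retry = 0; retry < 12; retry++) {
      System.out.println("retry " + retry + " -> " + sleepMillis(1000, retry, 60000) + " ms");
    }
  }
}
{code}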



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17959) Canary timeout should be configurable on a per-table basis

2017-05-18 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated HBASE-17959:
-
Attachment: HBASE-17959.002.patch

Thanks [~apurtell]. Modified the patch to change logging.

> Canary timeout should be configurable on a per-table basis
> --
>
> Key: HBASE-17959
> URL: https://issues.apache.org/jira/browse/HBASE-17959
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Minor
> Attachments: HBASE-17959.002.patch, HBASE-17959.patch
>
>
> The Canary read and write timeouts should be configurable on a per-table 
> basis, for cases where different tables have different latency SLAs. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18058) Zookeeper retry sleep time should have an upper limit

2017-05-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18058:
---
Summary: Zookeeper retry sleep time should have an upper limit  (was: 
Zookeeper retry sleep time should have a up limit)

> Zookeeper retry sleep time should have an upper limit
> -
>
> Key: HBASE-18058
> URL: https://issues.apache.org/jira/browse/HBASE-18058
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-18058-branch-1.patch, 
> HBASE-18058-branch-1.v2.patch, HBASE-18058.patch, HBASE-18058.v2.patch
>
>
> Now, in {{RecoverableZooKeeper}}, the retry backoff sleep time grows 
> exponentially, but it doesn't have any upper limit. This directly leads to a 
> very long recovery time after ZooKeeper goes down for a while and then comes 
> back.
> A case of damage done by a high sleep time:
> If the server hosting ZooKeeper fills its disk, the ZooKeeper quorum won't 
> really go down, but it will reject all write requests. So on the HBase side, 
> new ZK write requests will hit exceptions and retry, but the connection 
> remains, so the session won't time out. Once the disk-full situation has been 
> resolved, the ZooKeeper quorum can work normally again, but the very high 
> sleep time causes some modules of the RegionServer/HMaster (for example, the 
> balancer) to keep sleeping for a long time before they resume work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016593#comment-16016593
 ] 

Josh Elser commented on HBASE-17286:


bq. the execution id changed so our code no longer got called

While I can understand why your patch works, I don't think that's the right way 
to fix this. We're relying on whatever the maintainers of the apache pom call 
their execution. Instead, shouldn't we be configuring the goal of our execution 
so that it is invoked regardless? The id is just some descriptive string -- 
it's just happenstance that we chose the same one as the parent pom (which 
happens to be set up with that goal).

Something like..

{code}
   <id>default</id>
+  <goals>
+    <goal>process</goal>
+  </goals>
{code}

This appears to have done the same thing you set out to do.

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18075) Support namespaces and tables with non-latin alphabetical characters

2017-05-18 Thread Josh Elser (JIRA)
Josh Elser created HBASE-18075:
--

 Summary: Support namespaces and tables with non-latin alphabetical 
characters
 Key: HBASE-18075
 URL: https://issues.apache.org/jira/browse/HBASE-18075
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0


On the heels of HBASE-18067, it would be nice to support namespaces and tables 
with names that fall outside of Latin alphabetical characters and numbers.

Our current regex for allowable characters is approximately {{\[a-zA-Z0-9\]+}}.

It would be nice to replace {{a-zA-Z}} with Java's {{\p\{IsAlphabetic\}}} which 
will naturally restrict the unicode character space down to just those that are 
part of the alphabet for each script (e.g. latin, cyrillic, greek).

Technically, our possible scope of allowable characters is, best as I can tell, 
only limited by the limitations of ZooKeeper itself 
https://zookeeper.apache.org/doc/r3.4.10/zookeeperProgrammers.html#ch_zkDataModel
 (as both table and namespace are created as znodes).
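
As a quick standalone illustration of the difference (a sketch based on the 
description above, not an actual patch; the pattern strings and sample name are 
assumptions):

{code}
import java.util.regex.Pattern;

public class NameCharsetSketch {
  // Approximation of the current restriction: Latin letters and digits only.
  private static final Pattern LATIN_ONLY = Pattern.compile("[a-zA-Z0-9]+");
  // Proposed direction: alphabetic characters from any script, plus digits.
  private static final Pattern ANY_ALPHABETIC = Pattern.compile("[\\p{IsAlphabetic}0-9]+");

  public static void main(String[] args) {
    String cyrillicTable = "таблица1";
    System.out.println(LATIN_ONLY.matcher(cyrillicTable).matches());      // false
    System.out.println(ANY_ALPHABETIC.matcher(cyrillicTable).matches());  // true
  }
}
{code}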



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016586#comment-16016586
 ] 

stack commented on HBASE-18074:
---

Counting, we take 4-5x the row locks that we did in the older, pre-HBASE-12751 
codebase for the same amount of batch+increment work.

> HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation 
> rather than one per batch
> 
>
> Key: HBASE-18074
> URL: https://issues.apache.org/jira/browse/HBASE-18074
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>
> HBASE-12751 did this:
> {code}
> ...
>  // If we haven't got any rows in our batch, we should block to
>  // get the next one.
> -boolean shouldBlock = numReadyToWrite == 0;
>  RowLock rowLock = null;
>  try {
> -  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
> +  rowLock = getRowLock(mutation.getRow(), true);
>  } catch (IOException ioe) {
>LOG.warn("Failed getting lock in batch put, row="
>  + Bytes.toStringBinary(mutation.getRow()), ioe);
>  }
>  if (rowLock == null) {
>// We failed to grab another lock
> ..
> {code}
> In the old codebase, getRowLock with a true meant do not wait on the row lock. 
> In the HBASE-12751 codebase, the flag is read/write. So we get a read lock on 
> every mutation in the batch. If there are ten mutations in a batch on average, 
> then we'll take 10x the number of locks.
> I'm in here because of an interesting case where increments and batches going 
> into the same row seem to back up and stall trying to get locks. It looks like 
> this, where all handlers are in one or the other of the states below:
> {code}
> "RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
> prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
> [0x7fbb4ca49000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007c6001b38> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
> ...
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon 
> prio=5 os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
> [0x7fbb4d453000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000354976c00> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
> ...
> {code}
> It gets so bad it looks like deadlock but if you give it a while, we move on 
> (I put it down to safe point giving a misleading view on what is happening).
> Let me put back the optimization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016570#comment-16016570
 ] 

Josh Elser commented on HBASE-17286:


Awesome! Thanks for digging into this [~mdrob] :)

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-17286:
--
Attachment: HBASE-17286.patch

This broke when moving from apache parent pom 12 to 18 (HBASE-16335) because 
the execution id changed so our code no longer got called. I think this needs 
to be applied to both master and branch-1 as a result.

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-17286:
--
Status: Patch Available  (was: Open)

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17286.patch
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2017-05-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob reassigned HBASE-17286:
-

Assignee: Mike Drob

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16908) investigate flakey TestQuotaThrottle

2017-05-18 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016551#comment-16016551
 ] 

huaxiang sun commented on HBASE-16908:
--

I was looking at the failure; it complained that the number of requests was 
exceeded, but no request-count quota is configured for this test case. I suspect 
that quotas are not cleaned up from previous test cases. I will look into this 
more.

2017-05-18 20:10:55,610 WARN  [hconnection-0x507d20bb-shared-pool13-t98] 
client.AsyncRequestFutureImpl(796): #34, table=TestQuotaAdmin0, attempt=1/7 
failed=1ops, last exception: 
org.apache.hadoop.hbase.quotas.ThrottlingException: 
org.apache.hadoop.hbase.quotas.ThrottlingException: number of requests exceeded 
- wait 10.00sec
at 
org.apache.hadoop.hbase.quotas.ThrottlingException.throwThrottlingException(ThrottlingException.java:124)
at 
org.apache.hadoop.hbase.quotas.ThrottlingException.throwNumRequestsExceeded(ThrottlingException.java:93)
at 
org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:115)
at 
org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:72)
at 
org.apache.hadoop.hbase.quotas.RegionServerQuotaManager.checkQuota(RegionServerQuotaManager.java:190)
at 
org.apache.hadoop.hbase.quotas.RegionServerQuotaManager.checkQuota(RegionServerQuotaManager.java:162)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2488)
at 

> investigate flakey TestQuotaThrottle 
> -
>
> Key: HBASE-16908
> URL: https://issues.apache.org/jira/browse/HBASE-16908
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
>
> find out the root cause for TestQuotaThrottle failures.
>  
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-16908) investigate flakey TestQuotaThrottle

2017-05-18 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016551#comment-16016551
 ] 

huaxiang sun edited comment on HBASE-16908 at 5/18/17 10:13 PM:


I was looking at the failure; it complained that the number of requests was 
exceeded, but no request-count quota is configured for this test case. I suspect 
that quotas are not cleaned up from previous test cases. I will look into this 
more.

{code}
2017-05-18 20:10:55,610 WARN  [hconnection-0x507d20bb-shared-pool13-t98] 
client.AsyncRequestFutureImpl(796): #34, table=TestQuotaAdmin0, attempt=1/7 
failed=1ops, last exception: 
org.apache.hadoop.hbase.quotas.ThrottlingException: 
org.apache.hadoop.hbase.quotas.ThrottlingException: number of requests exceeded 
- wait 10.00sec
at 
org.apache.hadoop.hbase.quotas.ThrottlingException.throwThrottlingException(ThrottlingException.java:124)
at 
org.apache.hadoop.hbase.quotas.ThrottlingException.throwNumRequestsExceeded(ThrottlingException.java:93)
at 
org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:115)
at 
org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:72)
at 
org.apache.hadoop.hbase.quotas.RegionServerQuotaManager.checkQuota(RegionServerQuotaManager.java:190)
at 
org.apache.hadoop.hbase.quotas.RegionServerQuotaManager.checkQuota(RegionServerQuotaManager.java:162)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2488)
at 
{code}


was (Author: huaxiang):
I was looking at the failure; it complained that the number of requests was 
exceeded, but no request-count quota is configured for this test case. I suspect 
that quotas are not cleaned up from previous test cases. I will look into this 
more.

2017-05-18 20:10:55,610 WARN  [hconnection-0x507d20bb-shared-pool13-t98] 
client.AsyncRequestFutureImpl(796): #34, table=TestQuotaAdmin0, attempt=1/7 
failed=1ops, last exception: 
org.apache.hadoop.hbase.quotas.ThrottlingException: 
org.apache.hadoop.hbase.quotas.ThrottlingException: number of requests exceeded 
- wait 10.00sec
at 
org.apache.hadoop.hbase.quotas.ThrottlingException.throwThrottlingException(ThrottlingException.java:124)
at 
org.apache.hadoop.hbase.quotas.ThrottlingException.throwNumRequestsExceeded(ThrottlingException.java:93)
at 
org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:115)
at 
org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:72)
at 
org.apache.hadoop.hbase.quotas.RegionServerQuotaManager.checkQuota(RegionServerQuotaManager.java:190)
at 
org.apache.hadoop.hbase.quotas.RegionServerQuotaManager.checkQuota(RegionServerQuotaManager.java:162)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2488)
at 

> investigate flakey TestQuotaThrottle 
> -
>
> Key: HBASE-16908
> URL: https://issues.apache.org/jira/browse/HBASE-16908
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
>
> find out the root cause for TestQuotaThrottle failures.
>  
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-18074.
---
Resolution: Invalid

Resolving as invalid. Misreading on my part; we should be aggregating back on 
the RowLockContext if a batch is made up of many mutations all on the same row 
-- they should all get the same lock instance.

> HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation 
> rather than one per batch
> 
>
> Key: HBASE-18074
> URL: https://issues.apache.org/jira/browse/HBASE-18074
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>
> HBASE-12751 did this:
> {code}
> ...
>  // If we haven't got any rows in our batch, we should block to
>  // get the next one.
> -boolean shouldBlock = numReadyToWrite == 0;
>  RowLock rowLock = null;
>  try {
> -  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
> +  rowLock = getRowLock(mutation.getRow(), true);
>  } catch (IOException ioe) {
>LOG.warn("Failed getting lock in batch put, row="
>  + Bytes.toStringBinary(mutation.getRow()), ioe);
>  }
>  if (rowLock == null) {
>// We failed to grab another lock
> ..
> {code}
> In the old codebase, getRowLock with a true meant do not wait on the row lock. 
> In the HBASE-12751 codebase, the flag is read/write. So we get a read lock on 
> every mutation in the batch. If there are ten mutations in a batch on average, 
> then we'll take 10x the number of locks.
> I'm in here because of an interesting case where increments and batches going 
> into the same row seem to back up and stall trying to get locks. It looks like 
> this, where all handlers are in one or the other of the states below:
> {code}
> "RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
> prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
> [0x7fbb4ca49000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007c6001b38> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
> ...
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon 
> prio=5 os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
> [0x7fbb4d453000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000354976c00> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
> ...
> {code}
> It gets so bad it looks like deadlock but if you give it a while, we move on 
> (I put it down to safe point giving a misleading view on what is happening).
> Let me put back the optimization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016470#comment-16016470
 ] 

stack commented on HBASE-18074:
---

[~enis] Thanks for taking a look.

bq. We always acquired locks for every mutation in the batch, no?

Hmm. Yes. I see now that in the pre-HBASE-12751 code, if we passed !waitForLock, 
it looks like it was a signal that the owner had shifted from the current thread 
and so it was time to submit a batch (?).

Thanks.

Bottom line is that for the loading I'm looking at -- a mix of batches and 
increments, often colliding on the same row -- locking is holding up handlers 
post-HBASE-12751. I spent time comparing the time spent acquiring locks both 
pre- and post-HBASE-12751 and it seems to be a wash, but I must be measuring it 
wrong. I arrived here because I was thinking we were doing less locking 
pre-HBASE-12751.

> HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation 
> rather than one per batch
> 
>
> Key: HBASE-18074
> URL: https://issues.apache.org/jira/browse/HBASE-18074
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>
> HBASE-12751 did this:
> {code}
> ...
>  // If we haven't got any rows in our batch, we should block to
>  // get the next one.
> -boolean shouldBlock = numReadyToWrite == 0;
>  RowLock rowLock = null;
>  try {
> -  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
> +  rowLock = getRowLock(mutation.getRow(), true);
>  } catch (IOException ioe) {
>LOG.warn("Failed getting lock in batch put, row="
>  + Bytes.toStringBinary(mutation.getRow()), ioe);
>  }
>  if (rowLock == null) {
>// We failed to grab another lock
> ..
> {code}
> In the old codebase, getRowLock with a true meant do not wait on the row lock. 
> In the HBASE-12751 codebase, the flag is read/write. So we get a read lock on 
> every mutation in the batch. If there are ten mutations in a batch on average, 
> then we'll take 10x the number of locks.
> I'm in here because of an interesting case where increments and batches going 
> into the same row seem to back up and stall trying to get locks. It looks like 
> this, where all handlers are in one or the other of the states below:
> {code}
> "RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
> prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
> [0x7fbb4ca49000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007c6001b38> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
> ...
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon 
> prio=5 os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
> [0x7fbb4d453000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000354976c00> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
> ...
> {code}
> It gets so bad it looks like deadlock but if you give it a while, we move on 
> (I put it down to safe point giving a misleading view on what is happening).
> Let me put back the optimization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18070) Enable memstore replication for meta replica

2017-05-18 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016462#comment-16016462
 ] 

huaxiang sun commented on HBASE-18070:
--

Thanks [~enis]! Will look into code as you suggested.

> Enable memstore replication for meta replica
> 
>
> Key: HBASE-18070
> URL: https://issues.apache.org/jira/browse/HBASE-18070
> Project: HBase
>  Issue Type: New Feature
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Based on the current doc, memstore replication is not enabled for meta 
> replica. Memstore replication will be a good improvement for meta replica. 
> Create jira to track this effort (feasibility, design, implementation, etc).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18035) Meta replica does not give any primaryOperationTimeout to primary meta region

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016396#comment-16016396
 ] 

Hadoop QA commented on HBASE-18035:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 25s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 107m 8s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
53s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 0s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868800/HBASE-18035-master-v002.patch
 |
| JIRA Issue | HBASE-18035 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux b78954b8d134 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6dc4190c |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6833/testReport/ |
| modules | C: hbase-common hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6833/console 

[jira] [Commented] (HBASE-18068) Fix flaky test TestAsyncSnapshotAdminApi

2017-05-18 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016298#comment-16016298
 ] 

Appy commented on HBASE-18068:
--

[~openinx], I don't have a patch yet. If you want it, feel free to take it! 
Otherwise I'll try a patch later today or tomorrow.

> Fix flaky test TestAsyncSnapshotAdminApi
> 
>
> Key: HBASE-18068
> URL: https://issues.apache.org/jira/browse/HBASE-18068
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
>
> Test failures
> {noformat}
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testRestoreSnapshot
> 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: Restore already in 
> progress on the table=testRestoreSnapshot
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:854)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:818)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreOrCloneSnapshot(SnapshotManager.java:780)
>  at org.apache.hadoop.hbase.master.HMaster$14.run(HMaster.java:2324)
>  at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:133)
>  at org.apache.hadoop.hbase.master.HMaster.restoreSnapshot(HMaster.java:2320)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.restoreSnapshot(MasterRpcServices.java:1224)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testDeleteSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: 
> org.apache.hadoop.hbase.snapshot.SnapshotCreationException: Rejected taking { 
> ss=snapshotName1 table=testDeleteSnapshots type=FLUSH } because we are 
> already running another snapshot on the same table { ss=snapshotName1 
> table=testDeleteSnapshots type=FLUSH }
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.prepareToTakeSnapshot(SnapshotManager.java:440)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.snapshotEnabledTable(SnapshotManager.java:497)
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:598)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.snapshot(MasterRpcServices.java:1299)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> org.apache.hadoop.hbase.client.TestAsyncSnapshotAdminApi.testListSnapshots
> 
> org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException: Snapshot 
> 'snapshotName2' doesn't exist on the filesystem
>  at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:289)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.deleteSnapshot(MasterRpcServices.java:461)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:257)
> 
> {noformat}
> https://builds.apache.org/job/HBASE-Flaky-Tests/16152/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18067) Support a default converter for data read shell commands

2017-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016280#comment-16016280
 ] 

Josh Elser commented on HBASE-18067:


A quick summary of what this changes:

{noformat}
text = '⻆⻇'
"\342\273\206\342\273\207"
row = '⻄'
"\342\273\204"
family = 'ㄹ'
"\343\204\271"
qualifier = '⻅'
"\342\273\205"

create 'foo', 'f1'
Created table foo
Took 0.2350 seconds
put 'foo', 'r1', 'f1:a', text
Took 0.1200 seconds
scan 'foo'
ROW   COLUMN+CELL
 r1   column=f1:a, 
timestamp=1495134339066, value=\xE2\xBB\x86\xE2\xBB\x87
1 row(s)
Took 0.0160 seconds
scan 'foo', {FORMATTER=>'toString'}
ROW   COLUMN+CELL
 r1   column=f1:a, 
timestamp=1495134339066, value=⻆⻇
1 row(s)
Took 0.0060 seconds
get 'foo', 'r1'
COLUMNCELL
 f1:a timestamp=1495134339066, 
value=\xE2\xBB\x86\xE2\xBB\x87
1 row(s)
Took 0.0050 seconds
get 'foo', 'r1', {FORMATTER=>'toString'}
COLUMNCELL
 f1:a timestamp=1495134339066, value=⻆⻇
1 row(s)
Took 0.0030 seconds

create 'bar', family
Created table bar
Took 0.4210 seconds
put 'bar', row, "#{family}:#{qualifier}", text
Took 0.0080 seconds
scan 'bar'
ROW   COLUMN+CELL
 \xE2\xBB\x84 column=\xE3\x84\xB9:\xE2\xBB\x85, 
timestamp=1495134339575, value=\xE2\xBB\x86\xE2\xBB\x87
1 row(s)
Took 0.0030 seconds
scan 'bar', {FORMATTER=>'toString'}
ROW   COLUMN+CELL
 ⻄  column=ㄹ:⻅, 
timestamp=1495134339575, value=⻆⻇
1 row(s)
Took 0.0050 seconds
get 'bar', row
COLUMNCELL
 \xE3\x84\xB9:\xE2\xBB\x85timestamp=1495134339575, 
value=\xE2\xBB\x86\xE2\xBB\x87
1 row(s)
Took 0.0050 seconds
get 'bar', row, {FORMATTER=>'toString'}
COLUMNCELL
 ㄹ:⻅  timestamp=1495134339575, value=⻆⻇
1 row(s)
Took 0.0050 seconds
{noformat}

If anyone was wondering, I just picked some characters off a utf-8 character 
list :)

> Support a default converter for data read shell commands
> 
>
> Key: HBASE-18067
> URL: https://issues.apache.org/jira/browse/HBASE-18067
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18067.001.patch
>
>
> The {{get}} and {{scan}} shell commands have the ability to specify some 
> complicated syntax on how to encode the bytes read from HBase on a per-column 
> basis. By default, bytes falling outside of a limited range of ASCII are just 
> printed as hex.
> It seems like the intent of these converters was to support conversion of 
> certain numeric columns to a readable string (e.g. 1234).
> However, if non-ascii encoded bytes are stored in the table (e.g. UTF-8 
> encoded bytes), we may want to treat all data we read as UTF-8 instead (e.g. 
> if row+column+value are in Chinese). It would be onerous to require users to 
> enumerate every column they're reading to parse as UTF-8 instead of the 
> limited ascii range. We can provide an option to encode all values retrieved 
> by the command.
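
As a minimal sketch of the underlying difference (plain Java using HBase's own 
{{Bytes}} utility, not the shell patch itself): the default rendering escapes 
bytes outside printable ASCII, while a UTF-8 decode shows the original characters.

{code}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.hbase.util.Bytes;

public class FormatterSketch {
  public static void main(String[] args) {
    byte[] value = "⻆⻇".getBytes(StandardCharsets.UTF_8);
    // Default shell-style rendering: non-printable bytes become \xNN escapes.
    System.out.println(Bytes.toStringBinary(value));                // \xE2\xBB\x86\xE2\xBB\x87
    // "toString"-style rendering: decode the same bytes as UTF-8.
    System.out.println(new String(value, StandardCharsets.UTF_8));  // ⻆⻇
  }
}
{code}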



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18070) Enable memstore replication for meta replica

2017-05-18 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016224#comment-16016224
 ] 

Enis Soztutar commented on HBASE-18070:
---

In replication, updates to system tables are filtered out before being given to 
the replication endpoints. Since "region replica replication" is just a 
replication endpoint, at this point it is not even receiving the edits for 
meta. Changing this should not be that hard; the only thing is that the 
replication source has to be careful to still filter out these edits for 
regular replication endpoints.

> Enable memstore replication for meta replica
> 
>
> Key: HBASE-18070
> URL: https://issues.apache.org/jira/browse/HBASE-18070
> Project: HBase
>  Issue Type: New Feature
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Based on the current doc, memstore replication is not enabled for meta 
> replica. Memstore replication will be a good improvement for meta replica. 
> Create jira to track this effort (feasibility, design, implementation, etc).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016220#comment-16016220
 ] 

Enis Soztutar commented on HBASE-18074:
---

We always acquired locks for every mutation in the batch, no? There is no 
guarantee that the rows are the same in the batch. 

> HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation 
> rather than one per batch
> 
>
> Key: HBASE-18074
> URL: https://issues.apache.org/jira/browse/HBASE-18074
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>
> HBASE-12751 did this:
> {code}
> ...
>  // If we haven't got any rows in our batch, we should block to
>  // get the next one.
> -boolean shouldBlock = numReadyToWrite == 0;
>  RowLock rowLock = null;
>  try {
> -  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
> +  rowLock = getRowLock(mutation.getRow(), true);
>  } catch (IOException ioe) {
>LOG.warn("Failed getting lock in batch put, row="
>  + Bytes.toStringBinary(mutation.getRow()), ioe);
>  }
>  if (rowLock == null) {
>// We failed to grab another lock
> ..
> {code}
> In the old codebase, getRowLock with a true meant do not wait on the row lock. 
> In the HBASE-12751 codebase, the flag is read/write. So we get a read lock on 
> every mutation in the batch. If there are ten mutations in a batch on average, 
> then we'll take 10x the number of locks.
> I'm in here because of an interesting case where increments and batches going 
> into the same row seem to back up and stall trying to get locks. It looks like 
> this, where all handlers are in one or the other of the states below:
> {code}
> "RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
> prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
> [0x7fbb4ca49000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007c6001b38> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
> ...
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon 
> prio=5 os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
> [0x7fbb4d453000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000354976c00> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
> ...
> {code}
> It gets so bad it looks like deadlock but if you give it a while, we move on 
> (I put it down to safe point giving a misleading view on what is happening).
> Let me put back the optimization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14689) Addendum and unit test for HBASE-13471

2017-05-18 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016218#comment-16016218
 ] 

Enis Soztutar commented on HBASE-14689:
---

Thanks Stack. Let me take a look. 

> Addendum and unit test for HBASE-13471
> --
>
> Key: HBASE-14689
> URL: https://issues.apache.org/jira/browse/HBASE-14689
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16, 0.98.17
>
> Attachments: hbase-14689-after-revert.patch, 
> hbase-14689-after-revert.patch, hbase-14689_v1-branch-1.1.patch, 
> hbase-14689_v1-branch-1.1.patch, hbase-14689_v1.patch
>
>
> One of our customers ran into HBASE-13471, which resulted in all the handlers 
> getting blocked and various other issues. While backporting the issue, I 
> noticed that there is one more case where we might go into an infinite loop. 
> In case a row lock cannot be acquired (due to a previous leak, for example, 
> which we have seen in Phoenix before), this will cause a similar infinite loop. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18074) HBASE-12751 dropped optimization in doMiniBatch; we take lock per mutation rather than one per batch

2017-05-18 Thread stack (JIRA)
stack created HBASE-18074:
-

 Summary: HBASE-12751 dropped optimization in doMiniBatch; we take 
lock per mutation rather than one per batch
 Key: HBASE-18074
 URL: https://issues.apache.org/jira/browse/HBASE-18074
 Project: HBase
  Issue Type: Bug
  Components: Performance
Reporter: stack
Assignee: stack


HBASE-12751 did this:

{code}
...
 // If we haven't got any rows in our batch, we should block to
 // get the next one.
-boolean shouldBlock = numReadyToWrite == 0;
 RowLock rowLock = null;
 try {
-  rowLock = getRowLockInternal(mutation.getRow(), shouldBlock);
+  rowLock = getRowLock(mutation.getRow(), true);
 } catch (IOException ioe) {
   LOG.warn("Failed getting lock in batch put, row="
 + Bytes.toStringBinary(mutation.getRow()), ioe);
 }
 if (rowLock == null) {
   // We failed to grab another lock
..
{code}

In the old codebase, getRowLock with a true meant do not wait on the row lock. In 
the HBASE-12751 codebase, the flag is read/write. So we get a read lock on every 
mutation in the batch. If there are ten mutations in a batch on average, then 
we'll take 10x the number of locks.

I'm in here because of an interesting case where increments and batches going into 
the same row seem to back up and stall trying to get locks. It looks like this, 
where all handlers are in one or the other of the states below:

{code}
"RpcServer.FifoWFPBQ.default.handler=190,queue=10,port=60020" #243 daemon 
prio=5 os_prio=0 tid=0x7fbb58691800 nid=0x2d2527 waiting on condition 
[0x7fbb4ca49000]
   java.lang.Thread.State: TIMED_WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x0007c6001b38> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247)
  at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
  at org.apache.hadoop.hbase.regionserver.HRegion.doIncrement(HRegion.java:7453)
...
{code}

{code}
"RpcServer.FifoWFPBQ.default.handler=180,queue=0,port=60020" #233 daemon prio=5 
os_prio=0 tid=0x7fbb586ed800 nid=0x2d251d waiting on condition 
[0x7fbb4d453000]
   java.lang.Thread.State: TIMED_WAITING (parking)
  at sun.misc.Unsafe.park(Native Method)
  - parking to wait for  <0x000354976c00> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
  at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5171)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3017)
...
{code}

It gets so bad it looks like deadlock but if you give it a while, we move on (I 
put it down to safe point giving a misleading view on what is happening).

Let me put back the optimization.
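
For readers less familiar with the primitive showing up in the handler dumps 
above, here is a small standalone sketch (plain JDK code, not HBase internals; 
thread names and timings are made up) of a fair {{ReentrantReadWriteLock}} taken 
in read mode by a "batch" thread, one acquisition per mutation, while an 
"increment" thread waits for the exclusive write lock:

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RowLockSketch {
  // Fair mode, matching the ReentrantReadWriteLock$FairSync seen in the dumps above.
  static final ReentrantReadWriteLock ROW_LOCK = new ReentrantReadWriteLock(true);

  public static void main(String[] args) throws InterruptedException {
    Thread batch = new Thread(() -> {
      for (int i = 0; i < 5; i++) {        // one read-lock acquisition per mutation
        ROW_LOCK.readLock().lock();
        try {
          pause(50);                       // pretend to apply one mutation
        } finally {
          ROW_LOCK.readLock().unlock();
        }
      }
    }, "batch-handler");

    Thread increment = new Thread(() -> {
      try {                                // the increment needs the exclusive lock
        if (ROW_LOCK.writeLock().tryLock(1, TimeUnit.SECONDS)) {
          try {
            pause(50);
          } finally {
            ROW_LOCK.writeLock().unlock();
          }
          System.out.println("increment got the write lock");
        } else {
          System.out.println("increment timed out waiting for the write lock");
        }
      } catch (InterruptedException ignored) {
      }
    }, "increment-handler");

    batch.start();
    increment.start();
    batch.join();
    increment.join();
  }

  private static void pause(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
  }
}
{code}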



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14689) Addendum and unit test for HBASE-13471

2017-05-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016213#comment-16016213
 ] 

stack commented on HBASE-14689:
---

I created HBASE-18074 to take up the mess I dumped here on the end of this 
issue.

> Addendum and unit test for HBASE-13471
> --
>
> Key: HBASE-14689
> URL: https://issues.apache.org/jira/browse/HBASE-14689
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16, 0.98.17
>
> Attachments: hbase-14689-after-revert.patch, 
> hbase-14689-after-revert.patch, hbase-14689_v1-branch-1.1.patch, 
> hbase-14689_v1-branch-1.1.patch, hbase-14689_v1.patch
>
>
> One of our customers ran into HBASE-13471, which resulted in all the handlers 
> getting blocked and various other issues. While backporting the issue, I 
> noticed that there is one more case where we might go into an infinite loop. 
> In case a row lock cannot be acquired (due to a previous leak, for example, 
> which we have seen in Phoenix before), this will cause a similar infinite loop. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18073) ScheduledChore with delay longer than period never runs

2017-05-18 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18073:
---
Priority: Minor  (was: Major)

> ScheduledChore with delay longer than period never runs
> ---
>
> Key: HBASE-18073
> URL: https://issues.apache.org/jira/browse/HBASE-18073
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
>
> (Obligatory: saw this on a fork -- need to confirm if this affects the Apache 
> branches, and which branches if so)
> If a ScheduledChore is configured with a delay that is longer than the 
> period, the Chore never actually gets run; instead, it repeatedly complains 
> that the Chore missed its start time.
> {noformat}
> 2017-05-18 17:17:06,606 TRACE [server.com,16020,1495125783052_ChoreService_1] 
> hbase.ChoreService: onChoreMissedStartTime
> 2017-05-18 17:17:06,612 TRACE [server.com,16020,1495125783052_ChoreService_1] 
> hbase.ChoreService: Chore name: FileSystemUtilizationChore
> 2017-05-18 17:17:06,612 TRACE [server.com,16020,1495125783052_ChoreService_1] 
> hbase.ChoreService: Chore period: 3
> 2017-05-18 17:17:06,612 TRACE [server.com,16020,1495125783052_ChoreService_1] 
> hbase.ChoreService: Chore timeBetweenRuns: 6
> 2017-05-18 17:17:06,612 INFO  [server.com,16020,1495125783052_ChoreService_1] 
> quotas.FileSystemUtilizationChore: Chore: FileSystemUtilizationChore missed 
> its start time
> {noformat}
> It seems like this might be an edge-case for the first invocation of the 
> chore. Need to read the code closer.
> The workaround is to just ensure that the delay is always less than the 
> period.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18073) ScheduledChore with delay longer than period never runs

2017-05-18 Thread Josh Elser (JIRA)
Josh Elser created HBASE-18073:
--

 Summary: ScheduledChore with delay longer than period never runs
 Key: HBASE-18073
 URL: https://issues.apache.org/jira/browse/HBASE-18073
 Project: HBase
  Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser


(Obligatory: saw this on a fork -- need to confirm if this affects the Apache 
branches, and which branches if so)

If a ScheduledChore is configured with a delay that is longer than the period, 
the Chore never actually gets run; instead, it repeatedly complains that the 
Chore missed its start time.

{noformat}
2017-05-18 17:17:06,606 TRACE [server.com,16020,1495125783052_ChoreService_1] 
hbase.ChoreService: onChoreMissedStartTime
2017-05-18 17:17:06,612 TRACE [server.com,16020,1495125783052_ChoreService_1] 
hbase.ChoreService: Chore name: FileSystemUtilizationChore
2017-05-18 17:17:06,612 TRACE [server.com,16020,1495125783052_ChoreService_1] 
hbase.ChoreService: Chore period: 3
2017-05-18 17:17:06,612 TRACE [server.com,16020,1495125783052_ChoreService_1] 
hbase.ChoreService: Chore timeBetweenRuns: 6
2017-05-18 17:17:06,612 INFO  [server.com,16020,1495125783052_ChoreService_1] 
quotas.FileSystemUtilizationChore: Chore: FileSystemUtilizationChore missed its 
start time
{noformat}

It seems like this might be an edge-case for the first invocation of the chore. 
Need to read the code closer.

The workaround is to just ensure that the delay is always less than the period.
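
A minimal Java sketch of the suspected first-invocation edge case (illustration 
only, not the actual ScheduledChore code; the variable names and values are 
assumptions based on the log above):

{code}
public class ChoreDelaySketch {
  public static void main(String[] args) {
    // On the very first invocation nothing has run yet, so the measured
    // "time between runs" is effectively the configured initial delay. If that
    // delay exceeds the period, every check reports a missed start time.
    long periodMs = 30_000L;                  // assumed chore period
    long initialDelayMs = 60_000L;            // assumed delay, longer than the period
    long timeBetweenRunsMs = initialDelayMs;  // first run: nothing has completed yet
    if (timeBetweenRunsMs > periodMs) {
      System.out.println("Chore: FileSystemUtilizationChore missed its start time");
    }
    // Workaround from above: keep initialDelayMs strictly below periodMs.
  }
}
{code}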



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18067) Support a default converter for data read shell commands

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016172#comment-16016172
 ] 

Hadoop QA commented on HBASE-18067:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 1s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 1s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 3s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
5s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868797/HBASE-18067.001.patch 
|
| JIRA Issue | HBASE-18067 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux a780c56ed6c6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6dc4190c |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6832/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6832/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Support a default converter for data read shell commands
> 
>
> Key: HBASE-18067
> URL: https://issues.apache.org/jira/browse/HBASE-18067
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18067.001.patch
>
>
> The {{get}} and {{scan}} shell commands have the ability to specify some 
> complicated syntax on how to encode the bytes read from HBase on a per-column 
> basis. By default, bytes falling outside of a limited range of ASCII are just 
> printed as hex.
> It seems like the intent of these converters was to support conversion of 
> certain numeric columns into a readable string (e.g. 1234).
> However, if non-ascii encoded bytes are stored in the table (e.g. UTF-8 
> encoded bytes), we may want to treat all data we read as UTF-8 instead (e.g. 
> if row+column+value are in Chinese). It would be onerous to require users to 
> enumerate every column they're reading to parse as UTF-8 instead of the 
> limited ascii range. We can provide an option to encode all values retrieved 
> by the command.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

[jira] [Created] (HBASE-18072) Malformed Cell from client causes Regionserver abort on flush

2017-05-18 Thread Gary Helmling (JIRA)
Gary Helmling created HBASE-18072:
-

 Summary: Malformed Cell from client causes Regionserver abort on 
flush
 Key: HBASE-18072
 URL: https://issues.apache.org/jira/browse/HBASE-18072
 Project: HBase
  Issue Type: Bug
  Components: regionserver, rpc
Affects Versions: 1.3.0
Reporter: Gary Helmling
Assignee: Gary Helmling
Priority: Critical


When a client writes a mutation with a Cell with a corrupted value length 
field, it is possible for the corrupt cell to trigger an exception on memstore 
flush, which will trigger regionserver aborts until the region is manually 
recovered.

This boils down to a lack of validation on the client-submitted byte[] backing 
the cell.

Consider the following sequence:

1. Client creates a new Put with a cell with value of byte[16]
2. When the backing KeyValue for the Put is created, we serialize 16 for the 
value length field in the backing array
3. Client calls Table.put()
4. RpcClientImpl calls KeyValueEncoder.encode() to serialize the Cell to the 
OutputStream
5. Memory corruption in the backing array changes the serialized contents of 
the value length field from 16 to 48
6. Regionserver handling the put uses KeyValueDecoder.decode() to create a 
KeyValue with the byte[] read directly off the InputStream.  The overall length 
of the array is correct, but the integer value serialized at the value length 
offset has been corrupted from the original value of 16 to 48.
7. The corrupt KeyValue is appended to the WAL and added to the memstore
8. After some time, the memstore flushes.  As HFileWriter is writing out the 
corrupted cell, it reads the serialized int from the value length position in 
the cell's byte[] to determine the number of bytes to write for the value.  
Because value offset + 48 is greater than the length of the cell's byte[], we 
hit an IndexOutOfBoundsException:
{noformat}
java.lang.IndexOutOfBoundsException
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:151)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.hbase.io.hfile.NoOpDataBlockEncoder.encode(NoOpDataBlockEncoder.java:56)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.write(HFileBlock.java:954)
at 
org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:284)
at 
org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
at 
org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1041)
at 
org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:138)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
at 
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:937)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2413)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2456)
{noformat}
9. Regionserver aborts due to the failed flush
10. The regionserver WAL is split into recovered.edits files, one of these 
containing the same corrupted cell
11. A new regionserver is assigned the region with the corrupted write
12. The new regionserver replays the recovered.edits entries into memstore and 
then tries to flush the memstore to an HFile
13. The flush triggers the same IndexOutOfBoundsException, causing us to go 
back to step #8 and loop on repeat until manual intervention is taken

The corrupted cell basically becomes a poison pill that aborts regionservers 
one at a time as the region with the problem edit is passed around.  This also 
means that a malicious client could easily construct requests allowing a denial 
of service attack against regionservers hosting any tables that the client has 
write access to.

At bare minimum, I think we need to do a sanity check on all the lengths for 
Cells read off the CellScanner for incoming requests.  This would allow us to 
reject corrupt cells before we append them to the WAL and acknowledge the 
request, which is the point after which we cannot recover.  This would only 
detect corruption of the length fields, which is what puts us in a bad state.
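
A minimal sketch of what such a length sanity check could look like (a 
hypothetical helper, not an existing HBase API; it assumes the standard 
serialized KeyValue layout of a 4-byte key length, a 4-byte value length, then 
the key and value bytes, with any tags following):

{code}
import org.apache.hadoop.hbase.util.Bytes;

public final class CellLengthSanityCheck {
  // Hypothetical check: reject cells whose declared lengths cannot fit in the buffer.
  static void check(byte[] buf, int offset, int available) {
    if (available < 2 * Bytes.SIZEOF_INT) {
      throw new IllegalArgumentException("Truncated cell: only " + available + " bytes");
    }
    int keyLen = Bytes.toInt(buf, offset);
    int valLen = Bytes.toInt(buf, offset + Bytes.SIZEOF_INT);
    long declared = 2L * Bytes.SIZEOF_INT + (long) keyLen + (long) valLen;
    if (keyLen < 0 || valLen < 0 || declared > available) {
      // Fail the request here, before the cell reaches the WAL and memstore.
      throw new IllegalArgumentException("Invalid cell lengths: keyLen=" + keyLen
          + ", valLen=" + valLen + ", available=" + available);
    }
  }
}
{code}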

Whether or not Cells should carry some checksum generated at the time the Cell 
is created, which could then be validated on the server side, is a separate 
question.  That would allow detection of corruption in other parts of the 
backing cell byte[], such as within the key fields or the value field.  But the 
compute overhead of this may be too heavyweight to be practical.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18071) Fix flaky test TestStochasticLoadBalancer#testBalanceCluster

2017-05-18 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016157#comment-16016157
 ] 

Umesh Agashe commented on HBASE-18071:
--

I am testing the patch. Will submit it soon.

> Fix flaky test TestStochasticLoadBalancer#testBalanceCluster
> 
>
> Key: HBASE-18071
> URL: https://issues.apache.org/jira/browse/HBASE-18071
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>
> Fix flaky test TestStochasticLoadBalancer#testBalanceCluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (HBASE-18071) Fix flaky test TestStochasticLoadBalancer#testBalanceCluster

2017-05-18 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-18071 started by Umesh Agashe.

> Fix flaky test TestStochasticLoadBalancer#testBalanceCluster
> 
>
> Key: HBASE-18071
> URL: https://issues.apache.org/jira/browse/HBASE-18071
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>
> Fix flaky test TestStochasticLoadBalancer#testBalanceCluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18071) Fix flaky test TestStochasticLoadBalancer#testBalanceCluster

2017-05-18 Thread Umesh Agashe (JIRA)
Umesh Agashe created HBASE-18071:


 Summary: Fix flaky test 
TestStochasticLoadBalancer#testBalanceCluster
 Key: HBASE-18071
 URL: https://issues.apache.org/jira/browse/HBASE-18071
 Project: HBase
  Issue Type: Bug
  Components: Balancer
Reporter: Umesh Agashe
Assignee: Umesh Agashe


Fix flaky test TestStochasticLoadBalancer#testBalanceCluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18070) Enable memstore replication for meta replica

2017-05-18 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-18070:


 Summary: Enable memstore replication for meta replica
 Key: HBASE-18070
 URL: https://issues.apache.org/jira/browse/HBASE-18070
 Project: HBase
  Issue Type: New Feature
Reporter: huaxiang sun
Assignee: huaxiang sun


Based on the current doc, memstore replication is not enabled for the meta 
replica. Memstore replication would be a good improvement for the meta replica. 
Creating this JIRA to track the effort (feasibility, design, implementation, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18005) read replica: handle the case that region server hosting both primary replica and meta region is down

2017-05-18 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016127#comment-16016127
 ] 

huaxiang sun commented on HBASE-18005:
--

Ping for review

> read replica: handle the case that region server hosting both primary replica 
> and meta region is down
> -
>
> Key: HBASE-18005
> URL: https://issues.apache.org/jira/browse/HBASE-18005
> Project: HBase
>  Issue Type: Bug
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-18005-master-001.patch, 
> HBASE-18005-master-002.patch, HBASE-18005-master-003.patch
>
>
> Identified one corner case in testing: when the region server hosting both the 
> primary replica and the meta region is down and the client tries to reload the 
> primary replica location from the meta table, it is supposed to clean up only 
> the cached location for the specific replicaId, but it clears the caches for 
> all replicas. Please see
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java#L813
> Since it takes some time for regions to be reassigned (including the meta 
> region), the following may throw an exception:
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L173
> This exception needs to be caught, and the client needs to use the cached 
> locations (in this case, the primary replica's location is not available). If 
> there are cached locations for other replicas, it can still go ahead and read 
> stale values from the secondary replicas.
> With meta replicas, it still helps not to clean up the caches for all replicas, 
> as the info from the primary meta replica is up-to-date.
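
A schematic sketch of the distinction described above (purely illustrative data 
structures, not the actual ConnectionImplementation code):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class ReplicaLocationCacheSketch {
  // Cached location per replicaId for a single row; illustration only.
  private final Map<Integer, String> locationsByReplicaId = new ConcurrentHashMap<>();

  // What the description asks for: drop only the failed replica's location.
  void clearReplica(int replicaId) {
    locationsByReplicaId.remove(replicaId);
  }

  // What the linked code effectively does today: drop every replica's location,
  // losing the still-valid secondary locations as well.
  void clearAllReplicas() {
    locationsByReplicaId.clear();
  }
}
{code}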



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18035) Meta replica does not give any primaryOperationTimeout to primary meta region

2017-05-18 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18035:
-
Attachment: HBASE-18035-master-v002.patch

Uploaded v2 patch, which addressed Ted's comments and simplified the unittest.

> Meta replica does not give any primaryOperationTimeout to primary meta region
> -
>
> Key: HBASE-18035
> URL: https://issues.apache.org/jira/browse/HBASE-18035
> Project: HBase
>  Issue Type: Bug
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Attachments: 18035-unittest.patch, HBASE-18035-master-v001.patch, 
> HBASE-18035-master-v001.patch, HBASE-18035-master-v002.patch
>
>
> I was working on my unittest and it failed with TableNotFoundException. I 
> debugged a bit and found out that for a meta scan, it does not give any 
> primaryOperationTimeout to the primary meta region. This is an issue because the 
> meta replica may contain stale data, and it is possible that the meta replica 
> will respond before the primary does.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java#L823



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18067) Support a default converter for data read shell commands

2017-05-18 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18067:
---
Status: Patch Available  (was: Open)

> Support a default converter for data read shell commands
> 
>
> Key: HBASE-18067
> URL: https://issues.apache.org/jira/browse/HBASE-18067
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18067.001.patch
>
>
> The {{get}} and {{scan}} shell commands have the ability to specify some 
> complicated syntax on how to encode the bytes read from HBase on a per-column 
> basis. By default, bytes falling outside of a limited range of ASCII are just 
> printed as hex.
> It seems like the intent of these converters was to support conversion of 
> certain numeric columns into a readable string (e.g. 1234).
> However, if non-ascii encoded bytes are stored in the table (e.g. UTF-8 
> encoded bytes), we may want to treat all data we read as UTF-8 instead (e.g. 
> if row+column+value are in Chinese). It would be onerous to require users to 
> enumerate every column they're reading to parse as UTF-8 instead of the 
> limited ascii range. We can provide an option to encode all values retrieved 
> by the command.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18067) Support a default converter for data read shell commands

2017-05-18 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18067:
---
Attachment: HBASE-18067.001.patch

.001 Adds a {{FORMATTER}} and {{FORMATTER_CLASS}} option to {{scan}} and 
{{get}}. This will provide the default encoding for bytes -> shell printout. 
Users can still override this default with "per-column" formatter configuration.

The default output is still the limited ascii with hex-encoding (e.g. 
{{Bytes.toStringBinary}}).
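
For illustration, a small Java snippet showing the difference between the 
default hex-escaped rendering and a UTF-8 formatter (the value bytes here are 
just an example):

{code}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.hbase.util.Bytes;

public class FormatterIllustration {
  public static void main(String[] args) {
    byte[] value = "中文".getBytes(StandardCharsets.UTF_8);
    // Default rendering: non-ASCII bytes are escaped as hex, e.g. \xE4\xB8\xAD...
    System.out.println(Bytes.toStringBinary(value));
    // A UTF-8 formatter applied to every column returns the readable string instead.
    System.out.println(new String(value, StandardCharsets.UTF_8));
  }
}
{code}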

> Support a default converter for data read shell commands
> 
>
> Key: HBASE-18067
> URL: https://issues.apache.org/jira/browse/HBASE-18067
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18067.001.patch
>
>
> The {{get}} and {{scan}} shell commands have the ability to specify some 
> complicated syntax on how to encode the bytes read from HBase on a per-column 
> basis. By default, bytes falling outside of a limited range of ASCII are just 
> printed as hex.
> It seems like the intent of these converters was to support conversion of 
> certain numeric columns into a readable string (e.g. 1234).
> However, if non-ascii encoded bytes are stored in the table (e.g. UTF-8 
> encoded bytes), we may want to treat all data we read as UTF-8 instead (e.g. 
> if row+column+value are in Chinese). It would be onerous to require users to 
> enumerate every column they're reading to parse as UTF-8 instead of the 
> limited ascii range. We can provide an option to encode all values retrieved 
> by the command.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18010) Connect CellChunkMap to be used for flattening in CompactingMemStore

2017-05-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016098#comment-16016098
 ] 

ramkrishna.s.vasudevan commented on HBASE-18010:


bq. By the way, why is an integer added there, near getSerializedSize?
This API is mainly used in the RPC layer serialization, so every cell's length 
is written as an integer header before the actual cell.
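
A minimal sketch of the framing being described (illustration only, not the 
actual codec code):

{code}
import java.io.DataOutputStream;
import java.io.IOException;

public final class LengthPrefixedCellWriter {
  // Illustration: the extra Bytes.SIZEOF_INT accounts for a 4-byte length header
  // written before each serialized cell on the wire.
  static void write(DataOutputStream out, byte[] serializedCell) throws IOException {
    out.writeInt(serializedCell.length); // the 4-byte length header
    out.write(serializedCell);           // the serialized cell bytes themselves
  }
}
{code}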

> Connect CellChunkMap to be used for flattening in CompactingMemStore
> 
>
> Key: HBASE-18010
> URL: https://issues.apache.org/jira/browse/HBASE-18010
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>
> The CellChunkMap helps to create a new type of ImmutableSegment, where the 
> index (CellSet's delegatee) is going to be CellChunkMap. No big cells or 
> upserted cells are going to be supported here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18069) Fix flaky test TestReplicationAdminWithClusters#testDisableAndEnableReplication

2017-05-18 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18069:
---
Labels: beginner  (was: )

> Fix flaky test 
> TestReplicationAdminWithClusters#testDisableAndEnableReplication
> ---
>
> Key: HBASE-18069
> URL: https://issues.apache.org/jira/browse/HBASE-18069
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Chia-Ping Tsai
>Priority: Trivial
>  Labels: beginner
> Fix For: 2.0.0, 1.4.0
>
>
> If we run testDisableAndEnableReplication, we will get the following error 
> message.
> {code}
> testDisableAndEnableReplication(org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters)
>   Time elapsed: 2.046 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters.testDisableAndEnableReplication(TestReplicationAdminWithClusters.java:160)
> {code}
> The critical code is shown below.
> {code}
> admin1.disableTableReplication(tableName);
> HTableDescriptor table = admin1.getTableDescriptor(tableName);
> for (HColumnDescriptor fam : table.getColumnFamilies()) {
>   assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
> }
> table = admin2.getTableDescriptor(tableName);
> for (HColumnDescriptor fam : table.getColumnFamilies()) {
>   assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
> }
> {code}
> Is the HTD obtained from admin2 affected by admin1? I don't think so. We 
> should remove the related assertion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18049) It is not necessary to re-open the region when MOB files cannot be found

2017-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015969#comment-16015969
 ] 

Hudson commented on HBASE-18049:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3033 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3033/])
HBASE-18049 It is not necessary to re-open the region when MOB files 
(jingchengdu: rev 6dc4190c07a6e3039f6c32bdc9a8aeb5483ea192)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HMobStore.java


> It is not necessary to re-open the region when MOB files cannot be found
> 
>
> Key: HBASE-18049
> URL: https://issues.apache.org/jira/browse/HBASE-18049
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Fix For: 2.0.0
>
> Attachments: HBASE-18049.patch, HBASE-18049-V2.patch, 
> HBASE-18049-V3.patch
>
>
> In HBASE-17712, we try to re-open the region when store files cannot be 
> found. This is useful for store files in a region, but it is not necessary when 
> the MOB files cannot be found, because the store files in a region only 
> contain references to the MOB files, and re-opening the region doesn't help 
> recover the lost MOB files.
> In this JIRA, we will directly throw DNRIOE only when the MOB files are not 
> found in {{MobStoreScanner}} and {{ReversedMobStoreScanner}}. Other logic 
> stays the same.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions

2017-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015968#comment-16015968
 ] 

Hudson commented on HBASE-11013:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3033 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3033/])
HBASE-11013: Clone Snapshots on Secure Cluster Should provide option to (zghao: 
rev 37dd8ff722fa762d9ef86488dea90e5470672e67)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CloneSnapshotProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java
* (edit) hbase-shell/src/main/ruby/hbase_constants.rb
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotReferenceUtil.java
* (edit) hbase-protocol-shaded/src/main/protobuf/HBase.proto
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotWithAcl.java
* (edit) hbase-server/src/main/resources/hbase-webapps/master/snapshotsStats.jsp
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotDescriptionUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/MasterSnapshotVerifier.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/SnapshotProtos.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* (edit) hbase-protocol-shaded/src/main/protobuf/Master.proto
* (edit) hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestWithDisabledAuthorization.java
* (add) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/ShadedAccessControlUtil.java
* (edit) hbase-shell/src/main/ruby/shell/commands/clone_snapshot.rb
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/AdminProtos.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* (edit) hbase-protocol-shaded/src/main/protobuf/Snapshot.proto
* (add) hbase-protocol-shaded/src/main/protobuf/AccessControl.proto
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/EnabledTableSnapshotHandler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/SnapshotSentinel.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/backup/util/RestoreTool.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/RegionServerSnapshotManager.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestRestoreSnapshotProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
* (edit) hbase-shell/src/main/ruby/shell/commands/restore_snapshot.rb
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/HBaseProtos.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotManifest.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotClientRetries.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/FlushSnapshotSubprocedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreSnapshotHelper.java
* (edit) hbase-shell/src/main/ruby/hbase/admin.rb
* (edit) 

[jira] [Updated] (HBASE-18069) Fix flaky test TestReplicationAdminWithClusters#testDisableAndEnableReplication

2017-05-18 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18069:
---
Component/s: test

> Fix flaky test 
> TestReplicationAdminWithClusters#testDisableAndEnableReplication
> ---
>
> Key: HBASE-18069
> URL: https://issues.apache.org/jira/browse/HBASE-18069
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
>
> If we run testDisableAndEnableReplication, we will get the following error 
> message.
> {code}
> testDisableAndEnableReplication(org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters)
>   Time elapsed: 2.046 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters.testDisableAndEnableReplication(TestReplicationAdminWithClusters.java:160)
> {code}
> The critical code is shown below.
> {code}
> admin1.disableTableReplication(tableName);
> HTableDescriptor table = admin1.getTableDescriptor(tableName);
> for (HColumnDescriptor fam : table.getColumnFamilies()) {
>   assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
> }
> table = admin2.getTableDescriptor(tableName);
> for (HColumnDescriptor fam : table.getColumnFamilies()) {
>   assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
> }
> {code}
> Is the HTD obtained from admin2 affected by admin1? I don't think so. We 
> should remove the related assertion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18069) Fix flaky test TestReplicationAdminWithClusters#testDisableAndEnableReplication

2017-05-18 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-18069:
--

 Summary: Fix flaky test 
TestReplicationAdminWithClusters#testDisableAndEnableReplication
 Key: HBASE-18069
 URL: https://issues.apache.org/jira/browse/HBASE-18069
 Project: HBase
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Priority: Trivial
 Fix For: 2.0.0, 1.4.0


If we run testDisableAndEnableReplication, we will get the following error 
message.
{code}
testDisableAndEnableReplication(org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters)
  Time elapsed: 2.046 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.hadoop.hbase.client.replication.TestReplicationAdminWithClusters.testDisableAndEnableReplication(TestReplicationAdminWithClusters.java:160)
{code}
The critical code is shown below.
{code}
admin1.disableTableReplication(tableName);
HTableDescriptor table = admin1.getTableDescriptor(tableName);
for (HColumnDescriptor fam : table.getColumnFamilies()) {
  assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
}
table = admin2.getTableDescriptor(tableName);
for (HColumnDescriptor fam : table.getColumnFamilies()) {
  assertEquals(fam.getScope(), HConstants.REPLICATION_SCOPE_LOCAL);
}
{code}
Is the HTD obtained from admin2 affected by admin1? I don't think so. We should 
remove the related assertion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18010) Connect CellChunkMap to be used for flattening in CompactingMemStore

2017-05-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015948#comment-16015948
 ] 

Anoop Sam John commented on HBASE-18010:


{code}
protected long heapSizeChange(Cell cell, boolean succ) {
  if (succ) {
    return ClassSize.align(
        ClassSize.CONCURRENT_SKIPLISTMAP_ENTRY + CellUtil.estimatedHeapSizeOf(cell));
  }
  return 0;
}
{code}
{code}
public static long estimatedHeapSizeOf(final Cell cell) {
if (cell instanceof HeapSize) {
  return ((HeapSize) cell).heapSize();
}
// TODO: Add sizing of references that hold the row, family, etc., arrays.
return estimatedSerializedSizeOf(cell);
  }
{code}
Every ExtendedCell is a HeapSize impl, so we will end up calling KV.heapSize() 
only.

> Connect CellChunkMap to be used for flattening in CompactingMemStore
> 
>
> Key: HBASE-18010
> URL: https://issues.apache.org/jira/browse/HBASE-18010
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>
> The CellChunkMap helps to create a new type of ImmutableSegment, where the 
> index (CellSet's delegatee) is going to be CellChunkMap. No big cells or 
> upserted cells are going to be supported here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-05-18 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015878#comment-16015878
 ] 

Dave Latham commented on HBASE-14247:
-

Does the patch address the log cleaner performance concern?

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory of oldWALs. In big clusters, because of a long WAL TTL or disabled 
> replications, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which will crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome. Thanks
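
A layout sketch of the proposed separation (the server name and file name below 
are made-up examples):

{code}
import org.apache.hadoop.fs.Path;

public class OldWalLayoutSketch {
  public static void main(String[] args) {
    Path oldWals = new Path("/hbase/oldWALs");
    // Proposed: one sub-directory per region server, so no single directory
    // accumulates every archived WAL in the cluster.
    String serverName = "rs1.example.com,16020,1495125783052";             // made-up example
    Path perServer = new Path(oldWals, serverName);                        // /hbase/oldWALs/<servername>
    Path archivedWal = new Path(perServer, serverName + ".1495125800000"); // made-up WAL file name
    System.out.println(archivedWal);
  }
}
{code}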



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18058) Zookeeper retry sleep time should have a up limit

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015799#comment-16015799
 ] 

Hadoop QA commented on HBASE-18058:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 55s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 104m 21s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
4s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 3s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868737/HBASE-18058.v2.patch |
| JIRA Issue | HBASE-18058 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  xml  |
| uname | Linux 54d3d24ce69e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6dc4190c |
| Default Java | 

[jira] [Commented] (HBASE-15616) Allow null qualifier for all table operations

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015768#comment-16015768
 ] 

Hadoop QA commented on HBASE-15616:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 122m 13s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
41s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 176m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868730/HBASE-15616-v5.patch |
| JIRA Issue | HBASE-15616 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 18c56b38fca8 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 37dd8ff |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6830/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6830/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Allow null qualifier for all table operations
> 

[jira] [Commented] (HBASE-15616) Allow null qualifier for all table operations

2017-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015733#comment-16015733
 ] 

Hadoop QA commented on HBASE-15616:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 0s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 134m 13s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
37s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 196m 40s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestIncrementsFromClientSide |
|   | hadoop.hbase.client.TestIncrementFromClientSideWithCoprocessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868718/HBASE-15616-v4.patch |
| JIRA Issue | HBASE-15616 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 3c793c8113d0 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 37dd8ff |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6829/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  

[jira] [Commented] (HBASE-18010) Connect CellChunkMap to be used for flattening in CompactingMemStore

2017-05-18 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015724#comment-16015724
 ] 

Anastasia Braginsky commented on HBASE-18010:
-

Thank you [~anoop.hbase]! I get what you are saying, but I still need more 
guidance to get to KeyValue.heapSize(). I am looking at 
estimatedSerializedSizeOf:

{code}
public static int estimatedSerializedSizeOf(final Cell cell) {
  if (cell instanceof ExtendedCell) {
    return ((ExtendedCell) cell).getSerializedSize(true) + Bytes.SIZEOF_INT;
  }

  return getSumOfCellElementLengths(cell) +
    // Use the KeyValue's infrastructure size presuming that another implementation
    // would have same basic cost.
    KeyValue.ROW_LENGTH_SIZE + KeyValue.FAMILY_LENGTH_SIZE +
    // Serialization is probably preceded by a length (it is in the KeyValueCodec at least).
    Bytes.SIZEOF_INT;
}
{code}

I assume we are talking about ExtendedCell, so I am looking at getSerializedSize 
(of KeyValue). By the way, why is an integer added there, near getSerializedSize?

{code}
public int getSerializedSize(boolean withTags) {
  if (withTags) {
    return this.length;
  }
  return this.getKeyLength() + this.getValueLength() + KEYVALUE_INFRASTRUCTURE_SIZE;
}
{code}

So I do not see KeyValue.heapSize() yet...

> Connect CellChunkMap to be used for flattening in CompactingMemStore
> 
>
> Key: HBASE-18010
> URL: https://issues.apache.org/jira/browse/HBASE-18010
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>
> The CellChunkMap helps to create a new type of ImmutableSegment, where the 
> index (CellSet's delegatee) is going to be CellChunkMap. No big cells or 
> upserted cells are going to be supported here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions

2017-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015722#comment-16015722
 ] 

Hudson commented on HBASE-11013:


FAILURE: Integrated in Jenkins build HBase-1.4 #737 (See 
[https://builds.apache.org/job/HBase-1.4/737/])
HBASE-11013 Clone Snapshots on Secure Cluster Should provide option to (zghao: 
rev f9dc4cad63b1ffcd1f9050b9b8e8d89f44ecd44a)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) hbase-protocol/src/main/protobuf/Master.proto
* (edit) 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/SnapshotProtos.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotCreationException.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestWithDisabledAuthorization.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreSnapshotHelper.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/MasterSnapshotVerifier.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java
* (edit) hbase-shell/src/main/ruby/hbase.rb
* (edit) 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/DisabledTableSnapshotHandler.java
* (edit) hbase-shell/src/main/ruby/hbase/admin.rb
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/RestoreSnapshotHandler.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreFlushSnapshotFromClient.java
* (edit) hbase-shell/src/main/ruby/shell/commands/restore_snapshot.rb
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/CloneSnapshotHandler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotExistsException.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java
* (edit) hbase-protocol/src/main/protobuf/HBase.proto
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotException.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/CorruptedSnapshotException.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotReferenceUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotClientRetries.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/SnapshotSentinel.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java
* (edit) 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java
* (add) 
