[jira] [Created] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Samir Ahmic (JIRA)
Samir Ahmic created HBASE-15034:
---

 Summary: IntegrationTestDDLMasterFailover does not clean created 
namespaces 
 Key: HBASE-15034
 URL: https://issues.apache.org/jira/browse/HBASE-15034
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
Priority: Minor


I was running this test recently and noticed that after every run there are new 
namespaces created by the test that are not cleaned up when the test finishes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14511) StoreFile.Writer Meta Plugin

2015-12-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069481#comment-15069481
 ] 

Enis Soztutar commented on HBASE-14511:
---

I was trying to write a StoreFile.Plugin for Phoenix to keep the column stats 
for primary key columns. I noticed that we are not passing the HRegion or 
region info to the store file plugin with the current patch, so there is 
currently no way for the plugin to know which table it is operating on. 
Thinking more about it, I think it makes more sense to mold this into the 
coprocessor framework as the easiest way forward. We need a way to pass 
context and environment, and also a way to instantiate these plugins per table 
(because a Phoenix storefile plugin configured from hbase-site.xml should not 
operate on non-Phoenix tables). 
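
To make the gap concrete, here is a purely hypothetical sketch (not the patch's 
API and not an existing HBase interface) of the kind of per-table context such a 
plugin would need:

{code}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;

// Hypothetical sketch only -- illustrates the context discussed above.
public interface StoreFileMetaPlugin {
  // Table and region context lets a Phoenix plugin skip non-Phoenix tables.
  void init(TableName table, HRegionInfo regionInfo, Configuration conf);

  // Invoked per cell written, so the plugin can accumulate stats (e.g. PK column stats).
  void cellWritten(Cell cell);

  // Entries returned here would be written into the store file's meta section on close.
  Map<byte[], byte[]> getMetaEntries();
}
{code}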

> StoreFile.Writer Meta Plugin
> 
>
> Key: HBASE-14511
> URL: https://issues.apache.org/jira/browse/HBASE-14511
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14511-v3.patch, HBASE-14511-v4.patch, 
> HBASE-14511.v1.patch, HBASE-14511.v2.patch
>
>
> During my work on new compaction policies (HBASE-14468, HBASE-14477) I had 
> to modify the existing code of StoreFile.Writer to add additional meta-info 
> required by these new policies. I think that it should be done by means of a 
> new Plugin framework, because this seems to be a general capability/feature. 
> As a future enhancement this can become a part of a more general 
> StoreFileWriter/Reader plugin architecture. But I need only the Meta section 
> of a store file.
> This could be used, for example, to collect rowkey distribution information 
> during hfile creation. This info can be used later to find the optimal region 
> split key or to create an optimal set of sub-regions for M/R jobs or other 
> jobs which can operate on a sub-region level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-15034:

Attachment: HBASE-15035.patch

Here is the patch.

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test that are not cleaned up when the test finishes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15018:
--
Component/s: IPC/RPC
 Client

> Inconsistent way of handling TimeoutException in the rpc client implementations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is an rpc timeout when using RpcClientImpl, then we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; there we don't wrap it but throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.
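
A minimal illustrative sketch (the doCall and handleTimeout names are 
hypothetical, not HBase APIs) of how a caller currently has to normalize the two 
behaviors described above:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.ipc.CallTimeoutException;

// Sketch only: normalizing the two timeout shapes a caller can see today.
public class TimeoutHandlingSketch {
  void callWithTimeoutHandling() throws IOException {
    try {
      doCall(); // a blocking call routed through either RPC client implementation
    } catch (CallTimeoutException cte) {
      handleTimeout(cte); // AsyncRpcClient throws CallTimeoutException directly
    } catch (IOException ioe) {
      if (ioe.getCause() instanceof CallTimeoutException) {
        // RpcClientImpl wraps the timeout in an IOException
        handleTimeout((CallTimeoutException) ioe.getCause());
      } else {
        throw ioe;
      }
    }
  }

  void doCall() throws IOException { /* hypothetical RPC */ }

  void handleTimeout(CallTimeoutException e) { /* hypothetical handling */ }
}
{code}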



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations

2015-12-23 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069819#comment-15069819
 ] 

Ashish Singhi commented on HBASE-15018:
---

Thank you.

> Inconsistent way of handling TimeoutException in the rpc client implementations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is an rpc timeout when using RpcClientImpl, then we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; there we don't wrap it but throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15018:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 1.2.1)
   1.3.0
   1.2.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.1+. Thanks for working on this, [~ashish singhi].

> Inconsistent way of handling TimeoutException in the rpc client implementations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is an rpc timeout when using RpcClientImpl, then we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; there we don't wrap it but throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.

[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069827#comment-15069827
 ] 

Ted Yu commented on HBASE-6721:
---

+1

Francis:
Please update Release Note.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069795#comment-15069795
 ] 

Hadoop QA commented on HBASE-15034:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779241/HBASE-15035.patch
  against master branch at commit 1af98f255132ef6716a1f6ba1d8d71a36ea38840.
  ATTACHMENT ID: 12779241

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16992//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16992//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16992//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16992//console

This message is automatically generated.

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test that are not cleaned up when the test finishes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15031:
--
Release Note: 
Increments can be 10x slower (or more) when there is high concurrency since 
HBase 1.0.0 (HBASE-8763). This feature adds back a fast increment, but the speed 
is achieved by relaxing row-level consistency for Increments (only). The default 
remains the old, slow, consistent Increment behavior.

Set "hbase.increment.fast.but.narrow.consistency" to true in hbase-site.xml to 
enable 'fast' increments and then rolling restart your cluster. This is a 
setting the server side needs to read.

Intermixing fast increments with other Mutations will give indeterminate 
results; e.g. a Put and an Increment against the same Cell will not always give 
you the result you expect. Fast Increments are consistent unto themselves. A 
Get with {@link IsolationLevel#READ_UNCOMMITTED} will return the latest 
increment value, or an Increment of amount zero will do the same (beware doing 
a Get on a cell that has not been incremented yet -- this will return no 
results).

The difference between fastAndNarrowConsistencyIncrement and 
slowButConsistentIncrement is that the former holds the row lock until the WAL 
sync completes; this allows us to reason that there are no other writers afoot 
when we read the current increment value. In this case we do not need to wait 
on mvcc reads to catch up to writes before we proceed with the read of the 
current Increment value, the root of the slowdown seen in HBASE-14460. The 
fast-path also does not wait on mvcc to complete before returning to the client 
(but the write has been synced and put into memstore before we return). 
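
As a usage sketch (assuming an existing client Connection named connection and 
an illustrative table/column, with the setting above already enabled on the 
servers and a rolling restart done):

{code}
Table table = connection.getTable(TableName.valueOf("counters"));

Increment incr = new Increment(Bytes.toBytes("row1"));
incr.addColumn(Bytes.toBytes("f"), Bytes.toBytes("c"), 1L);
table.increment(incr);

// Read the latest value with a READ_UNCOMMITTED Get (an Increment of amount zero works too).
Get get = new Get(Bytes.toBytes("row1"));
get.setIsolationLevel(IsolationLevel.READ_UNCOMMITTED);
Result r = table.get(get);
long current = Bytes.toLong(r.getValue(Bytes.toBytes("f"), Bytes.toBytes("c")));
{code}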

> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069731#comment-15069731
 ] 

Matteo Bertozzi commented on HBASE-15034:
-

Patch looks good, but maybe we should use the same "if (keepTableAtTheEnd)" check 
we use a couple of lines above for tables, and maybe rename it to something like 
keepObjectAtEnd. Looks like that flag is set if hbck reports inconsistencies.
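
A rough sketch of that suggestion (the flag, map, and admin names below are 
assumptions for illustration, not the actual test code):

{code}
// Hypothetical cleanup step mirroring the existing table cleanup a few lines above.
if (!keepObjectAtEnd) {
  for (NamespaceDescriptor ns : namespaceMap.values()) {
    LOG.info("Removing namespace " + ns.getName());
    admin.deleteNamespace(ns.getName());
  }
  namespaceMap.clear();
}
{code}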

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test that are not cleaned up when the test finishes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069811#comment-15069811
 ] 

Ted Yu commented on HBASE-6721:
---

{code}
hbase-server/target/generated-jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.java
   UnusedImportsCheck  20  21
hbase-server/target/generated-jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmplImpl.java
   IndentationCheck61  62
{code}
To my knowledge, HBASE-15015 (committed after the QA run) should have disabled 
the above check.


> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14938) Limit to and fro requests size from ZK in bulk loaded hfile replication

2015-12-23 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14938:
--
Attachment: HBASE-14938.patch

> Limit to and fro requests size from ZK in bulk loaded hfile replication
> ---
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.
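
A minimal sketch of the size-capping idea, assuming we simply batch payloads so 
no single ZK request exceeds ZooKeeper's ~1 MB data limit (constant and method 
names are assumptions, not the attached patch):

{code}
import java.util.ArrayList;
import java.util.List;

public class ZkBatchSketch {
  static final int ZK_MAX_REQUEST_SIZE = 1024 * 1024; // ZooKeeper's default data limit

  // Split payloads into batches whose combined size stays under the limit.
  static List<List<byte[]>> batchBySize(List<byte[]> payloads) {
    List<List<byte[]>> batches = new ArrayList<>();
    List<byte[]> current = new ArrayList<>();
    int currentSize = 0;
    for (byte[] p : payloads) {
      if (!current.isEmpty() && currentSize + p.length > ZK_MAX_REQUEST_SIZE) {
        batches.add(current);
        current = new ArrayList<>();
        currentSize = 0;
      }
      current.add(p);
      currentSize += p.length;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
{code}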



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14940) Make our unsafe based ops more safe

2015-12-23 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14940:
---
Attachment: HBASE-14940_branch-1.patch

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun.misc.Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability. But 
> we were missing the second check.
> 2. Java NIO does a chunk based copy while doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte by byte operations (read/copy). We can avoid this and 
> do it a better way.
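
A minimal sketch of the chunked copy from point 2 (the 1 MB threshold follows 
the Bits.java comment quoted above; this is not the actual ByteBufferUtils 
change):

{code}
// Copy in chunks of at most 1 MB so the JVM can reach a safepoint during a large copy.
private static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L;

static void chunkedCopy(sun.misc.Unsafe unsafe, long srcAddr, long destAddr, long length) {
  while (length > 0) {
    long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
    unsafe.copyMemory(srcAddr, destAddr, size);
    length -= size;
    srcAddr += size;
    destAddr += size;
  }
}
{code}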



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14938) Limit to and fro requests size from ZK in bulk loaded hfile replication

2015-12-23 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14938:
--
Fix Version/s: 1.3.0
   2.0.0
   Status: Patch Available  (was: Open)

> Limit to and fro requests size from ZK in bulk loaded hfile replication
> ---
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069910#comment-15069910
 ] 

Samir Ahmic commented on HBASE-15034:
-

Good idea, I will add that in the next patch.

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test that are not cleaned up when the test finishes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-23 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15032:
-
Attachment: HBASE-15032-v002.patch

Break the lines which have more than 100 characters.

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15021) hadoopqa doing false positives

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069870#comment-15069870
 ] 

Hudson commented on HBASE-15021:


FAILURE: Integrated in HBase-1.0 #1126 (See 
[https://builds.apache.org/job/HBase-1.0/1126/])
HBASE-15021 hadoopqa doing false positives (stack: rev 
f02a9fa6a02b0ea98c4d2a183c70016b678e34bd)
* dev-support/zombie-detector.sh
* dev-support/test-patch.sh


> hadoopqa doing false positives
> --
>
> Key: HBASE-15021
> URL: https://issues.apache.org/jira/browse/HBASE-15021
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 15021.patch, 15021.thrownpe.patch, 15021.thrownpe.patch, 
> 15021.thrownpe.patch, 15021.thrownpe.patch
>
>
> https://builds.apache.org/job/PreCommit-HBASE-Build/16930/consoleText says:
> {color:green}+1 core tests{color}.  The patch passed unit tests in .
> ...but here is what happened:
> {code}
> ...
> Results :
> Tests in error: 
> org.apache.hadoop.hbase.regionserver.TestRSStatusServlet.testBasic(org.apache.hadoop.hbase.regionserver.TestRSStatusServlet)
>   Run 1: TestRSStatusServlet.testBasic:105 � NullPointer
>   Run 2: TestRSStatusServlet.testBasic:105 � NullPointer
>   Run 3: TestRSStatusServlet.testBasic:105 � NullPointer
> org.apache.hadoop.hbase.regionserver.TestRSStatusServlet.testWithRegions(org.apache.hadoop.hbase.regionserver.TestRSStatusServlet)
>   Run 1: TestRSStatusServlet.testWithRegions:119 � NullPointer
>   Run 2: TestRSStatusServlet.testWithRegions:119 � NullPointer
>   Run 3: TestRSStatusServlet.testWithRegions:119 � NullPointer
> Tests run: 1033, Failures: 0, Errors: 2, Skipped: 21
> ...
> [INFO] Apache HBase - Server . FAILURE 
> [17:54.559s]
> ...
> {code}
> Why are we reporting a pass when it failed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Status: Open  (was: Patch Available)

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (Event handlers), and pass the Event to the new executor service that 
> we create for the compacted hfiles discharger. What should be the default 
> number of threads here? If an HRS holds 100s of regions, will 10 threads be 
> enough? This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.
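
A hedged sketch of the shared-pool idea from the description (the pool size and 
thread-name format are assumptions, not the attached patch):

{code}
// One region-server-wide pool shared by all regions' compacted-file dischargers
// (ThreadFactoryBuilder is Guava, already on the HBase classpath).
ExecutorService dischargerPool = Executors.newFixedThreadPool(10,
    new ThreadFactoryBuilder()
        .setNameFormat("CompactedHFilesDischarger-%d")
        .setDaemon(true)
        .build());
{code}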



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14717) Enable_table_replication should not create table in peer cluster if specified few tables added in peer

2015-12-23 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14717:
--
Attachment: HBASE-14717(2).patch

Retry again.
Please review.

> Enable_table_replication should not create table in peer cluster if specified 
> few tables added in peer
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-6721) RegionServer Group based Assignment

2015-12-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069872#comment-15069872
 ] 

Elliott Clark edited comment on HBASE-6721 at 12/23/15 5:03 PM:


From what I can see most of my comments still stand. It's still built into the 
default client. It's still in the main module. Though I do appreciate that 
it's a co-processor.

If we can move the co-processor to a different module and move the methods off 
the main admin classes I would be fine with it.


was (Author: eclark):
From what I can see most of my comments still stand. It's still built into the 
default client. It's still in the main module.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Status: Patch Available  (was: Open)

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (Event handlers), and pass the Event to the new executor service that 
> we create for the compacted hfiles discharger. What should be the default 
> number of threads here? If an HRS holds 100s of regions, will 10 threads be 
> enough? This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Attachment: HBASE-15027_3.patch

Updated patch correcting the failed test cases. Just noticed them.

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactedHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (Event handlers), and pass the Event to the new executor service that 
> we create for the compacted hfiles discharger. What should be the default 
> number of threads here? If an HRS holds 100s of regions, will 10 threads be 
> enough? This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15030) Deadlock in master TableNamespaceManager while running IntegrationTestDDLMasterFailover

2015-12-23 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15030:

   Resolution: Fixed
Fix Version/s: 1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Deadlock in master TableNamespaceManager while running 
> IntegrationTestDDLMasterFailover
> ---
>
> Key: HBASE-15030
> URL: https://issues.apache.org/jira/browse/HBASE-15030
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15030-v0.patch
>
>
> I was running IntegrationTestDDLMasterFailover on a distributed cluster when I 
> noticed this. Here is the relevant part of the master's jstack:
> {code}
> "ProcedureExecutor-1" daemon prio=10 tid=0x7fd2d407f800 nid=0x3332 
> waiting for monitor entry [0x7fd2c2834000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.releaseExclusiveLock(TableNamespaceManager.java:157)
> - waiting to lock <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:216)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:842)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> "ProcedureExecutor-3" daemon prio=10 tid=0x7fd2d41e5800 nid=0x3334 
> waiting on condition [0x7fd2c2632000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:929)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1245)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.acquireExclusiveLock(TableNamespaceManager.java:150)
> - locked <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:210)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecutor.java:941)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:821)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - None
> Found one Java-level deadlock:
> =
> "ProcedureExecutor-3":
>   waiting for ownable synchronizer 0x00072574b330, (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "ProcedureExecutor-1"
> "ProcedureExecutor-1":
>   waiting to lock monitor 0x7fd2cc328908 (object 0x000725c36a48, a 
> org.apache.hadoop.hbase.master.TableNamespaceManager),
>   which is held by "ProcedureExecutor-3"
> Java stack information for the threads listed above:
> ===
> "ProcedureExecutor-3":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>  

[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069839#comment-15069839
 ] 

Ted Yu commented on HBASE-6721:
---

Any more review comments?

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14938) Limit to and fro requests size from ZK in bulk loaded hfile replication

2015-12-23 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069908#comment-15069908
 ] 

Ashish Singhi commented on HBASE-14938:
---

Regarding Ted's approach, I checked with him offline; his understanding of 
this was different.
bq. The rationale is to control the amount of data to be replicated.
Ted, the concern is regarding the amount of bulk loaded data we replicate to a peer 
cluster in a single request. IMO that is a topic of discussion for another jira.

Attached a patch based on the approach I mentioned in my very first comment.
Please review.

> Limit to and fro requests size from ZK in bulk loaded hfile replication
> ---
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14511) StoreFile.Writer Meta Plugin

2015-12-23 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069938#comment-15069938
 ] 

Vladimir Rodionov commented on HBASE-14511:
---

[~enis], I would prefer not to put this into the coprocessor; the coprocessor API 
is already overcrowded. If I give you table, region, and table-level plugins, 
will it suffice? 

> StoreFile.Writer Meta Plugin
> 
>
> Key: HBASE-14511
> URL: https://issues.apache.org/jira/browse/HBASE-14511
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14511-v3.patch, HBASE-14511-v4.patch, 
> HBASE-14511.v1.patch, HBASE-14511.v2.patch
>
>
> During my work on new compaction policies (HBASE-14468, HBASE-14477) I had 
> to modify the existing code of StoreFile.Writer to add additional meta-info 
> required by these new policies. I think that it should be done by means of a 
> new Plugin framework, because this seems to be a general capability/feature. 
> As a future enhancement this can become a part of a more general 
> StoreFileWriter/Reader plugin architecture. But I need only the Meta section 
> of a store file.
> This could be used, for example, to collect rowkey distribution information 
> during hfile creation. This info can be used later to find the optimal region 
> split key or to create an optimal set of sub-regions for M/R jobs or other 
> jobs which can operate on a sub-region level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15034:

Affects Version/s: 1.3.0

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15035.patch
>
>
> I was running this test recently and notice that after every run there are 
> new namespaces created by test and not cleared when test is finished. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-23 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070017#comment-15070017
 ] 

huaxiang sun commented on HBASE-15032:
--

Adding a test result:

hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 'q1', >=, 'binary:\x82', true, true)"}
 r4    column=cf1:q1, timestamp=1450812398741, value=\x82
1 row(s) in 0.0170 seconds

hbase(main):004:0>
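
For reference, a standalone illustration of the underlying mismatch (the class below is 
made up for the example; Bytes.toBytesBinary/toStringBinary are the real utilities): the 
single byte 0x82 is not the same as the UTF-8 encoding of the character U+0082, so 
treating the filter argument as UTF-8 text changes the bytes being compared.

{code}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterEncodingDemo {
  public static void main(String[] args) {
    // "\x82" parsed as a binary escape yields the single byte 0x82 ...
    byte[] binary = Bytes.toBytesBinary("\\x82");
    // ... while encoding the character U+0082 as UTF-8 yields two bytes, 0xC2 0x82.
    byte[] utf8 = "\u0082".getBytes(StandardCharsets.UTF_8);
    System.out.println(Bytes.toStringBinary(binary)); // \x82
    System.out.println(Bytes.toStringBinary(utf8));   // \xC2\x82
  }
}
{code}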

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, which 
> makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069872#comment-15069872
 ] 

Elliott Clark commented on HBASE-6721:
--

From what I can see most of my comments still stand. It's still built into the 
default client. It's still in the main module.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.
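
As a rough sketch of the grouping idea only (class and method names here are made up for 
illustration and are not the API of the attached patches), the assignment-time effect is 
to narrow the candidate servers for a table's regions to the servers of its group:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class GroupAwareCandidates {
  /** table name -> group name */
  private final Map<String, String> tableToGroup;
  /** group name -> servers belonging to that group */
  private final Map<String, List<String>> groupToServers;

  public GroupAwareCandidates(Map<String, String> tableToGroup,
                              Map<String, List<String>> groupToServers) {
    this.tableToGroup = tableToGroup;
    this.groupToServers = groupToServers;
  }

  /** Online servers eligible to host a region of the given table (default group as fallback). */
  public List<String> candidateServers(String tableName, List<String> onlineServers) {
    String group = tableToGroup.containsKey(tableName) ? tableToGroup.get(tableName) : "default";
    List<String> groupServers = groupToServers.get(group);
    if (groupServers == null) {
      return onlineServers;
    }
    List<String> candidates = new ArrayList<>();
    for (String server : onlineServers) {
      if (groupServers.contains(server)) {
        candidates.add(server);
      }
    }
    return candidates;
  }
}
{code}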



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069987#comment-15069987
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.3 #463 (See 
[https://builds.apache.org/job/HBase-1.3/463/])
HBASE-15018 Inconsistent way of handling TimeoutException in the rpc (stack: 
rev 59cca6297f9fcecec6aaeecb760ae7f27b0d0e29)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it, and throw the 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>

[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070047#comment-15070047
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.2 #468 (See 
[https://builds.apache.org/job/HBase-1.2/468/])
HBASE-15018 Inconsistent way of handling TimeoutException in the rpc (stack: 
rev f9000d836d49192fe1305db420f103dcd2b33b76)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it, and throw the 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>

[jira] [Assigned] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh reassigned HBASE-15035:
--

Assignee: Jonathan Hsieh

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Attachment: HBASE-15035.patch

Attached a first patch that fixes the immediate problem so the bot can test it.

I need to do some cleanup/refactoring of the test code before commit.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Status: Patch Available  (was: Open)

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.1.0, 1.0.0, 0.98.0, 2.0.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070339#comment-15070339
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.1-JDK7 #1627 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1627/])
HBASE-15018 Inconsistent way of handling TimeoutException in the rpc (stack: 
rev f0206368615d1fa136edfb7c20cb90e8d52b6d02)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it, and throw the 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070488#comment-15070488
 ] 

Hudson commented on HBASE-15018:


SUCCESS: Integrated in HBase-1.2 #469 (See 
[https://builds.apache.org/job/HBase-1.2/469/])
Revert "HBASE-15018 Inconsistent way of handling TimeoutException in the 
(stack: rev 3ba99074083f1d3d3b189a9603c73aa4713a65d8)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it, and throw the 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Updated] (HBASE-14940) Make our unsafe based ops more safe

2015-12-23 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14940:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.0.4
   0.98.17
   1.1.3
   1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to 0.98+ versions. Thanks all for the reviews.

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun.misc.Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability. We 
> were missing the second check.
> 2. Java NIO does a chunk based copy when doing Unsafe copyMemory. The max 
> chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this and 
> do better.
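
A minimal sketch of the chunked copy in point 2 (the constant and method shape below are 
assumptions for illustration, not the committed change):

{code}
// Copy in steps of at most 1 MB so the JVM can reach a safepoint between chunks,
// mirroring the rationale quoted above from java.nio.Bits. UNSAFE is assumed to be
// a sun.misc.Unsafe instance already obtained reflectively.
private static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L; // 1 MB per chunk

static void chunkedCopy(Object src, long srcOffset, Object dst, long dstOffset, long length) {
  while (length > 0) {
    long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
    UNSAFE.copyMemory(src, srcOffset, dst, dstOffset, size);
    length -= size;
    srcOffset += size;
    dstOffset += size;
  }
}
{code}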



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070524#comment-15070524
 ] 

Hudson commented on HBASE-15018:


SUCCESS: Integrated in HBase-1.3-IT #402 (See 
[https://builds.apache.org/job/HBase-1.3-IT/402/])
Revert "HBASE-15018 Inconsistent way of handling TimeoutException in the 
(stack: rev fdeca854ec8aa842136fb4f93f81895113e5b70e)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it, and throw the 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070525#comment-15070525
 ] 

Hudson commented on HBASE-14940:


SUCCESS: Integrated in HBase-1.3-IT #402 (See 
[https://builds.apache.org/job/HBase-1.3-IT/402/])
HBASE-14940 Make our unsafe based ops more safe. (anoopsamjohn: rev 
4a7565af9cf8ef7e40ef3c592d6815d1b671fb5e)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java


> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun.misc.Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability. We 
> were missing the second check.
> 2. Java NIO does a chunk based copy when doing Unsafe copyMemory. The max 
> chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this and 
> do better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Status: Patch Available  (was: Open)

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch
>
>
> As per the suggestion given in HBASE-14970, to have a single thread pool 
> service for the CompactedHFileDischarger we need to create an executor 
> service at the RegionServer level, create discharger handler threads (event 
> handlers), and pass the event to the new executor service we create for the 
> compacted hfiles discharger. What should be the default number of threads 
> here? If an HRS holds 100s of regions, will 10 threads be enough? This issue 
> will try to resolve that with tests and discussion, and a suitable patch 
> will be updated in HBASE-14970 for branch-1 once this is committed.
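
A minimal sketch of what the RegionServer-level pool could look like (the configuration 
key and the default of 10 are assumptions for discussion, not settled values):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;

public class CompactedFilesDischargerPool {
  /** One shared pool per RegionServer; the discharger event handlers are submitted to it. */
  public static ExecutorService create(Configuration conf) {
    int threads = conf.getInt("hbase.hfile.compaction.discharger.thread.count", 10);
    return Executors.newFixedThreadPool(threads);
  }
}
{code}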



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070587#comment-15070587
 ] 

ramkrishna.s.vasudevan commented on HBASE-15035:


[~jmhsieh]
Good find.
One suggestion: instead of directly setting includeTags to true in 
LoadIncrementalHFiles, can we check
{code}
halfReader.getHFileReader().getFileContext().isIncludesTags()
{code}
and, if that halfReader has tags, include the tags in the new writer as well? 
Though it is not a problem, you could also add a coprocessor and check for the 
presence of tags on the server side itself after the incremental load, instead 
of changing the RPC codec. But that is also fine.
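
A rough sketch of that suggestion (not the attached patch; the surrounding variables such 
as halfReader, compression and blocksize are assumed from the split-copy code in 
LoadIncrementalHFiles):

{code}
// Propagate the tags flag from the source half-file reader into the context used
// for the new split hfile's writer, instead of hard-coding includeTags to true.
HFileContext context = new HFileContextBuilder()
    .withCompression(compression)
    .withBlockSize(blocksize)
    .withIncludesTags(halfReader.getHFileReader().getFileContext().isIncludesTags())
    .build();
{code}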

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15019) Replication stuck when HDFS is restarted

2015-12-23 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15019:

Attachment: (was: HBASE-15019-v0_branch-1.2.patch)

> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-15019-v0_branch-1.2.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because hdfs is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not 
> able to close is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side that 
> performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:508)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:321)
> {noformat}
> The only way to trigger a WAL recovery is to restart and force the master to 
> trigger the lease recovery on WAL split. 
> But there is a case where restarting will not help. If the RS keeps rolling 
> and flushing, the unclosed WAL will be moved to the archive, and at 
> that point the master will never try to do a lease recovery on it. 
> Since we know that the RS is still going, should we try to recover the 

[jira] [Commented] (HBASE-15030) Deadlock in master TableNamespaceManager while running IntegrationTestDDLMasterFailover

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070364#comment-15070364
 ] 

Hudson commented on HBASE-15030:


FAILURE: Integrated in HBase-Trunk_matrix #581 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/581/])
HBASE-15030 Deadlock in master TableNamespaceManager while running 
(matteo.bertozzi: rev 8e0854c64be553595b8ed44b9856a3d74ad3005f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java


> Deadlock in master TableNamespaceManager while running 
> IntegrationTestDDLMasterFailover
> ---
>
> Key: HBASE-15030
> URL: https://issues.apache.org/jira/browse/HBASE-15030
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15030-v0.patch
>
>
> I was running IntegrationTestDDLMasterFailover on a distributed cluster when I 
> noticed this. Here is the relevant part of the master's jstack:
> {code}
> "ProcedureExecutor-1" daemon prio=10 tid=0x7fd2d407f800 nid=0x3332 
> waiting for monitor entry [0x7fd2c2834000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.releaseExclusiveLock(TableNamespaceManager.java:157)
> - waiting to lock <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:216)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:842)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> "ProcedureExecutor-3" daemon prio=10 tid=0x7fd2d41e5800 nid=0x3334 
> waiting on condition [0x7fd2c2632000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:929)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1245)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.acquireExclusiveLock(TableNamespaceManager.java:150)
> - locked <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:210)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecutor.java:941)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:821)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - None
> Found one Java-level deadlock:
> =
> "ProcedureExecutor-3":
>   waiting for ownable synchronizer 0x00072574b330, (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "ProcedureExecutor-1"
> "ProcedureExecutor-1":
>   waiting to lock monitor 0x7fd2cc328908 (object 0x000725c36a48, a 
> org.apache.hadoop.hbase.master.TableNamespaceManager),
>   which is held by "ProcedureExecutor-3"
> Java stack information for the threads listed above:
> ===
> "ProcedureExecutor-3":
> at 

[jira] [Commented] (HBASE-14635) Reenable TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070542#comment-15070542
 ] 

Hadoop QA commented on HBASE-14635:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779161/HBASE-14635-master-v2.patch
  against master branch at commit 04de427e57d144caf5a9cde3664dac780ed763ab.
  ATTACHMENT ID: 12779161

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17005//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17005//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17005//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17005//console

This message is automatically generated.

> Reenable TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent
> --
>
> Key: HBASE-14635
> URL: https://issues.apache.org/jira/browse/HBASE-14635
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: stack
>Assignee: Appy
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14635-master-v2.patch, HBASE-14635-master.patch
>
>
> Was disabled in the parent issue because flakey. This issue is about 
> reenabling it after figuring why its flakey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070553#comment-15070553
 ] 

Ted Yu commented on HBASE-15032:


lgtm

Consider re-attaching for another QA run

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, which 
> makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070573#comment-15070573
 ] 

Hadoop QA commented on HBASE-15035:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779347/HBASE-15035-v2.patch
  against master branch at commit 04de427e57d144caf5a9cde3664dac780ed763ab.
  ATTACHMENT ID: 12779347

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint
  
org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17006//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17006//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17006//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17006//console

This message is automatically generated.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14987) Compaction marker whose region name doesn't match current region's needs to be handled

2015-12-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14987:
---
Attachment: 14987-v4.txt

Good catch, Enis.

Patch v4 addresses your comment.


> Compaction marker whose region name doesn't match current region's needs to 
> be handled
> --
>
> Key: HBASE-14987
> URL: https://issues.apache.org/jira/browse/HBASE-14987
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Stephen Yuan Jiang
> Attachments: 14987-suggest.txt, 14987-v1.txt, 14987-v2.txt, 
> 14987-v2.txt, 14987-v3.txt, 14987-v4.txt
>
>
> One customer encountered the following error when replaying recovered edits, 
> leading to region open failure:
> {code}
> region=table1,d6b-2282-9223370590058224807-U-9856557-
> EJ452727-16313786400171,1449616291799.fa8a526f2578eb3630bb08a4b1648f5d., 
> starting to roll back the global memstore   size.
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Compaction marker 
> from WAL table_name: "table1"
> encoded_region_name: "d389c70fde9ec07971d0cfd20ef8f575"
> ...
> region_name: 
> "table1,d6b-2282-9223370590058224807-U-9856557-EJ452727-16313786400171,1449089609367.d389c70fde9ec07971d0cfd20ef8f575."
>  targetted for region d389c70fde9ec07971d0cfd20ef8f575 does not match this 
> region: {ENCODED => fa8a526f2578eb3630bb08a4b1648f5d, NAME => 
> 'table1,d6b-2282-
> 9223370590058224807-U-9856557-EJ452727-16313786400171,1449616291799.fa8a526f2578eb3630bb08a4b1648f5d.',
>  STARTKEY => 'd6b-2282-9223370590058224807-U-9856557-EJ452727- 
> 16313786400171', ENDKEY => 
> 'd76-2553-9223370588576178807-U-7416904-EK875822-1766218060'}
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkTargetRegion(HRegion.java:4592)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayWALCompactionMarker(HRegion.java:3831)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:3747)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3601)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:911)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:789)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:762)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5774)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5744)
> {code}
> This was likely caused by the following action of hbck:
> {code}
> 15/12/08 18:11:34 INFO util.HBaseFsck: [hbasefsck-pool1-t37] Moving files 
> from 
> hdfs://Zealand/hbase/data/default/table1/d389c70fde9ec07971d0cfd20ef8f575/recovered.edits
>  into containing region 
> hdfs://Zealand/hbase/data/default/table1/fa8a526f2578eb3630bb08a4b1648f5d/recovered.edits
> {code}
> The recovered.edits for d389c70fde9ec07971d0cfd20ef8f575 contained a compaction 
> marker which couldn't be replayed against fa8a526f2578eb3630bb08a4b1648f5d
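
One possible handling, sketched below under the assumption that skipping a foreign marker during recovered-edits replay is acceptable; the method and variable names are mine and this is not the attached patch:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.util.Bytes;

public final class CompactionMarkerCheckSketch {
  private static final Log LOG = LogFactory.getLog(CompactionMarkerCheckSketch.class);

  /**
   * Returns true if the marker belongs to this region and should be replayed.
   * A marker carried over from another region (e.g. after hbck moved the
   * recovered.edits sideways) is logged and skipped instead of aborting the open.
   */
  static boolean shouldReplay(byte[] markerEncodedRegionName,
                              byte[] thisEncodedRegionName) {
    if (Bytes.equals(markerEncodedRegionName, thisEncodedRegionName)) {
      return true;
    }
    LOG.warn("Skipping compaction marker for region "
        + Bytes.toString(markerEncodedRegionName)
        + " found in recovered.edits of region "
        + Bytes.toString(thisEncodedRegionName));
    return false;
  }
}
{code}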



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070481#comment-15070481
 ] 

Hadoop QA commented on HBASE-15034:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779331/HBASE-15034-v1.patch
  against master branch at commit 8e0854c64be553595b8ed44b9856a3d74ad3005f.
  ATTACHMENT ID: 12779331

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestMaster
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
  org.apache.hadoop.hbase.security.access.TestAccessController2
  
org.apache.hadoop.hbase.master.procedure.TestDeleteNamespaceProcedure
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
  
org.apache.hadoop.hbase.master.procedure.TestModifyNamespaceProcedure
  
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
  
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken
  org.apache.hadoop.hbase.security.access.TestCellACLs
  
org.apache.hadoop.hbase.master.handler.TestTableDeleteFamilyHandler
  org.apache.hadoop.hbase.security.access.TestNamespaceCommands
  org.apache.hadoop.hbase.master.TestHMasterRPCException
  
org.apache.hadoop.hbase.master.procedure.TestCreateNamespaceProcedure

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17000//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17000//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17000//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17000//console

This message is automatically generated.

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15034-v1.patch, HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test and not cleaned up when the test finishes. 
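
A minimal sketch of the kind of cleanup step being discussed (the prefix-based selection and the method placement are my assumptions, not necessarily what HBASE-15034-v1.patch does): drop every namespace the test created, emptying it first, and leave the reserved ones alone.

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class NamespaceCleanup {
  /** Deletes every namespace whose name starts with the given test prefix. */
  static void cleanUpTestNamespaces(Admin admin, String testPrefix) throws IOException {
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      String name = ns.getName();
      if (!name.startsWith(testPrefix)) {
        continue;   // leave "default", "hbase" and anything else alone
      }
      // A namespace must be empty before deleteNamespace() will succeed.
      for (TableName table : admin.listTableNamesByNamespace(name)) {
        admin.disableTable(table);
        admin.deleteTable(table);
      }
      admin.deleteNamespace(name);
    }
  }
}
{code}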



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070480#comment-15070480
 ] 

Hadoop QA commented on HBASE-15032:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779333/HBASE-15032-v002.patch
  against master branch at commit 8e0854c64be553595b8ed44b9856a3d74ad3005f.
  ATTACHMENT ID: 12779333

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestMaster
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
  org.apache.hadoop.hbase.security.access.TestAccessController2
  
org.apache.hadoop.hbase.master.procedure.TestDeleteNamespaceProcedure
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
  
org.apache.hadoop.hbase.master.procedure.TestModifyNamespaceProcedure
  
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
  
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken
  org.apache.hadoop.hbase.security.access.TestCellACLs
  
org.apache.hadoop.hbase.master.handler.TestTableDeleteFamilyHandler
  org.apache.hadoop.hbase.security.access.TestNamespaceCommands
  org.apache.hadoop.hbase.master.TestHMasterRPCException
  
org.apache.hadoop.hbase.master.procedure.TestCreateNamespaceProcedure

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17001//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17001//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17001//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17001//console

This message is automatically generated.

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, which 
> makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   

[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070501#comment-15070501
 ] 

Anoop Sam John commented on HBASE-14940:


bq.org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken
The test failure is not related to this patch.

Will commit to 0.98+ branches.

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun unsafe based reads and writes only if the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk based copy when doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in Bits.java. 
> We are going to do it the same way (see the sketch below).
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are off 
> heap, we were doing byte by byte operations (read/copy). We can avoid this and 
> do better.
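
A minimal sketch of the chunked-copy idea from item 2 above, for the off-heap to off-heap case. The class name and the reflective Unsafe lookup are mine; the actual change lives in UnsafeAccess/ByteBufferUtils and may differ in detail.

{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public final class ChunkedUnsafeCopy {
  // Same 1 MB limit as java.nio.Bits: copy in bounded chunks so the JVM
  // can reach a safepoint between chunks of a very large copy.
  private static final long UNSAFE_COPY_THRESHOLD = 1L << 20;

  private static final Unsafe UNSAFE;
  static {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      UNSAFE = (Unsafe) f.get(null);
    } catch (Exception e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  /** Copies {@code length} bytes between two off-heap addresses in 1 MB chunks. */
  static void copyMemory(long srcAddr, long dstAddr, long length) {
    while (length > 0) {
      long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
      UNSAFE.copyMemory(srcAddr, dstAddr, size);
      srcAddr += size;
      dstAddr += size;
      length -= size;
    }
  }
}
{code}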



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070514#comment-15070514
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.3 #465 (See 
[https://builds.apache.org/job/HBase-1.3/465/])
Revert "HBASE-15018 Inconsistent way of handling TimeoutException in the 
(stack: rev fdeca854ec8aa842136fb4f93f81895113e5b70e)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; there we don't wrap it, we throw the 
> CallTimeoutException as is:
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Updated] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14938:
---
Summary: Limit the number of znodes for ZK in bulk loaded hfile replication 
 (was: Limit to and fro requests size from ZK in bulk loaded hfile replication)

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070583#comment-15070583
 ] 

Matteo Bertozzi commented on HBASE-15035:
-

TestLoadIncrementalHFilesUseSecurityEndPoint and 
TestSecureLoadIncrementalHFiles have their own setUpBeforeClass, so the rpc 
codec configuration gets lost; you need to apply it to those 2 classes as well, 
and then it should be ok. +1 after that.
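
For reference, a hedged sketch of what applying it to a subclass could look like. The conf key, the codec class and the startup call reflect my reading of the parent test's setup, so treat them as assumptions rather than the actual test change:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.codec.KeyValueCodecWithTags;
import org.junit.BeforeClass;

public class TestSecureLoadIncrementalHFilesSketch {
  private static final HBaseTestingUtility util = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    Configuration conf = util.getConfiguration();
    // Without a tag-aware codec the cell tags are dropped on the RPC hop,
    // so the end-to-end bulk load test cannot observe them.
    conf.set(HConstants.RPC_CODEC_CONF_KEY,
        KeyValueCodecWithTags.class.getCanonicalName());
    // ... security/endpoint setup as in the existing setUpBeforeClass ...
    util.startMiniCluster();
  }
}
{code}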

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15019) Replication stuck when HDFS is restarted

2015-12-23 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15019:

Attachment: HBASE-15019-v0_branch-1.2.patch

> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-15019-v0_branch-1.2.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because hdfs is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not 
> able to close 
> is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side that 
> performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is not expected. Will retry
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1547065147-10.51.30.152-1446756937665:blk_1073801614_61243; 
> getBlockSize()=83; corrupt=false; offset=0; locs=[10.51.30.154:50010, 
> 10.51.30.152:50010, 10.51.30.155:50010]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:358)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:300)
>   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1448)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:301)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:297)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:297)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:116)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:508)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:321)
> {noformat}
> The only way to trigger a WAL recovery is to restart and force the master to 
> trigger the lease recovery on WAL split. 
> But there is a case where restarting will not help: if the RS keeps rolling 
> and flushing, the unclosed WAL will be moved into the archive, and at 
> that point the master will never try to do a lease recovery on it. 
> Since we know that the RS is still going, should we try to recover the lease 
> on 

[jira] [Updated] (HBASE-14635) Reenable TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent

2015-12-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14635:
-
Status: Patch Available  (was: Open)

> Reenable TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent
> --
>
> Key: HBASE-14635
> URL: https://issues.apache.org/jira/browse/HBASE-14635
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: stack
>Assignee: Appy
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14635-master-v2.patch, HBASE-14635-master.patch
>
>
> Was disabled in the parent issue because it was flakey. This issue is about 
> re-enabling it after figuring out why it is flakey.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14279) Race condition in ConcurrentIndex

2015-12-23 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070375#comment-15070375
 ] 

Hiroshi Ikeda commented on HBASE-14279:
---

I worry about whether the hash calculation is sufficiently orthogonal to 
HashMap.hash(), so that objects remain well-distributed among the entries of each 
internal map after they have been distributed across the internal maps. The new 
calculation seems to come from JDK 1.4.

As for the variant of single-word Wang/Jenkins hash, in 
http://gee.cs.oswego.edu/dl/concurrency-interest/index.html
{quote}
Sources for all classes originated by the JSR166 group are released to the 
public domain, as described at 
http://creativecommons.org/licenses/publicdomain. This includes all code in 
java.util.concurrent and its subpackages (except CopyOnWriteArrayList), ...
{quote}

and Doug Lea introduced the hash calculation code at revision 1.93 in his 
repository:
http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/ConcurrentHashMap.java?revision=1.93

> Race condition in ConcurrentIndex
> -
>
> Key: HBASE-14279
> URL: https://issues.apache.org/jira/browse/HBASE-14279
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-14279.patch, HBASE-14279_v2.patch, 
> HBASE-14279_v3.patch, HBASE-14279_v4.patch, HBASE-14279_v5.patch, 
> HBASE-14279_v5.patch, HBASE-14279_v6.patch, HBASE-14279_v7.1.patch, 
> HBASE-14279_v7.patch, LockStripedBag.java
>
>
> {{ConcurrentIndex.put}} and {{remove}} are in a race condition. It is possible 
> to remove a non-empty set, and to add a value to a removed set. Also 
> {{ConcurrentIndex.values}} is vague in the sense that the returned set sometimes 
> traces the current state and sometimes doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070386#comment-15070386
 ] 

Ted Yu commented on HBASE-15035:


lgtm, if tests pass

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-23 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated HBASE-14822:
-
Attachment: HBASE-14822_98_nextseq.diff

[~lhofhansl] - I tried out the latest on 0.98, and it looks like there are some 
more issues lurking with lease renewal. I noticed that on the region server 
side I was still getting the following message even though I made sure Phoenix 
was calling renewLease() for the scanners.

INFO  [RS:0;localhost:55383.leaseChecker] 
org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener(2633): 
Scanner 59 lease expired on region 

After a bit of digging around, it turns out that the lease renewal is actually 
causing the regular scan() to fail and vice versa. This is because renewLease 
ends up also incrementing the nextCallSeq member variable in the ScannerCallable 
object. There are checks in place in the HRegionServer class that cause an 
OutOfOrderScannerNextException to be thrown because the nextCallSeq doesn't match. 

See this stacktrace:

org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
nextCallSeq: 2 But the nextCallSeq got from client: 10; request=scanner_id: 56 
number_of_rows: 2 close_scanner: false next_call_seq: 10 renew: false
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3277)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31190)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:298)
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:216)
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:58)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:115)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:91)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:387)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:340)
at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:57)
at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:112)
at 

In this case the number of times renewLease() was called was 8, which also 
happens to be the difference between the expected nextCallSeq (2) and the 
actual nextCallSeq (10). This error isn't surfaced to the clients, though, because 
the HBase client ends up creating a new scanner altogether behind the scenes. 

One possible simple fix (in the attached patch) would be to not increment the 
nextCallSeq when renewing the lease. FWIW, after this change, I no longer see the 
OutOfOrderScannerNextException, and the INFO message about scanner lease expiration 
is also gone.
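
To make the sequencing rule concrete, here is an illustrative toy sketch, not the real ScannerCallable or the attached diff: only a genuine next() call advances the scanner's call sequence, a pure lease renewal does not.

{code}
/** Toy model of client-side scanner call sequencing -- not HBase code. */
class ScannerCallSketch {
  /** The sequence number the server expects on the next scan request. */
  private long nextCallSeq = 0;

  /** Minimal stand-in for the protobuf ScanRequest. */
  static final class Request {
    final long callSeq;
    final boolean renewOnly;
    Request(long callSeq, boolean renewOnly) {
      this.callSeq = callSeq;
      this.renewOnly = renewOnly;
    }
  }

  Request nextRequest(boolean renewOnly) {
    Request req = new Request(nextCallSeq, renewOnly);
    if (!renewOnly) {
      // Only a real next() advances the sequence; bumping it on a lease
      // renewal makes the following next() arrive with a sequence the
      // server does not expect (OutOfOrderScannerNextException).
      nextCallSeq++;
    }
    return req;
  }
}
{code}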

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15019) Replication stuck when HDFS is restarted

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070509#comment-15070509
 ] 

Hadoop QA commented on HBASE-15019:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779343/HBASE-15019-v0_branch-1.2.patch
  against branch-1.2 branch at commit 04de427e57d144caf5a9cde3664dac780ed763ab.
  ATTACHMENT ID: 12779343

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17004//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17004//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17004//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17004//console

This message is automatically generated.

> Replication stuck when HDFS is restarted
> 
>
> Key: HBASE-15019
> URL: https://issues.apache.org/jira/browse/HBASE-15019
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, wal
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Attachments: HBASE-15019-v0_branch-1.2.patch
>
>
> The RS is working normally and writing to the WAL.
> HDFS is killed and restarted, and the RS tries to do a roll.
> The close fails, but the roll succeeds (because hdfs is now up) and everything 
> works.
> {noformat}
> 2015-12-11 21:52:28,058 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Got IOException 
> while writing trailer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Failed close of HLog writer
> java.io.IOException: All datanodes 10.51.30.152:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1147)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:945)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:496)
> 2015-12-11 21:52:28,059 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> Riding over HLog close failure! error count=1
> {noformat}
> The problem is on the replication side: the log we rolled but were not 
> able to close 
> is waiting for a lease recovery.
> {noformat}
> 2015-12-11 21:16:31,909 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Can't open after 267 
> attempts and 301124ms 
> {noformat}
> The WALFactory notifies us about that, but there is nothing on the RS side that 
> performs the WAL recovery.
> {noformat}
> 2015-12-11 21:11:30,921 WARN 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory: Lease should have 
> recovered. This is 

[jira] [Updated] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-23 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14684:
--
Attachment: HBASE-14684-branch-1.2_v1.patch

Retry. The failed testcase was due to HBASE-15018, which has been reverted.

> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.2_v1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1_v1.patch, 
> HBASE-14684-branch-1_v2.patch, HBASE-14684-branch-1_v3.patch, 
> HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to do MR jobs without 
> MiniMapReduceCluster.
> Testcases will run faster and be more reliable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070554#comment-15070554
 ] 

Hudson commented on HBASE-15018:


SUCCESS: Integrated in HBase-1.2-IT #362 (See 
[https://builds.apache.org/job/HBase-1.2-IT/362/])
Revert "HBASE-15018 Inconsistent way of handling TimeoutException in (stack: 
rev 3ba99074083f1d3d3b189a9603c73aa4713a65d8)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; there we don't wrap it, we throw the 
> CallTimeoutException as is:
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070555#comment-15070555
 ] 

Hudson commented on HBASE-14940:


SUCCESS: Integrated in HBase-1.2-IT #362 (See 
[https://builds.apache.org/job/HBase-1.2-IT/362/])
HBASE-14940 Make our unsafe based ops more safe. (anoopsamjohn: rev 
9b459cddb791832873cfc9f84e6dcfc0484be617)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java


> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun unsafe based reads and writes only if the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk based copy when doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in Bits.java. 
> We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are off 
> heap, we were doing byte by byte operations (read/copy). We can avoid this and 
> do better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Attachment: HBASE-15035-v2.patch

v2 cleans up the patch. With HBASE-15018 reverted, this should hopefully pass.

I made includeTags explicitly false instead of implicitly false in 
HFileContextBuilder, and explicitly set it to true when we create the new hfiles 
from the HalfStoreFiles in LoadIncrementalHFiles.
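
For readers following along, the gist of the writer-side change looks roughly like this. It is a sketch only: the method, variable names and the copied-over properties are assumptions, and the withIncludesTags(true) call is the only point being illustrated.

{code}
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public final class SplitHFileContextSketch {
  /** Builds the HFileContext used when rewriting one half of a bulk-load hfile. */
  static HFileContext contextForSplitHalf(Compression.Algorithm compression,
                                          int blockSize) {
    return new HFileContextBuilder()
        .withCompression(compression)   // carried over from the source hfile
        .withBlockSize(blockSize)       // likewise
        .withIncludesTags(true)         // previously left at the implicit false
                                        // default, which dropped acl/ttl/mob tags
        .build();
  }
}
{code}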






> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070482#comment-15070482
 ] 

Hadoop QA commented on HBASE-14684:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779337/HBASE-14684-branch-1.2_v1.patch
  against branch-1.2 branch at commit 8e0854c64be553595b8ed44b9856a3d74ad3005f.
  ATTACHMENT ID: 12779337

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 100 
new or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17003//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17003//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17003//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17003//console

This message is automatically generated.

> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1_v1.patch, HBASE-14684-branch-1_v2.patch, 
> HBASE-14684-branch-1_v3.patch, HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to do MR jobs without 
> MiniMapReduceCluster.
> Testcases will run faster and be more reliable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14279) Race condition in ConcurrentIndex

2015-12-23 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070496#comment-15070496
 ] 

Hiroshi Ikeda commented on HBASE-14279:
---

I misunderstood for some reason and the same thing can be said for the variant 
of single-word Wang/Jenkins hash. That's deep-rooted :(

> Race condition in ConcurrentIndex
> -
>
> Key: HBASE-14279
> URL: https://issues.apache.org/jira/browse/HBASE-14279
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-14279.patch, HBASE-14279_v2.patch, 
> HBASE-14279_v3.patch, HBASE-14279_v4.patch, HBASE-14279_v5.patch, 
> HBASE-14279_v5.patch, HBASE-14279_v6.patch, HBASE-14279_v7.1.patch, 
> HBASE-14279_v7.patch, LockStripedBag.java
>
>
> {{ConcurrentIndex.put}} and {{remove}} are in a race condition. It is possible 
> to remove a non-empty set, and to add a value to a removed set. Also 
> {{ConcurrentIndex.values}} is vague in the sense that the returned set sometimes 
> traces the current state and sometimes doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15038) ExportSnapshot should support separate configurations for source and destination clusters

2015-12-23 Thread Gary Helmling (JIRA)
Gary Helmling created HBASE-15038:
-

 Summary: ExportSnapshot should support separate configurations for 
source and destination clusters
 Key: HBASE-15038
 URL: https://issues.apache.org/jira/browse/HBASE-15038
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce, snapshots
Reporter: Gary Helmling
Assignee: Gary Helmling


Currently ExportSnapshot uses a single Configuration instance for both the 
source and destination FileSystem instances. It should allow overriding 
properties for each filesystem connection separately.
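
One way to picture the improvement, as a sketch of the idea only and not ExportSnapshot's actual code (the override map and method name are assumptions): copy the shared Configuration and apply side-specific overrides before opening each FileSystem.

{code}
import java.io.IOException;
import java.net.URI;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public final class PerSideFileSystems {
  /** Opens a FileSystem for one side (source or destination) with its own overrides. */
  static FileSystem open(URI uri, Configuration shared,
                         Map<String, String> overrides) throws IOException {
    Configuration conf = new Configuration(shared);   // copy; do not mutate the shared conf
    for (Map.Entry<String, String> e : overrides.entrySet()) {
      conf.set(e.getKey(), e.getValue());
    }
    // newInstance() bypasses the FileSystem cache, which is keyed by scheme,
    // authority and user rather than by the Configuration, so the overrides
    // actually take effect for this connection.
    return FileSystem.newInstance(uri, conf);
  }
}
{code}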



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14279) Race condition in ConcurrentIndex

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070502#comment-15070502
 ] 

Hadoop QA commented on HBASE-14279:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779334/HBASE-14279_v7.1.patch
  against master branch at commit 8e0854c64be553595b8ed44b9856a3d74ad3005f.
  ATTACHMENT ID: 12779334

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorEndpoint
  org.apache.hadoop.hbase.master.TestMaster
  
org.apache.hadoop.hbase.master.procedure.TestDeleteNamespaceProcedure
  org.apache.hadoop.hbase.util.TestHBaseFsckReplicas
  
org.apache.hadoop.hbase.master.procedure.TestCreateNamespaceProcedure
  org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.master.TestHMasterRPCException
  org.apache.hadoop.hbase.client.TestMultiParallel
  
org.apache.hadoop.hbase.master.procedure.TestModifyNamespaceProcedure
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.client.TestSnapshotFromClient
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.util.TestHBaseFsckTwoRS
  
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
  org.apache.hadoop.hbase.client.TestAdmin1
  
org.apache.hadoop.hbase.master.handler.TestTableDeleteFamilyHandler
  org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestLeaseRenewal
  
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
  
org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile
  org.apache.hadoop.hbase.quotas.TestQuotaThrottle

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17002//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17002//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17002//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17002//console

This message is automatically generated.

> Race condition in ConcurrentIndex
> -
>
> Key: HBASE-14279
> URL: https://issues.apache.org/jira/browse/HBASE-14279
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-14279.patch, HBASE-14279_v2.patch, 
> HBASE-14279_v3.patch, HBASE-14279_v4.patch, HBASE-14279_v5.patch, 
> HBASE-14279_v5.patch, HBASE-14279_v6.patch, HBASE-14279_v7.1.patch, 
> HBASE-14279_v7.patch, LockStripedBag.java
>
>
> 

[jira] [Updated] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15031:
--
Release Note: 
Increments can be 10x slower (or more) when there is high concurrency since 
HBase 1.0.0 (HBASE-8763).

This 'fix' adds back a fast increment but speed is achieved by relaxing 
row-level consistency for Increments (only). The default remains the old, slow, 
consistent Increment behavior.

Set  "hbase.increment.fast.but.narrow.consistency" to true in hbase-site.xml to 
enable 'fast' increments and then rolling restart your cluster. This is a 
setting the server-side needs to read.

Intermixing fast increment with other Mutations will give indeterminate 
results; e.g. a Put and Increment against the same Cell will not always give 
you the result you expect. Fast Increments are consistent unto themselves. A 
Get with {@link IsolationLevel#READ_UNCOMMITTED} will return the latest 
increment value or an Increment of an amount zero will do the same (beware 
doing Get on a cell that has not been incremented yet -- this will return no 
results).

The difference between fastAndNarrowConsistencyIncrement and 
slowButConsistentIncrement is that the former holds the row lock until the WAL 
sync completes; this allows us to reason that there are no other writers afoot 
when we read the current increment value. In this case we do not need to wait 
on mvcc reads to catch up to writes before we proceed with the read of the 
current Increment value, the root of the slowdown seen in HBASE-14460. The 
fast-path also does not wait on mvcc to complete before returning to the client 
(but the write has been synced and put into memstore before we return). 

Also adds a simple performance test tool that will run against existing 
cluster: 

{code}
$ ./bin/hbase org.apache.hadoop.hbase.IncrementPerformanceTest
{code}

Configure it by passing -D options. Here is the set used in the run logged below:

2015-12-23 19:33:36,941 INFO  [main] hbase.IncrementPerformanceTest: Running 
test with hbase.zookeeper.quorum=localhost, tableName=tableName, 
columnFamilyName=[B@610ac287, threadCount=80, incrementCount=1

... so to set the tableName pass -DtableName=SOME_TABLENAME

  was:
Increments can be 10x slower (or more) when there is high concurrency since 
HBase 1.0.0 (HBASE-8763). This feature adds back a fast increment but speed is 
achieved by relaxing row-level consistency for Increments (only). The default 
remains the old, slow, consistent Increment behavior.

Set  "hbase.increment.fast.but.narrow.consistency" to true in hbase-site.xml to 
enable 'fast' increments and then rolling restart your cluster. This is a 
setting the server-side needs to read.

Intermixing fast increment with other Mutations will give indeterminate 
results; e.g. a Put and Increment against the same Cell will not always give 
you the result you expect. Fast Increments are consistent unto themselves. A 
Get with {@link IsolationLevel#READ_UNCOMMITTED} will return the latest 
increment value or an Increment of an amount zero will do the same (beware 
doing Get on a cell that has not been incremented yet -- this will return no 
results).

The difference between fastAndNarrowConsistencyIncrement and 
slowButConsistentIncrement is that the former holds the row lock until the WAL 
sync completes; this allows us to reason that there are no other writers afoot 
when we read the current increment value. In this case we do not need to wait 
on mvcc reads to catch up to writes before we proceed with the read of the 
current Increment value, the root of the slowdown seen in HBASE-14460. The 
fast-path also does not wait on mvcc to complete before returning to the client 
(but the write has been synced and put into memstore before we return). 
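For illustration, a minimal client-side sketch of the behavior described in the
release note above, assuming a hypothetical table 't' with family 'f' and that
the server-side property has already been enabled; this is not part of the patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class FastIncrementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t"))) {
      byte[] row = Bytes.toBytes("row1");
      byte[] f = Bytes.toBytes("f");
      byte[] q = Bytes.toBytes("q");

      // Apply an increment; with the fast path enabled the server returns once
      // the WAL sync completes, without waiting on the MVCC read point.
      Increment inc = new Increment(row);
      inc.addColumn(f, q, 1L);
      table.increment(inc);

      // Read back the latest value; READ_UNCOMMITTED sees the newest increment.
      Get get = new Get(row);
      get.setIsolationLevel(IsolationLevel.READ_UNCOMMITTED);
      Result r = table.get(get);
      System.out.println("current=" + Bytes.toLong(r.getValue(f, q)));
    }
  }
}
{code}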


> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Status: Open  (was: Patch Available)

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactionHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (Event handlers), and pass the Event to the new executor service that 
> we create for the compaction hfiles discharger. What should be the default 
> number of threads here?  If an HRS holds 100s of regions, will 10 threads be 
> enough?  This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.
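As a rough sketch of the shape being discussed (not the attached patch), a
region server could own one shared pool and have each region submit its
discharger work to it; the class and method names below are made up:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Hypothetical RS-level owner of a single pool for compacted-file discharge work. */
public class DischargerPoolSketch {
  // One pool per region server; 10 threads is just the default debated above.
  private final ExecutorService pool = Executors.newFixedThreadPool(10);

  /** Each region hands its cleanup work to the shared pool instead of its own thread. */
  public void submitDischarge(final String regionName, final Runnable dischargeTask) {
    pool.execute(new Runnable() {
      @Override
      public void run() {
        // Stand-in for closing/archiving the region's compacted store files.
        System.out.println("discharging compacted files for " + regionName);
        dischargeTask.run();
      }
    });
  }

  public void shutdown() throws InterruptedException {
    pool.shutdown();
    pool.awaitTermination(30, TimeUnit.SECONDS);
  }
}
{code}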



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15027:
---
Attachment: HBASE-15027_3.patch

Resubmitting for QA. Not sure why so many tests failed.

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch
>
>
> As per the suggestion given over in HBASE-14970, if we need to create a single 
> thread pool service for the CompactionHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (Event handlers), and pass the Event to the new executor service that 
> we create for the compaction hfiles discharger. What should be the default 
> number of threads here?  If an HRS holds 100s of regions, will 10 threads be 
> enough?  This issue will try to resolve this with tests and discussions, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14987) Compaction marker whose region name doesn't match current region's needs to be handled

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070588#comment-15070588
 ] 

Hadoop QA commented on HBASE-14987:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779351/14987-v4.txt
  against master branch at commit 04de427e57d144caf5a9cde3664dac780ed763ab.
  ATTACHMENT ID: 12779351

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17007//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17007//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17007//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17007//console

This message is automatically generated.

> Compaction marker whose region name doesn't match current region's needs to 
> be handled
> --
>
> Key: HBASE-14987
> URL: https://issues.apache.org/jira/browse/HBASE-14987
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Stephen Yuan Jiang
> Attachments: 14987-suggest.txt, 14987-v1.txt, 14987-v2.txt, 
> 14987-v2.txt, 14987-v3.txt, 14987-v4.txt
>
>
> One customer encountered the following error when replaying recovered edits, 
> leading to region open failure:
> {code}
> region=table1,d6b-2282-9223370590058224807-U-9856557-
> EJ452727-16313786400171,1449616291799.fa8a526f2578eb3630bb08a4b1648f5d., 
> starting to roll back the global memstore   size.
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Compaction marker 
> from WAL table_name: "table1"
> encoded_region_name: "d389c70fde9ec07971d0cfd20ef8f575"
> ...
> region_name: 
> "table1,d6b-2282-9223370590058224807-U-9856557-EJ452727-16313786400171,1449089609367.d389c70fde9ec07971d0cfd20ef8f575."
>  targetted for region d389c70fde9ec07971d0cfd20ef8f575 does not match this 
> region: {ENCODED => fa8a526f2578eb3630bb08a4b1648f5d, NAME => 
> 'table1,d6b-2282-
> 9223370590058224807-U-9856557-EJ452727-16313786400171,1449616291799.fa8a526f2578eb3630bb08a4b1648f5d.',
>  STARTKEY => 'd6b-2282-9223370590058224807-U-9856557-EJ452727- 
> 16313786400171', ENDKEY => 
> 'd76-2553-9223370588576178807-U-7416904-EK875822-1766218060'}
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkTargetRegion(HRegion.java:4592)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayWALCompactionMarker(HRegion.java:3831)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:3747)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3601)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:911)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:789)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:762)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5774)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5744)
> {code}
> This was likely caused by the following action of hbck:
> {code}
> 15/12/08 18:11:34 INFO util.HBaseFsck: [hbasefsck-pool1-t37] Moving files 
> from 
> 

[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14355:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Adds being able to Scan each column family with a different 
time range. Adds new methods setColumnFamilyTimeRange and 
getColumnFamilyTimeRange to Scan.
  Status: Resolved  (was: Patch Available)

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
>  Labels: needs_releasenote
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-addendum.patch, HBASE-14355-v1.patch, 
> HBASE-14355-v10.patch, HBASE-14355-v11.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 
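Usage of the new API, per the release note above, looks roughly like this; the
table name, families and timestamps are hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PerFamilyTimeRangeScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t"))) {
      Scan scan = new Scan();
      // The table-level time range still applies to families without their own range.
      scan.setTimeRange(0L, Long.MAX_VALUE);
      long now = System.currentTimeMillis();
      // Family 'recent' restricted to the last day; 'archive' to an older window.
      scan.setColumnFamilyTimeRange(Bytes.toBytes("recent"), now - 86400000L, now);
      scan.setColumnFamilyTimeRange(Bytes.toBytes("archive"), 0L, now - 86400000L);
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(r);
        }
      }
    }
  }
}
{code}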



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15037) CopyTable and VerifyReplication - Option to specify batch size, versions

2015-12-23 Thread Ramana Uppala (JIRA)
Ramana Uppala created HBASE-15037:
-

 Summary: CopyTable and VerifyReplication - Option to specify batch 
size, versions
 Key: HBASE-15037
 URL: https://issues.apache.org/jira/browse/HBASE-15037
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Affects Versions: 0.98.16.1
Reporter: Ramana Uppala
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15030) Deadlock in master TableNamespaceManager while running IntegrationTestDDLMasterFailover

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070300#comment-15070300
 ] 

Hudson commented on HBASE-15030:


FAILURE: Integrated in HBase-1.3-IT #401 (See 
[https://builds.apache.org/job/HBase-1.3-IT/401/])
HBASE-15030 Deadlock in master TableNamespaceManager while running 
(matteo.bertozzi: rev d65210d2138a59b91aef6443b6b26435a27a587a)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java


> Deadlock in master TableNamespaceManager while running 
> IntegrationTestDDLMasterFailover
> ---
>
> Key: HBASE-15030
> URL: https://issues.apache.org/jira/browse/HBASE-15030
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15030-v0.patch
>
>
> I was running IntegrationTestDDLMasterFailover on a distributed cluster when I 
> noticed this. Here is the relevant part of the master's jstack:
> {code}
> "ProcedureExecutor-1" daemon prio=10 tid=0x7fd2d407f800 nid=0x3332 
> waiting for monitor entry [0x7fd2c2834000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.releaseExclusiveLock(TableNamespaceManager.java:157)
> - waiting to lock <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:216)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:842)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> "ProcedureExecutor-3" daemon prio=10 tid=0x7fd2d41e5800 nid=0x3334 
> waiting on condition [0x7fd2c2632000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:929)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1245)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.acquireExclusiveLock(TableNamespaceManager.java:150)
> - locked <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:210)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecutor.java:941)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:821)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - None
> Found one Java-level deadlock:
> =
> "ProcedureExecutor-3":
>   waiting for ownable synchronizer 0x00072574b330, (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "ProcedureExecutor-1"
> "ProcedureExecutor-1":
>   waiting to lock monitor 0x7fd2cc328908 (object 0x000725c36a48, a 
> org.apache.hadoop.hbase.master.TableNamespaceManager),
>   which is held by "ProcedureExecutor-3"
> Java stack information for the threads listed above:
> ===
> "ProcedureExecutor-3":
> at sun.misc.Unsafe.park(Native Method)
> 

[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070301#comment-15070301
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.3-IT #401 (See 
[https://builds.apache.org/job/HBase-1.3-IT/401/])
HBASE-15018 Inconsistent way of handling TimeoutException in the rpc (stack: 
rev 59cca6297f9fcecec6aaeecb760ae7f27b0d0e29)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it and we throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070303#comment-15070303
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.1-JDK8 #1714 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1714/])
HBASE-15018 Inconsistent way of handling TimeoutException in the rpc (stack: 
rev f0206368615d1fa136edfb7c20cb90e8d52b6d02)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it and we throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Updated] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15018:
--
Attachment: HBASE-15018.patch

Rerunning patch... to see if it still shows up as broken now that the false 
positives have been fixed upstream.

> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it and we throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.
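As an illustration of the consistency being asked for (a sketch only, not the
attached patch), both client implementations could funnel a timeout through the
same wrapping step before rethrowing; the helper below is hypothetical:

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.hbase.ipc.CallTimeoutException;

/** Hypothetical helper showing one uniform way to surface call timeouts. */
public final class TimeoutWrappingSketch {
  private TimeoutWrappingSketch() {}

  /** Wrap a local exception, including CallTimeoutException, the same way everywhere. */
  public static IOException wrapException(InetSocketAddress addr, Exception exception) {
    if (exception instanceof CallTimeoutException) {
      // Timeouts are wrapped, not rethrown raw, so both clients behave alike;
      // the original stays attached as the cause so callers can still detect it.
      return new IOException("Call to " + addr + " failed on local exception: "
          + exception, exception);
    }
    return new IOException("Call to " + addr + " failed: " + exception, exception);
  }
}
{code}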



--
This message was 

[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070319#comment-15070319
 ] 

stack commented on HBASE-15018:
---

Reverted from all branches... branch-1.1+

> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it and we throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070099#comment-15070099
 ] 

Hadoop QA commented on HBASE-14940:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779274/HBASE-14940_branch-1.patch
  against branch-1 branch at commit e00a04df10de70b029a2d1f115f97f9d79a05c6a.
  ATTACHMENT ID: 12779274

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16994//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16994//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16994//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16994//console

This message is automatically generated.

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun.misc.Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability. But 
> we were missing the second check.
> 2. Java NIO does a chunk-based copy when doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in Bits.java. 
> We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this and 
> do it a better way.
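For point 2, a minimal sketch (not the attached patch) of the chunked copy idea
borrowed from java.nio.Bits, with the 1 MB bound allowing safepoint polling
during large copies; the class and method names are made up:

{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class ChunkedCopySketch {
  // 1 MB chunks, mirroring java.nio.Bits: bounded copies allow safepoint polling.
  private static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L;
  private static final Unsafe UNSAFE = loadUnsafe();

  private static Unsafe loadUnsafe() {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException("Unsafe unavailable", e);
    }
  }

  /** Copy length bytes between (possibly off-heap) locations in bounded chunks. */
  static void copyMemoryChunked(Object src, long srcOffset,
                                Object dst, long dstOffset, long length) {
    while (length > 0) {
      long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
      UNSAFE.copyMemory(src, srcOffset, dst, dstOffset, size);
      length -= size;
      srcOffset += size;
      dstOffset += size;
    }
  }
}
{code}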



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Summary: bulkloading hfiles with tags that require splits do not preserve 
tags  (was: bulkloading hfiles with tags that require splits does not preserve 
tags)

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Priority: Blocker
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase, the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized, 
> which defaults to false).  This means acls, ttls, mob pointers and other 
> tag-stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40
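For reference, a minimal sketch (not the fix itself) of building the split
writer's HFileContext so it carries tags, i.e. explicitly setting the flag the
description says currently defaults to false; the helper name is hypothetical:

{code}
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public class SplitHFileContextSketch {
  /** Build a context for the split-half writer that keeps tags from the source file. */
  static HFileContext contextForSplit(Compression.Algorithm compression, int blockSize) {
    return new HFileContextBuilder()
        .withCompression(compression)
        .withBlockSize(blockSize)
        .withIncludesTags(true) // the flag the description says defaults to false
        .build();
  }
}
{code}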



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Affects Version/s: 1.3.0
   1.2.0
   0.98.0

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Priority: Blocker
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase, the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized, 
> which defaults to false).  This means acls, ttls, mob pointers and other 
> tag-stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15037) CopyTable and VerifyReplication - Option to specify batch size, versions

2015-12-23 Thread Ramana Uppala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramana Uppala updated HBASE-15037:
--
Description: Need option to specify batch size for CopyTable and 
VerifyReplication. 

> CopyTable and VerifyReplication - Option to specify batch size, versions
> 
>
> Key: HBASE-15037
> URL: https://issues.apache.org/jira/browse/HBASE-15037
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 0.98.16.1
>Reporter: Ramana Uppala
>Priority: Minor
>
> Need option to specify batch size for CopyTable and VerifyReplication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15037) CopyTable and VerifyReplication - Option to specify batch size, versions

2015-12-23 Thread Ramana Uppala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramana Uppala updated HBASE-15037:
--
Description: Need option to specify batch size for CopyTable and 
VerifyReplication.  We are working on patch for this.  (was: Need option to 
specify batch size for CopyTable and VerifyReplication. )

> CopyTable and VerifyReplication - Option to specify batch size, versions
> 
>
> Key: HBASE-15037
> URL: https://issues.apache.org/jira/browse/HBASE-15037
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 0.98.16.1
>Reporter: Ramana Uppala
>Priority: Minor
>
> Need option to specify batch size for CopyTable and VerifyReplication.  We 
> are working on patch for this.
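For illustration of what such options would drive (a hypothetical sketch, with
made-up option handling, not the in-progress patch), the Scan built by
CopyTable/VerifyReplication could simply be configured with the requested batch
size and number of versions:

{code}
import org.apache.hadoop.hbase.client.Scan;

public class CopyTableScanSketch {
  /**
   * Apply hypothetical --batch and --versions options to the Scan that
   * CopyTable/VerifyReplication build; non-positive values mean "not specified".
   */
  static Scan applyOptions(Scan scan, int batch, int versions) {
    if (batch > 0) {
      scan.setBatch(batch);          // limit cells returned per Result
    }
    if (versions > 0) {
      scan.setMaxVersions(versions); // copy/verify more than the latest version
    }
    return scan;
  }
}
{code}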



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070274#comment-15070274
 ] 

Matteo Bertozzi commented on HBASE-15034:
-

+1

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15034-v1.patch, HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test and not cleared when the test is finished. 
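For context, cleanup along these lines (a sketch only, not the attached patch,
and assuming the test namespaces share a recognizable prefix, which is a made-up
assumption here) would drop the leftover namespaces after a run:

{code}
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class NamespaceCleanupSketch {
  /** Delete empty namespaces left behind by the test, matched by a hypothetical prefix. */
  static void cleanupTestNamespaces(Admin admin, String prefix) throws Exception {
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      String name = ns.getName();
      if (name.startsWith(prefix)
          && admin.listTableNamesByNamespace(name).length == 0) {
        admin.deleteNamespace(name); // only empty namespaces can be deleted
      }
    }
  }
}
{code}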



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14279) Race condition in ConcurrentIndex

2015-12-23 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14279:
--
Attachment: HBASE-14279_v7.1.patch

Fixed checkstyle errors in v7.1.
Any suggestions?

> Race condition in ConcurrentIndex
> -
>
> Key: HBASE-14279
> URL: https://issues.apache.org/jira/browse/HBASE-14279
> Project: HBase
>  Issue Type: Bug
>Reporter: Hiroshi Ikeda
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-14279.patch, HBASE-14279_v2.patch, 
> HBASE-14279_v3.patch, HBASE-14279_v4.patch, HBASE-14279_v5.patch, 
> HBASE-14279_v5.patch, HBASE-14279_v6.patch, HBASE-14279_v7.1.patch, 
> HBASE-14279_v7.patch, LockStripedBag.java
>
>
> {{ConcurrentIndex.put}} and {{remove}} are in a race condition. It is possible 
> to remove a non-empty set, and to add a value to a removed set. Also, 
> {{ConcurrentIndex.values}} is vague in the sense that the returned set sometimes 
> traces the current state and sometimes doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-15018:
---

I pushed this but it seems to cause the following failures... 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/468/

Backing it out for now.

> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it and we throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both the implementations.
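
One way to get the same behavior from both implementations would be to route every 
failure through a shared wrapping helper. The sketch below is illustrative only; the 
helper name and behavior are assumptions, not the committed patch.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;

// Shared by both the blocking and the async call paths so that callers
// (e.g. replication) always see an IOException bound to the remote address,
// with the original CallTimeoutException kept as the cause.
final class RpcExceptionUtil {
  private RpcExceptionUtil() {}

  static IOException wrapException(InetSocketAddress addr, Exception e) {
    if (e instanceof IOException) {
      return new IOException("Call to " + addr + " failed on local exception: " + e, e);
    }
    return new IOException("Call to " + addr + " failed: " + e, e);
  }
}
{code}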



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HBASE-14717) Enable_table_replication should not create table in peer cluster if specified few tables added in peer

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070128#comment-15070128
 ] 

Hadoop QA commented on HBASE-14717:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779268/HBASE-14717%282%29.patch
  against master branch at commit e00a04df10de70b029a2d1f115f97f9d79a05c6a.
  ATTACHMENT ID: 12779268

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestMaster
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
  org.apache.hadoop.hbase.security.access.TestAccessController2
  
org.apache.hadoop.hbase.master.procedure.TestDeleteNamespaceProcedure
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
  
org.apache.hadoop.hbase.master.procedure.TestModifyNamespaceProcedure
  
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
  
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken
  org.apache.hadoop.hbase.security.access.TestCellACLs
  
org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile
  
org.apache.hadoop.hbase.master.handler.TestTableDeleteFamilyHandler
  org.apache.hadoop.hbase.security.access.TestNamespaceCommands
  org.apache.hadoop.hbase.master.TestHMasterRPCException
  
org.apache.hadoop.hbase.master.procedure.TestCreateNamespaceProcedure

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16993//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16993//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16993//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16993//console

This message is automatically generated.

> Enable_table_replication should not create table in peer cluster if specified 
> few tables added in peer
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717.patch
>
>
> For a peer, only the user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1
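
A sketch of the kind of check enable_table_replication could make before creating a 
table on a peer; the helper below is hypothetical and assumes a table-cfs map where 
null means "replicate all tables":

{code}
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;

public class PeerTableFilter {
  // Returns true only if the peer is configured to replicate this table, so
  // 't3' would be skipped for a peer configured as "t1:cf1, t2".
  static boolean shouldCreateOnPeer(TableName table,
      Map<TableName, List<String>> peerTableCfs) {
    return peerTableCfs == null || peerTableCfs.containsKey(table);
  }
}
{code}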



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14938) Limit to and fro requests size from ZK in bulk loaded hfile replication

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070181#comment-15070181
 ] 

Hadoop QA commented on HBASE-14938:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779273/HBASE-14938.patch
  against master branch at commit e00a04df10de70b029a2d1f115f97f9d79a05c6a.
  ATTACHMENT ID: 12779273

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  
org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile
  org.apache.hadoop.hbase.client.TestMultiParallel
  
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
  
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
  
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorEndpoint
  org.apache.hadoop.hbase.client.TestLeaseRenewal
  org.apache.hadoop.hbase.client.TestSnapshotFromClient
  
org.apache.hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestScannersFromClientSide
  org.apache.hadoop.hbase.client.TestAdmin1
  
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
  
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestMetaWithReplicas
  org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16995//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16995//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16995//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16995//console

This message is automatically generated.

> Limit to and fro requests size from ZK in bulk loaded hfile replication
> ---
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.
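
One illustrative way to stay under the limit (the actual approach is what the 
comments discuss) is to split the hfile references into size-bounded batches before 
talking to ZK, for example:

{code}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ZkBatching {
  // Leave headroom below ZooKeeper's ~1 MB default data limit per request.
  static final int MAX_BATCH_BYTES = 900 * 1024;

  static List<List<String>> batchBySize(List<String> hfileRefs) {
    List<List<String>> batches = new ArrayList<>();
    List<String> current = new ArrayList<>();
    int currentBytes = 0;
    for (String ref : hfileRefs) {
      int size = ref.getBytes(StandardCharsets.UTF_8).length;
      if (!current.isEmpty() && currentBytes + size > MAX_BATCH_BYTES) {
        batches.add(current);
        current = new ArrayList<>();
        currentBytes = 0;
      }
      current.add(ref);
      currentBytes += size;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
{code}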



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14355) Scan different TimeRange for each column family

2015-12-23 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14355:
-
Fix Version/s: 1.2.0

> Scan different TimeRange for each column family
> ---
>
> Key: HBASE-14355
> URL: https://issues.apache.org/jira/browse/HBASE-14355
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, regionserver, Scanners
>Reporter: Dave Latham
>Assignee: churro morales
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14355-addendum.patch, HBASE-14355-v1.patch, 
> HBASE-14355-v10.patch, HBASE-14355-v11.patch, HBASE-14355-v2.patch, 
> HBASE-14355-v3.patch, HBASE-14355-v4.patch, HBASE-14355-v5.patch, 
> HBASE-14355-v6.patch, HBASE-14355-v7.patch, HBASE-14355-v8.patch, 
> HBASE-14355-v9.patch, HBASE-14355.branch-1.patch, HBASE-14355.patch
>
>
> At present the Scan API supports only table level time range. We have 
> specific use cases that will benefit from per column family time range. (See 
> background discussion at 
> https://mail-archives.apache.org/mod_mbox/hbase-user/201508.mbox/%3ccaa4mzom00ef5eoxstk0hetxeby8mqss61gbvgttgpaspmhq...@mail.gmail.com%3E)
> There are a couple of choices that would be good to validate.  First - how to 
> update the Scan API to support family and table level updates.  One proposal 
> would be to add Scan.setTimeRange(byte[] family, long minTime, long maxTime), 
> then store it in a Map of family to TimeRange.  When executing the scan, if a 
> family has a specified TimeRange, then use it, otherwise fall back to using 
> the table level TimeRange.  Clients using the new API against old region 
> servers would not get the families correctly filtered.  Old clients sending 
> scans to new region servers would work correctly.
> The other question is how to get StoreFileScanner.shouldUseScanner to match 
> up the proper family and time range.  It has the Scan available but doesn't 
> currently have available which family it is a part of.  One option would be 
> to try to pass down the column family in each constructor path.  Another 
> would be to instead alter shouldUseScanner to pass down the specific 
> TimeRange to use (similar to how it currently passes down the columns to use 
> which also appears to be a workaround for not having the family available). 
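
A sketch of the fallback logic described above, assuming the per-family ranges are 
kept in a map keyed by column family (illustrative only, not the committed 
implementation):

{code}
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;
import org.apache.hadoop.hbase.io.TimeRange;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyTimeRanges {
  private final Map<byte[], TimeRange> perFamily =
      new TreeMap<>(Bytes.BYTES_COMPARATOR);
  private final TimeRange tableLevel;

  public FamilyTimeRanges(TimeRange tableLevel) {
    this.tableLevel = tableLevel;
  }

  // The proposed Scan.setTimeRange(byte[] family, long min, long max) overload
  // would end up populating a map like this one.
  public void setTimeRange(byte[] family, long min, long max) throws IOException {
    perFamily.put(family, new TimeRange(min, max));
  }

  // If a family has a specified TimeRange use it, otherwise fall back to the
  // table-level TimeRange.
  public TimeRange timeRangeFor(byte[] family) {
    TimeRange tr = perFamily.get(family);
    return tr != null ? tr : tableLevel;
  }
}
{code}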



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-23 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14684:
--
Attachment: HBASE-14684-branch-1.2_v1.patch

patch for branch-1.2

> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1_v1.patch, HBASE-14684-branch-1_v2.patch, 
> HBASE-14684-branch-1_v3.patch, HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to run MR jobs without 
> MiniMapReduceCluster.
> Testcases will run faster and be more reliable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070314#comment-15070314
 ] 

Hadoop QA commented on HBASE-15035:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779307/HBASE-15035.patch
  against master branch at commit 8e0854c64be553595b8ed44b9856a3d74ad3005f.
  ATTACHMENT ID: 12779307

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorEndpoint
  org.apache.hadoop.hbase.master.TestMaster
  
org.apache.hadoop.hbase.master.procedure.TestDeleteNamespaceProcedure
  org.apache.hadoop.hbase.util.TestHBaseFsckReplicas
  
org.apache.hadoop.hbase.master.procedure.TestCreateNamespaceProcedure
  org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.master.TestHMasterRPCException
  org.apache.hadoop.hbase.client.TestMultiParallel
  
org.apache.hadoop.hbase.master.procedure.TestModifyNamespaceProcedure
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  org.apache.hadoop.hbase.client.TestSnapshotFromClient
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.util.TestHBaseFsckTwoRS
  
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
  org.apache.hadoop.hbase.client.TestAdmin1
  
org.apache.hadoop.hbase.master.handler.TestTableDeleteFamilyHandler
  org.apache.hadoop.hbase.regionserver.TestMobStoreScanner
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
  
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestLeaseRenewal
  
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
  
org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile
  org.apache.hadoop.hbase.quotas.TestQuotaThrottle

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16999//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16999//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16999//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16999//console

This message is automatically generated.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>

[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070110#comment-15070110
 ] 

Hudson commented on HBASE-15018:


FAILURE: Integrated in HBase-1.2-IT #361 (See 
[https://builds.apache.org/job/HBase-1.2-IT/361/])
HBASE-15018 Inconsistent way of handling TimeoutException in the rpc (stack: 
rev f9000d836d49192fe1305db420f103dcd2b33b76)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient, where we don't wrap it and instead 
> throw the CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 

[jira] [Commented] (HBASE-15005) Use value array in computing block length for 1.2 and 1.3

2015-12-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070130#comment-15070130
 ] 

stack commented on HBASE-15005:
---

Why was this change made? Having trouble understanding. Thanks.

> Use value array in computing block length for 1.2 and 1.3
> -
>
> Key: HBASE-15005
> URL: https://issues.apache.org/jira/browse/HBASE-15005
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-15005.patch
>
>
> Follow on to HBASE-14978



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15036) Update HBase Spark documentation to include bulk load with thin records

2015-12-23 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-15036:

Attachment: HBASE-15036.patch

First Draft

> Update HBase Spark documentation to include bulk load with thin records
> ---
>
> Key: HBASE-15036
> URL: https://issues.apache.org/jira/browse/HBASE-15036
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ted Malaska
>Assignee: Ted Malaska
> Attachments: HBASE-15036.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13639) SyncTable - rsync for HBase tables

2015-12-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13639:
--
  Labels: tooling  (was: needs_releasenote)
Release Note: 
Tool to sync two tables that tries to send only the differences, like rsync.

Adds two new MapReduce jobs, SyncTable and HashTable. See the usage output of these 
jobs for how to run them. See the design doc for a general overview: 
https://docs.google.com/document/d/1-2c9kJEWNrXf5V4q_wBcoIXfdchN7Pxvxv1IO6PW0-U/edit

From comments below, "It can be challenging to run against a table getting live 
writes, if those writes are updates/overwrites. In general, you can run it against 
a time range to ignore new writes, but if those writes update existing cells, then 
the time range scan may or may not see older versions of those cells depending on 
whether major compaction has happened, which may be different in remote clusters."

> SyncTable - rsync for HBase tables
> --
>
> Key: HBASE-13639
> URL: https://issues.apache.org/jira/browse/HBASE-13639
> Project: HBase
>  Issue Type: New Feature
>  Components: mapreduce, Operability, tooling
>Reporter: Dave Latham
>Assignee: Dave Latham
>  Labels: tooling
> Fix For: 2.0.0, 0.98.14, 1.2.0
>
> Attachments: HBASE-13639-0.98-addendum-hadoop-1.patch, 
> HBASE-13639-0.98.patch, HBASE-13639-v1.patch, HBASE-13639-v2.patch, 
> HBASE-13639-v3-0.98.patch, HBASE-13639-v3.patch, HBASE-13639.patch
>
>
> Given HBase tables in remote clusters with similar but not identical data, 
> efficiently update a target table such that the data in question is identical 
> to a source table.  Efficiency in this context means using far less network 
> traffic than would be required to ship all the data from one cluster to the 
> other.  Takes inspiration from rsync.
> Design doc: 
> https://docs.google.com/document/d/1-2c9kJEWNrXf5V4q_wBcoIXfdchN7Pxvxv1IO6PW0-U/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15030) Deadlock in master TableNamespaceManager while running IntegrationTestDDLMasterFailover

2015-12-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070307#comment-15070307
 ] 

Hudson commented on HBASE-15030:


FAILURE: Integrated in HBase-1.3 #464 (See 
[https://builds.apache.org/job/HBase-1.3/464/])
HBASE-15030 Deadlock in master TableNamespaceManager while running 
(matteo.bertozzi: rev d65210d2138a59b91aef6443b6b26435a27a587a)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java


> Deadlock in master TableNamespaceManager while running 
> IntegrationTestDDLMasterFailover
> ---
>
> Key: HBASE-15030
> URL: https://issues.apache.org/jira/browse/HBASE-15030
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15030-v0.patch
>
>
> I was running IntegrationTestDDLMasterFailover on a distributed cluster when I 
> noticed this. Here is the relevant part of the master's jstack:
> {code}
> "ProcedureExecutor-1" daemon prio=10 tid=0x7fd2d407f800 nid=0x3332 
> waiting for monitor entry [0x7fd2c2834000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.releaseExclusiveLock(TableNamespaceManager.java:157)
> - waiting to lock <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:216)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.releaseLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:842)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> "ProcedureExecutor-3" daemon prio=10 tid=0x7fd2d41e5800 nid=0x3334 
> waiting on condition [0x7fd2c2632000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x00072574b330> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:929)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1245)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115)
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.acquireExclusiveLock(TableNamespaceManager.java:150)
> - locked <0x000725c36a48> (a 
> org.apache.hadoop.hbase.master.TableNamespaceManager)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:210)
> at 
> org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.acquireLock(CreateNamespaceProcedure.java:43)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecutor.java:941)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:821)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:794)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
>Locked ownable synchronizers:
> - None
> Found one Java-level deadlock:
> =
> "ProcedureExecutor-3":
>   waiting for ownable synchronizer 0x00072574b330, (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "ProcedureExecutor-1"
> "ProcedureExecutor-1":
>   waiting to lock monitor 0x7fd2cc328908 (object 0x000725c36a48, a 
> org.apache.hadoop.hbase.master.TableNamespaceManager),
>   which is held by "ProcedureExecutor-3"
> Java stack information for the threads listed above:
> ===
> "ProcedureExecutor-3":
> at sun.misc.Unsafe.park(Native Method)
>   

[jira] [Commented] (HBASE-14796) Enhance the Gets in the connector

2015-12-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070326#comment-15070326
 ] 

Hadoop QA commented on HBASE-14796:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779304/HBASE-14976.patch
  against master branch at commit 8e0854c64be553595b8ed44b9856a3d74ad3005f.
  ATTACHMENT ID: 12779304

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestMaster
  org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
  
org.apache.hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.util.TestHBaseFsckTwoRS
  org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.client.TestScannersFromClientSide
  
org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile
  org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
  org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
  
org.apache.hadoop.hbase.master.procedure.TestModifyNamespaceProcedure
  org.apache.hadoop.hbase.client.TestSnapshotFromClient
  org.apache.hadoop.hbase.client.TestHCM
  org.apache.hadoop.hbase.master.TestHMasterRPCException
  org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries
  
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
  
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorEndpoint
  org.apache.hadoop.hbase.client.TestMetaWithReplicas
  
org.apache.hadoop.hbase.master.procedure.TestDeleteNamespaceProcedure
  org.apache.hadoop.hbase.client.TestLeaseRenewal
  org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
  
org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
  
org.apache.hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient
  org.apache.hadoop.hbase.client.TestAdmin1
  
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
  
org.apache.hadoop.hbase.master.procedure.TestCreateNamespaceProcedure
  org.apache.hadoop.hbase.util.TestHBaseFsckReplicas
  
org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient
  
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
  
org.apache.hadoop.hbase.master.handler.TestTableDeleteFamilyHandler
  org.apache.hadoop.hbase.master.TestMasterFailover

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16998//testReport/
Release Findbugs (version 2.0.3) warnings: 

[jira] [Updated] (HBASE-14796) Enhance the Gets in the connector

2015-12-23 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-14796:
---
Attachment: HBASE-14976.patch

We have a use case where a bulk get may consist of thousands of gets. Moving BulkGet 
from the driver to the executor side will improve failure recovery, and potentially 
improve performance as well when the number of gets is big.
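
A rough sketch of the grouping idea in plain HBase client Java (not the connector's 
Scala code); the helper and its parameters are illustrative only, with the batch 
size playing the role of spark.hbase.bulkGetSize from the release note:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class BatchedBulkGet {
  // Runs on the executor side: group the row keys into batches and issue one
  // multi-get per batch instead of funnelling every Get through the driver.
  static List<Result> bulkGet(Connection conn, TableName table,
      List<byte[]> rowKeys, int bulkGetSize) throws IOException {
    List<Result> out = new ArrayList<>(rowKeys.size());
    try (Table t = conn.getTable(table)) {
      List<Get> batch = new ArrayList<>(bulkGetSize);
      for (byte[] row : rowKeys) {
        batch.add(new Get(row));
        if (batch.size() == bulkGetSize) {
          Collections.addAll(out, t.get(batch));
          batch.clear();
        }
      }
      if (!batch.isEmpty()) {
        Collections.addAll(out, t.get(batch));
      }
    }
    return out;
  }
}
{code}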

> Enhance the Gets in the connector
> -
>
> Key: HBASE-14796
> URL: https://issues.apache.org/jira/browse/HBASE-14796
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: HBASE-14976.patch
>
>
> Currently the Spark-Module Spark SQL implementation gets records from HBase 
> from the driver if there is something like the following found in the SQL.
> rowkey = 123
> The original reason for this was that normal SQL will not have many equals 
> operations in a single where clause.
> Zhan had brought up two points that have value.
> 1. The SQL may be generated and may have many equal statements in it, so 
> moving the work to an executor protects the driver from load
> 2. In the current implementation the driver is connecting to HBase and 
> exceptions may cause trouble with the Spark application and not just with 
> a single task execution



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14796) Enhance the Gets in the connector

2015-12-23 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-14796:
---
Release Note: spark.hbase.bulkGetSize in HBaseSparkConf is used for grouping 
bulk gets, and the default value is 1000.
  Status: Patch Available  (was: Open)

> Enhance the Gets in the connector
> -
>
> Key: HBASE-14796
> URL: https://issues.apache.org/jira/browse/HBASE-14796
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: HBASE-14976.patch
>
>
> Currently the Spark-Module Spark SQL implementation gets records from HBase 
> from the driver if there is something like the following found in the SQL.
> rowkey = 123
> The original reason for this was that normal SQL will not have many equals 
> operations in a single where clause.
> Zhan had brought up two points that have value.
> 1. The SQL may be generated and may have many equal statements in it, so 
> moving the work to an executor protects the driver from load
> 2. In the current implementation the driver is connecting to HBase and 
> exceptions may cause trouble with the Spark application and not just with 
> a single task execution



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15036) Update HBase Spark documentation to include bulk load with thin records

2015-12-23 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HBASE-15036:

Attachment: HBASE-15036.1.patch

Removed extra spaces

> Update HBase Spark documentation to include bulk load with thin records
> ---
>
> Key: HBASE-15036
> URL: https://issues.apache.org/jira/browse/HBASE-15036
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ted Malaska
>Assignee: Ted Malaska
> Attachments: HBASE-15036.1.patch, HBASE-15036.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

