[jira] [Updated] (HBASE-15525) OutOfMemory could occur when using BoundedByteBufferPool during RPC bursts

2016-05-31 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15525:
---
Attachment: HBASE-15525_V4.patch

> OutOfMemory could occur when using BoundedByteBufferPool during RPC bursts
> --
>
> Key: HBASE-15525
> URL: https://issues.apache.org/jira/browse/HBASE-15525
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15525_V1.patch, HBASE-15525_V2.patch, 
> HBASE-15525_V3.patch, HBASE-15525_V4.patch, HBASE-15525_WIP.patch, WIP.patch
>
>
> After HBASE-13819, the system sometimes runs out of direct memory whenever 
> there is network congestion or a client-side issue.
> This is because of pending RPCs in the RPCServer$Connection.responseQueue: 
> since all the responses in this queue hold a cellblock buffer from the 
> BoundedByteBufferPool, the queue can take up a lot of memory if the 
> BoundedByteBufferPool's moving average settles towards a higher value.
> See the discussion here: 
> [HBASE-13819-comment|https://issues.apache.org/jira/browse/HBASE-13819?focusedCommentId=15207822&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207822]
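The failure mode above can be sketched with a toy pool, stdlib only. This is a hypothetical simplification, not HBase's actual BoundedByteBufferPool: buffers are handed out at the pool's running-average size, so once the average settles high, every response parked in a connection's responseQueue pins a large buffer until it is finally written out.

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of an average-sized buffer pool (names illustrative).
public class PoolSketch {
    private final Queue<ByteBuffer> free = new ConcurrentLinkedQueue<>();
    private long totalReturned = 0;
    private long returns = 0;

    public synchronized ByteBuffer getBuffer() {
        ByteBuffer b = free.poll();
        // New buffers are allocated at the running-average size.
        return b != null ? b : ByteBuffer.allocate(runningAvg());
    }

    public synchronized void putBuffer(ByteBuffer b) {
        totalReturned += b.capacity();
        returns++;
        free.offer(b);
    }

    public synchronized int runningAvg() {
        return returns == 0 ? 1024 : (int) (totalReturned / returns);
    }

    // Memory pinned when n responses sit unsent in response queues:
    // each one holds a buffer of roughly the running-average size.
    public long pinnedBytes(int n) {
        return (long) n * runningAvg();
    }
}
```

With a 1 MB average, a burst of 1000 queued responses pins roughly 1 GB, which matches the OOM described in the issue.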



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15721) Optimization in cloning cells into MSLAB

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309295#comment-15309295
 ] 

stack commented on HBASE-15721:
---

Can we use the CellCodecs writing to MSLAB? Like we do in RPC?

> Optimization in cloning cells into MSLAB
> 
>
> Key: HBASE-15721
> URL: https://issues.apache.org/jira/browse/HBASE-15721
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15721.patch, HBASE-15721_V2.patch
>
>
> Before cells are added to the memstore CSLM, the cell is cloned while being 
> copied into the MSLAB chunk area.  This is not done in an efficient way.
> {code}
> public static int appendToByteArray(final Cell cell, final byte[] output, 
> final int offset) {
> int pos = offset;
> pos = Bytes.putInt(output, pos, keyLength(cell));
> pos = Bytes.putInt(output, pos, cell.getValueLength());
> pos = appendKeyTo(cell, output, pos);
> pos = CellUtil.copyValueTo(cell, output, pos);
> if ((cell.getTagsLength() > 0)) {
>   pos = Bytes.putAsShort(output, pos, cell.getTagsLength());
>   pos = CellUtil.copyTagTo(cell, output, pos);
> }
> return pos;
>   }
> {code}
> The copy happens in 9 steps and we end up parsing all the lengths.  When the 
> cell implementation is backed by a single byte[] (like KeyValue) this can be 
> done in a single-step copy.
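A minimal illustration of the single-copy idea, assuming the cell's key, value, and tags already sit contiguously in one backing array (as in KeyValue). The names here are illustrative, not HBase's actual API:

```java
// When the whole serialized cell lives in one backing byte[], the clone
// into the MSLAB chunk is one arraycopy instead of 9 field-by-field puts.
public class CopySketch {
    public static int copyContiguous(byte[] backing, int off, int len,
                                     byte[] output, int pos) {
        System.arraycopy(backing, off, output, pos, len);
        return pos + len; // same return convention as appendToByteArray
    }
}
```

No lengths need to be re-parsed; the caller only needs the cell's offset and serialized length.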





[jira] [Updated] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15775:
--
Status: Patch Available  (was: Open)

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.19, 0.98.16.1, 0.98.18, 0.98.17, 0.98.16, 1.2.1, 
> 0.98.15, 1.2.0, 0.98.14, 0.98.13
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
> Attachments: HBASE-15775.0.98.00.patch
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Created] (HBASE-15930) Make IntegrationTestReplication's waitForReplication() smarter

2016-05-31 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-15930:
---

 Summary: Make IntegrationTestReplication's waitForReplication() 
smarter
 Key: HBASE-15930
 URL: https://issues.apache.org/jira/browse/HBASE-15930
 Project: HBase
  Issue Type: Improvement
  Components: integration tests
Reporter: Dima Spivak
Assignee: Dima Spivak


{{IntegrationTestReplication}} is a great test, but it can be improved by 
changing how we handle the wait between generating the linked list on the 
source cluster and verifying the linked list on the destination cluster. [Even 
the code suggests this should be 
done|https://github.com/apache/hbase/blob/master/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestReplication.java#L251-252],
 so I'd like to take it on. [~mbertozzi] and [~busbey] have both suggested a 
simple solution wherein we write a row into each region on the source cluster 
after the linked list generation and then assume replication has gone through 
once these rows are detected on the destination cluster.

Since you lads at Facebook are some of the heaviest users, [~eclark], would you 
prefer I maintain the API and add a new command line option (say {{\-c | 
\-\-check-replication}}) that would run before any {{--generateVerifyGap}} 
sleep is carried out as it is now?
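The suggested check boils down to a poll-until-visible wait. A self-contained sketch of that shape (names and timeouts hypothetical; in the real patch the supplier would check that every sentinel row is readable on the destination cluster):

```java
import java.util.function.BooleanSupplier;

// Generic replacement for a fixed sleep: poll a condition until it holds
// or a deadline passes.
public class ReplicationWait {
    public static boolean waitUntil(BooleanSupplier allSentinelsVisible,
                                    long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (allSentinelsVisible.getAsBoolean()) {
                return true; // replication caught up; no need to keep sleeping
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return allSentinelsVisible.getAsBoolean(); // one last check at timeout
    }
}
```

Compared to a fixed {{--generateVerifyGap}} sleep, this returns as soon as the sentinels appear and only hits the timeout when replication is genuinely behind.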





[jira] [Updated] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-15775:
--
Attachment: HBASE-15775.0.98.00.patch

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
> Attachments: HBASE-15775.0.98.00.patch
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Updated] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-15775:
--
Attachment: (was: HBASE-15775.0.98.00)

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Commented] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread Vishal Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309260#comment-15309260
 ] 

Vishal Khandelwal commented on HBASE-15775:
---

Thanks [~stack], I can now.

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
> Attachments: HBASE-15775.0.98.00
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Commented] (HBASE-15900) RS stuck in get lock of HStore

2016-05-31 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309259#comment-15309259
 ] 

Heng Chen commented on HBASE-15900:
---

Maybe.   The HDFS version is 2.5.1.   Let me try to upgrade my HDFS cluster 
and test again.   Thanks [~sergey.soldatov]

> RS stuck in get lock of HStore
> --
>
> Key: HBASE-15900
> URL: https://issues.apache.org/jira/browse/HBASE-15900
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.3.0
>Reporter: Heng Chen
> Attachments: 0d32a6bab354e6cc170cd59a2d485797.jstack.txt, 
> 0d32a6bab354e6cc170cd59a2d485797.rs.log, 9fe15a52_9fe15a52_save, 
> c91324eb_81194e359707acadee2906ffe36ab130.log, dump.txt
>
>
> It happens on my production cluster when I run an MR job.  I saved the 
> dump.txt from this RS's web UI.
> Many threads are stuck here:
> {code}
> Thread 133 (B.defaultRpcServer.handler=94,queue=4,port=16020):
>   State: WAITING
>   Blocked count: 477816
>   Waited count: 535255
>   Waiting on 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@6447ba67
>   Stack:
>     sun.misc.Unsafe.park(Native Method)
>     java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>     
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>     
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>     
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>     
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>     org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:666)
>     
> org.apache.hadoop.hbase.regionserver.HRegion.applyFamilyMapToMemstore(HRegion.java:3621)
>     
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3038)
>     
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2793)
>     
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2735)
>     
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>     
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>     
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2029)
>     
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>     org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>     org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     java.lang.Thread.run(Thread.java:745)
> {code}
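The parked state in the dump can be reproduced in miniature with a plain ReentrantReadWriteLock. A small self-contained demonstration, where the writer thread stands in for whatever is holding HStore's write lock (e.g. a long-running flush or snapshot):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shows handler-style reader threads parking (State: WAITING) while a
// writer holds the lock, as in the HStore.add() frames above.
public class LockDemo {
    public static boolean readBlockedWhileWriteHeld() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.writeLock().lock();                  // writer holds the lock
        Thread reader = new Thread(() -> {
            lock.readLock().lock();               // parks, like Thread 133
            lock.readLock().unlock();
        });
        reader.start();
        try {
            // Wait up to ~2s for the reader to park on the lock's AQS queue.
            for (int i = 0; i < 200
                    && reader.getState() != Thread.State.WAITING; i++) {
                Thread.sleep(10);
            }
            boolean blocked = reader.getState() == Thread.State.WAITING;
            lock.writeLock().unlock();            // release; reader proceeds
            reader.join(2000);
            return blocked && !reader.isAlive();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

With 100 handler threads instead of one reader, the jstack looks exactly like the dump: every handler WAITING on the same $NonfairSync object.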





[jira] [Updated] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal updated HBASE-15775:
--
Attachment: HBASE-15775.0.98.00

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
> Attachments: HBASE-15775.0.98.00
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Assigned] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread Vishal Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal reassigned HBASE-15775:
-

Assignee: Vishal Khandelwal

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
>
> Looks like the result of an error in the backport done in HBASE-13712. We 
> have an AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Resolved] (HBASE-15861) Add support for table sets in restore operation

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-15861.

  Resolution: Fixed
Hadoop Flags: Reviewed

> Add support for table sets in restore operation
> ---
>
> Key: HBASE-15861
> URL: https://issues.apache.org/jira/browse/HBASE-15861
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-15861-v1.patch, HBASE-15861-v2.patch
>
>
> We support backup operation for table set, but there is no support for 
> restore operation for table set yet.





[jira] [Commented] (HBASE-15893) Get object

2016-05-31 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309195#comment-15309195
 ] 

Elliott Clark commented on HBASE-15893:
---

bq.Our KV can be something like: 
https://github.com/google/leveldb/blob/master/include/leveldb/slice.h.
https://github.com/google/leveldb/blob/master/include/leveldb/db.h#L83
That's how bytes are passed back. If we want something like Slice, that's what 
IOBufs are.

bq.The Cell interface is not just for off-heap KV versus on-heap.
The only difference between Cell and KeyValue is which accessors are there. 
Cell was a hack to get around removing some accessors that made promises about 
data layout. We have no such inconvenience of having already promised a static 
data layout, hence the Cell/KV distinction isn't needed. We shouldn't add any 
classes that aren't needed yet.

> Get object
> --
>
> Key: HBASE-15893
> URL: https://issues.apache.org/jira/browse/HBASE-15893
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15893.HBASE-14850.v1.patch
>
>
> Patch for creating Get objects.  Get objects can be passed to the Table 
> implementation to fetch results for a given row. 





[jira] [Commented] (HBASE-15927) Remove HMaster.assignRegion()

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309119#comment-15309119
 ] 

Hadoop QA commented on HBASE-15927:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 134m 20s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
46s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 192m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807268/HBASE-15927-v0.patch |
| JIRA Issue | HBASE-15927 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 015f2ef |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2072/testReport/ |
| modules | C: hbase-server 

[jira] [Commented] (HBASE-15911) NPE in AssignmentManager.onRegionTransition after Master restart

2016-05-31 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309091#comment-15309091
 ] 

Pankaj Kumar commented on HBASE-15911:
--

An NPE will be thrown when the HMaster is still in its initialization phase 
(initialization not yet completed) and a RS sends "reportRegionStateTransition" 
in the meantime, in AssignmentManager.onRegionTransition():
{code}
case READY_TO_SPLIT:
  try {
regionStateListener.onRegionSplit(hri);
if (!((HMaster)server).getSplitOrMergeTracker().isSplitOrMergeEnabled(
Admin.MasterSwitchType.SPLIT)) {
  errorMsg = "split switch is off!";
}
  } catch (IOException exp) {
errorMsg = StringUtils.stringifyException(exp);
  }
  break;
{code}

regionStateListener is only initialized during quota manager initialization, in 
HMaster.finishActiveMasterInitialization():
{code}
status.setStatus("Starting quota manager");
initQuotaManager();
{code}
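One possible shape of a fix, as a self-contained sketch (class and method names are hypothetical, not the actual AssignmentManager code): refuse the transition with an error message while the listener is still unset, instead of dereferencing it.

```java
// Hypothetical guard sketch: the listener field is null until quota
// manager init runs, so a transition arriving early must be rejected
// cleanly rather than throwing an NPE.
public class TransitionSketch {
    public interface RegionStateListener {
        void onRegionSplit(Object hri);
    }

    private volatile RegionStateListener regionStateListener; // null until init

    /** Returns an error message, or null when the transition is accepted. */
    public String onReadyToSplit(Object hri) {
        if (regionStateListener == null) {
            return "Master is initializing; retry region transition later";
        }
        regionStateListener.onRegionSplit(hri);
        return null;
    }

    /** Stands in for the quota-manager startup that sets the listener. */
    public void init(RegionStateListener listener) {
        regionStateListener = listener;
    }
}
```

The RS already retries reportRegionStateTransition on error responses, so rejecting early transitions is safe.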

> NPE in AssignmentManager.onRegionTransition after Master restart
> 
>
> Key: HBASE-15911
> URL: https://issues.apache.org/jira/browse/HBASE-15911
> Project: HBase
>  Issue Type: Bug
>  Components: master, Region Assignment
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>
> 16/05/27 17:49:18 ERROR ipc.RpcServer: Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:4364)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1421)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2239)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:116)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112)
>   at java.lang.Thread.run(Thread.java:745)
> I'm pretty sure I've seen it before and more than once, but never got to dig 
> in.





[jira] [Commented] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-05-31 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309049#comment-15309049
 ] 

Duo Zhang commented on HBASE-15921:
---

[~stack] I think the Future here is the netty one? It has the same name as the 
JDK one but adds a lot of methods to support event-driven use.

[~jurmous] Could you please upload the patch to review board? It is too large 
to be reviewed here.

Thanks.
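For readers unfamiliar with the distinction: the JDK's java.util.concurrent.Future only offers a blocking get(), while netty's io.netty.util.concurrent.Future adds listener callbacks. The callback style can be illustrated with the stdlib's CompletableFuture (this is an analogy for the event-driven shape, not the netty API itself):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Callback-style completion: no thread blocks on get(); the result is
// delivered to a registered listener when the future completes.
public class FutureStyles {
    public static String callbackResult() {
        AtomicReference<String> out = new AtomicReference<>();
        CompletableFuture<String> f = new CompletableFuture<>();
        f.thenAccept(out::set);      // register listener, event-driven style
        f.complete("row-result");    // completion fires the callback
        return out.get();
    }
}
```

This is the programming model an AsyncTable wants: the RPC layer completes the future, and user code reacts without tying up a handler thread.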

> Add first AsyncTable impl and create TableImpl based on it
> --
>
> Key: HBASE-15921
> URL: https://issues.apache.org/jira/browse/HBASE-15921
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-15921.patch, HBASE-15921.v1.patch
>
>
> First we create an AsyncTable interface with implementation without the Scan 
> functionality. Those will land in a separate patch since they need a refactor 
> of existing scans.
> Also added is a new TableImpl to replace HTable. It uses the AsyncTableImpl 
> internally and should be a bit faster because it jumps through fewer hoops 
> to do the ProtoBuf transport. This way we can run all existing tests on the 
> AsyncTableImpl to guarantee its quality.





[jira] [Commented] (HBASE-15929) There are two classes with name TestRegionServerMetrics

2016-05-31 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309029#comment-15309029
 ] 

Stephen Yuan Jiang commented on HBASE-15929:


The one in hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver has 
been there for a long time.

The one in hbase-server/src/test/java/org/apache/hadoop/hbase was recently 
introduced by HBASE-15197 (Expose filtered read requests metric to metrics 
framework and Web UI, [~Eungsop Yoo]).  We can either move this file to 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver and rename it 
to TestRegionServerMetrics2.java, or merge the tests in this new file into the 
existing one.

> There are two classes with name TestRegionServerMetrics
> ---
>
> Key: HBASE-15929
> URL: https://issues.apache.org/jira/browse/HBASE-15929
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> TestRegionServerMetrics classes should be merged. 





[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309024#comment-15309024
 ] 

Hudson commented on HBASE-15923:


FAILURE: Integrated in HBase-1.3 #722 (See 
[https://builds.apache.org/job/HBase-1.3/722/])
HBASE-15923 Shell rows counter test fails (tedyu: rev 
5824f2236b8b59882113c523d689b734b3ff4996)
* hbase-shell/src/test/ruby/hbase/table_test.rb


> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Updated] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15923:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309015#comment-15309015
 ] 

Hudson commented on HBASE-15923:


SUCCESS: Integrated in HBase-1.4 #187 (See 
[https://builds.apache.org/job/HBase-1.4/187/])
HBASE-15923 Shell rows counter test fails (tedyu: rev 
0cedd8b344acc54534630a65ce7ecb9de119b2b0)
* hbase-shell/src/test/ruby/hbase/table_test.rb


> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-05-31 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308995#comment-15308995
 ] 

huaxiang sun commented on HBASE-15908:
--

Thanks [~mantonov]!

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1135)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be 
> direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.&lt;init&gt;(HFileReaderV2.java:151)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.&lt;init&gt;(HFileReaderV3.java:78)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, as in 
> Hadoop's DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   
> }
> So we were fine. However, now we drop below that and try to use a 
> slightly different variant of native crc32 (if one is available) taking a 
> ByteBuffer instead of byte[], which expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix working on all Hadoops would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) 
> {
> I don't see why we need it. Let me test.
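The hasArray()-based dispatch described above is plain java.nio behavior and can be checked in isolation. The snippet below is a generic JDK illustration (not HBase code) of why an asReadOnlyBuffer() view falls out of the array fast path: a read-only view of a heap buffer reports hasArray() == false, so callers that branch on hasArray() fall through to the direct-buffer path.

```java
import java.nio.ByteBuffer;

public class BufferDispatchDemo {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(16);
        // A heap buffer exposes its backing array, so a hasArray()
        // fast path (like the one in DataChecksum#verifyChunkedSums) is taken.
        System.out.println(heap.hasArray());                    // true

        // A read-only view of the same heap buffer hides the array:
        // hasArray() returns false, so code falls through to the
        // native path that then demands a direct buffer.
        System.out.println(heap.asReadOnlyBuffer().hasArray()); // false

        // Only a direct buffer satisfies the native crc32 variant.
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        System.out.println(direct.isDirect());                  // true
        System.out.println(direct.hasArray());                  // false
    }
}
```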



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-05-31 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308986#comment-15308986
 ] 

huaxiang sun commented on HBASE-15908:
--

Never mind, it seems that branch-1.2 is ok.

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1135)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be 
> direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.&lt;init&gt;(HFileReaderV2.java:151)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.&lt;init&gt;(HFileReaderV3.java:78)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, as in 
> Hadoop's DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   
> }
> So we were fine. However, now we drop below that and try to use a 
> slightly different variant of native crc32 (if one is available) taking a 
> ByteBuffer instead of byte[], which expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix working on all Hadoops would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) 
> {
> I don't see why we need it. Let me test.





[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-05-31 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308987#comment-15308987
 ] 

Mikhail Antonov commented on HBASE-15908:
-

No, it doesn't. See the last several comments on HBASE-11625.

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1135)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be 
> direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.&lt;init&gt;(HFileReaderV2.java:151)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.&lt;init&gt;(HFileReaderV3.java:78)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, as in 
> Hadoop's DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   
> }
> So we were fine. However, now we drop below that and try to use a 
> slightly different variant of native crc32 (if one is available) taking a 
> ByteBuffer instead of byte[], which expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix working on all Hadoops would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) 
> {
> I don't see why we need it. Let me test.





[jira] [Commented] (HBASE-15908) Checksum verification is broken due to incorrect passing of ByteBuffers in DataChecksum

2016-05-31 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308978#comment-15308978
 ] 

huaxiang sun commented on HBASE-15908:
--

HBASE-11625 is in branch-1.2, but I do not see HBASE-15908 in branch-1.2. Does 
it need to be committed to branch-1.2 as well? Thanks.

> Checksum verification is broken due to incorrect passing of ByteBuffers in 
> DataChecksum
> ---
>
> Key: HBASE-15908
> URL: https://issues.apache.org/jira/browse/HBASE-15908
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0
>
> Attachments: master.v1.patch
>
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7).
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.&lt;init&gt;(StoreFile.java:1135)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be 
> direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.&lt;init&gt;(HFileReaderV2.java:151)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.&lt;init&gt;(HFileReaderV3.java:78)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, as in 
> Hadoop's DataChecksum#verifyChunkedSums we would take this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   
> }
> So we were fine. However, now we drop below that and try to use a 
> slightly different variant of native crc32 (if one is available) taking a 
> ByteBuffer instead of byte[], which expects a DirectByteBuffer, not a heap BB. 
> I think the easiest fix working on all Hadoops would be to remove the 
> asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) 
> {
> I don't see why we need it. Let me test.





[jira] [Commented] (HBASE-15893) Get object

2016-05-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308973#comment-15308973
 ] 

Enis Soztutar commented on HBASE-15893:
---

bq. There's no need to have a byte comparable when string already has all that.
Agreed, we can either use strings directly, or typedef BYTE_ARRAY as a string. 
Our KV can be something like: 
https://github.com/google/leveldb/blob/master/include/leveldb/slice.h.  

bq. Don't need cell and key value. There's no off heap. We've made no promises 
about kv's aways being in the same contiguous memory so there's no need to have 
the distinction.
We still need a way to represent the "data model" of a cell (give me the row 
key from the underlying row), etc. However, one direction that we can follow is 
like the flatbuffers approach, where every "cell" is a string and we have KV as 
an accessor-type object, not instantiated per cell. This will work with 
KeyValueCodec, but not with any other codec that we can improve. The Cell 
interface is not just for off-heap KV versus on-heap. The CellCodec can encode 
and re-use the same underlying byte[]s for the rowKey, CF, etc. across cells. I 
think we do not want to limit ourselves to only being able to use the KV 
representation in RPC. So I would opt for a Cell-like interface type where the 
scan's Results can be encoded more efficiently. For that, we still need a Cell 
interface and a concrete KV implementation. However, we can make KV a private 
class, not part of the public API.

Let's create a separate jira for KV + Cell + Bytes as a foundational patch for 
the rest of Get / Put, etc. 
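The "accessor-type object, not instantiated per cell" idea can be sketched as a flyweight that reads fields out of a shared byte[] and is re-pointed at successive cells instead of allocating one object per cell. Note the actual client under discussion is C++ and the layout below is invented for illustration; it is NOT the real KeyValue wire format, and all names here are hypothetical.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical flyweight accessor over a shared buffer. Assumed layout
// (illustrative only): [2-byte row length][row bytes][1-byte family length].
public class CellAccessor {
    private byte[] buf;
    private int offset;

    // Re-point this accessor at a cell; no per-cell allocation happens.
    public CellAccessor wrap(byte[] buf, int offset) {
        this.buf = buf;
        this.offset = offset;
        return this;
    }

    public int rowLength() {
        return ((buf[offset] & 0xFF) << 8) | (buf[offset + 1] & 0xFF);
    }

    public String row() {
        return new String(buf, offset + 2, rowLength(), StandardCharsets.UTF_8);
    }

    public int familyLength() {
        return buf[offset + 2 + rowLength()] & 0xFF;
    }

    public static void main(String[] args) {
        byte[] buf = {0, 2, 'r', '1', 1};
        CellAccessor cell = new CellAccessor().wrap(buf, 0);
        System.out.println(cell.row());          // prints "r1"
        System.out.println(cell.familyLength()); // prints 1
    }
}
```

One reused accessor keeps GC pressure flat regardless of how many cells a Result holds, which is the point of the flatbuffers-style direction above.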

> Get object
> --
>
> Key: HBASE-15893
> URL: https://issues.apache.org/jira/browse/HBASE-15893
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15893.HBASE-14850.v1.patch
>
>
> Patch for creating Get objects.  Get objects can be passed to the Table 
> implementation to fetch results for a given row. 





[jira] [Commented] (HBASE-15893) Get object

2016-05-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308920#comment-15308920
 ] 

Enis Soztutar commented on HBASE-15893:
---

bq. Motivation for creating hconstants was same as above. If we are doing away 
with hconstants, should we define the constants in the separate classes when 
they are being used ? 
Indeed. HConstants was an idea that went rogue. The new thinking is to 
declare the constants in the classes where they are used. 
bq. I had created separate branches for Makefile and Get patch. So the Get 
patch consists of the source code necessary for Get objects in addition to the 
Makefile
You can create stacked patches, but every single issue should only contain 
changes relevant to that particular issue. With git, managing this is pretty 
easy with {{git branch}}, {{git rebase -i}}, etc. One complication is 
with RB, where the patch has to be generated with {{git diff --full-index}}. 

> Get object
> --
>
> Key: HBASE-15893
> URL: https://issues.apache.org/jira/browse/HBASE-15893
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
> Attachments: HBASE-15893.HBASE-14850.v1.patch
>
>
> Patch for creating Get objects.  Get objects can be passed to the Table 
> implementation to fetch results for a given row. 





[jira] [Commented] (HBASE-15929) There are two classes with name TestRegionServerMetrics

2016-05-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308885#comment-15308885
 ] 

Enis Soztutar commented on HBASE-15929:
---

{code}
HW10676:hbase$ find . -name "TestRegionServerMetrics.java" 
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
./hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionServerMetrics.java
{code}


> There are two classes with name TestRegionServerMetrics
> ---
>
> Key: HBASE-15929
> URL: https://issues.apache.org/jira/browse/HBASE-15929
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> TestRegionServerMetrics classes should be merged. 





[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308854#comment-15308854
 ] 

Hudson commented on HBASE-15923:


FAILURE: Integrated in HBase-Trunk_matrix #962 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/962/])
HBASE-15923 Shell rows counter test fails (tedyu: rev 
015f2ef6292df52270df8845ccd244a97deb9c98)
* hbase-shell/src/test/ruby/hbase/table_test.rb


> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Commented] (HBASE-15907) Missing documentation of create table split options

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308855#comment-15308855
 ] 

Hudson commented on HBASE-15907:


FAILURE: Integrated in HBase-Trunk_matrix #962 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/962/])
HBASE-15907 updates for HBase Shell pre-splitting docs (mstanleyjones: rev 
73ec33856d0ee2ac1e058c6f7e1ccffa4476fbc0)
* src/main/asciidoc/_chapters/performance.adoc
* src/main/asciidoc/_chapters/shell.adoc


> Missing documentation of create table split options
> ---
>
> Key: HBASE-15907
> URL: https://issues.apache.org/jira/browse/HBASE-15907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.2.1, 1.1.3, 1.1.4, 1.1.5
>Reporter: ronan stokes
>Assignee: ronan stokes
>  Labels: documentation, patch
> Fix For: 2.0.0
>
> Attachments: HBASE-15907-v1.patch, HBASE-15907.patch
>
>
> Earlier versions of the online documentation seemed to have more material 
> around the split options available in the HBase shell - but these seem to 
> have been omitted in the process of various updates. 
> Presplitting has minimal matches and only brings up references to 
> presplitting from code. 
> However, there are a number of options relating to the creation of splits in 
> tables available in the HBase shell.
> For example :
> - create table with set of split literals
> - create table specifying number of splits and a split algorithm
> - create table specifying a split file 





[jira] [Updated] (HBASE-15927) Remove HMaster.assignRegion()

2016-05-31 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15927:

Status: Patch Available  (was: Open)

> Remove HMaster.assignRegion()
> -
>
> Key: HBASE-15927
> URL: https://issues.apache.org/jira/browse/HBASE-15927
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15927-v0.patch
>
>
> Another cleanup to have a smaller integration patch for the new AM:
> get rid of HMaster.assignRegion(), which was used only by a few tests, 
> and replace that assignRegion()+wait() with an HTU call.





[jira] [Commented] (HBASE-15927) Remove HMaster.assignRegion()

2016-05-31 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308838#comment-15308838
 ] 

Stephen Yuan Jiang commented on HBASE-15927:


In branch-1, MasterRpcServices#unassignRegion() also calls 
HMaster#assignRegion(). The call was removed by HBASE-11732 in the master 
branch. Now only tests use HMaster#assignRegion(), so it is good that we move 
this function to a test utility.

+1.

> Remove HMaster.assignRegion()
> -
>
> Key: HBASE-15927
> URL: https://issues.apache.org/jira/browse/HBASE-15927
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15927-v0.patch
>
>
> Another cleanup to have a smaller integration patch for the new AM:
> get rid of HMaster.assignRegion(), which was used only by a few tests, 
> and replace that assignRegion()+wait() with an HTU call.





[jira] [Commented] (HBASE-15929) There are two classes with name TestRegionServerMetrics

2016-05-31 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308823#comment-15308823
 ] 

Stephen Yuan Jiang commented on HBASE-15929:


I only see one under 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver

> There are two classes with name TestRegionServerMetrics
> ---
>
> Key: HBASE-15929
> URL: https://issues.apache.org/jira/browse/HBASE-15929
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> TestRegionServerMetrics classes should be merged. 





[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308801#comment-15308801
 ] 

Hudson commented on HBASE-15923:


SUCCESS: Integrated in HBase-1.3-IT #688 (See 
[https://builds.apache.org/job/HBase-1.3-IT/688/])
HBASE-15923 Shell rows counter test fails (tedyu: rev 
5824f2236b8b59882113c523d689b734b3ff4996)
* hbase-shell/src/test/ruby/hbase/table_test.rb


> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Commented] (HBASE-15900) RS stuck in get lock of HStore

2016-05-31 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308776#comment-15308776
 ] 

Sergey Soldatov commented on HBASE-15900:
-

[~chenheng] Which HDFS version are you using? It may be HDFS-7005, since the 
hdfs client just gets stuck during the read. 

> RS stuck in get lock of HStore
> --
>
> Key: HBASE-15900
> URL: https://issues.apache.org/jira/browse/HBASE-15900
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.3.0
>Reporter: Heng Chen
> Attachments: 0d32a6bab354e6cc170cd59a2d485797.jstack.txt, 
> 0d32a6bab354e6cc170cd59a2d485797.rs.log, 9fe15a52_9fe15a52_save, 
> c91324eb_81194e359707acadee2906ffe36ab130.log, dump.txt
>
>
> It happens on my production cluster when i run MR job.  I save the dump.txt 
> from this RS webUI.
> Many threads stuck here:
> {code}
> Thread 133 (B.defaultRpcServer.handler=94,queue=4,port=16020):
>32   State: WAITING
>31   Blocked count: 477816
>30   Waited count: 535255
>29   Waiting on 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@6447ba67
>28   Stack:
>27 sun.misc.Unsafe.park(Native Method)
>26 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>25 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>24 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
>23 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
>22 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
>21 org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:666)
>20 
> org.apache.hadoop.hbase.regionserver.HRegion.applyFamilyMapToMemstore(HRegion.java:3621)
>19 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3038)
>18 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2793)
>17 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2735)
>16 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>15 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>14 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2029)
>13 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>12 org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>11 org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>10 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> 9 org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> 8 java.lang.Thread.run(Thread.java:745)
> {code}
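The stack shape above, with many handlers parked in ReentrantReadWriteLock$ReadLock.lock(), is what read-lock acquirers queued behind a held write lock look like. A minimal standalone reproduction follows; this is generic JDK code, not the HStore internals, and the "writer" here merely stands in for whatever held the store lock exclusively.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockStallDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        // Stand-in for a long-running exclusive holder of the store lock.
        lock.writeLock().lock();

        CountDownLatch started = new CountDownLatch(1);
        Thread handler = new Thread(() -> {
            started.countDown();
            lock.readLock().lock();   // parks here, like HStore.add() above
            lock.readLock().unlock();
        });
        handler.start();
        started.await();
        Thread.sleep(100);            // give the reader time to park

        // The reader sits in the lock's wait queue, mirroring the WAITING
        // state of the RPC handler threads in the jstack output.
        System.out.println(lock.getQueueLength());

        lock.writeLock().unlock();    // releasing the writer unblocks readers
        handler.join();
    }
}
```

With hundreds of handler threads and one stuck exclusive holder (e.g. a reader wedged in HDFS, as suggested in the comment), every incoming mutation piles up exactly like the dump shows.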





[jira] [Created] (HBASE-15926) Makefiles don't have headers

2016-05-31 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15926:
-

 Summary: Makefiles don't have headers
 Key: HBASE-15926
 URL: https://issues.apache.org/jira/browse/HBASE-15926
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark


The makefiles are not correctly licensed anymore after HBASE-15851





[jira] [Commented] (HBASE-15928) hbase backup delete command does not remove backup root dir from hdfs

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308750#comment-15308750
 ] 

Hadoop QA commented on HBASE-15928:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-15928 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807269/15928.v1.txt |
| JIRA Issue | HBASE-15928 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2071/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> hbase backup delete command does not remove backup root dir from hdfs
> -
>
> Key: HBASE-15928
> URL: https://issues.apache.org/jira/browse/HBASE-15928
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 15928.v1.txt
>
>
> [~romil.choksi] reported the following bug.
> hbase backup delete command successfully deletes backup
> {code}
> hbase@hbase-backup-test-5:~> hbase backup delete backup_1464217940560
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Delete backup failed: no information found for backupID=delete
> 2016-05-26 01:44:40,077 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t1.
> 2016-05-26 01:44:40,081 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t2.
> 2016-05-26 01:44:40,085 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t3.
> Delete backup for backupID=backup_1464217940560 completed.
> {code}
> Listing the backup directory of the backup that was just deleted
> {code}
> hbase@hbase-backup-test-5:~> hdfs dfs -ls /user/hbase
> Found 37 items
> drwx--   - hbase hbase  0 2016-05-25 23:13 /user/hbase/.staging
> drwxr-xr-x   - hbase hbase  0 2016-05-24 19:42 
> /user/hbase/backup_1464047611132
> 
> drwxr-xr-x   - hbase hbase  0 2016-05-25 23:08 
> /user/hbase/backup_1464217727296
> drwxr-xr-x   - hbase hbase  0 2016-05-26 01:44 
> /user/hbase/backup_1464217940560
> {code}
> Backup root dir still exists
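Per the listing, only the per-table data under the backup directory was removed, so the fix presumably also needs to delete the backup root itself; on HDFS that would be a recursive FileSystem.delete(path, true). The sketch below shows the recursive-delete idea with a local-filesystem stand-in (java.nio); names are illustrative, and this is not the actual patch.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class BackupRootCleanup {
    // Recursively delete a backup root directory and everything under it.
    // On HDFS this whole method collapses to FileSystem.delete(root, true).
    public static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return;
        }
        try (Stream<Path> walk = Files.walk(root)) {
            // Reverse order deletes children before their parent directories.
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("backup_demo");
        Files.createDirectories(root.resolve("default/t1"));
        deleteRecursively(root);
        System.out.println(Files.exists(root)); // prints false
    }
}
```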





[jira] [Created] (HBASE-15929) There are two classes with name TestRegionServerMetrics

2016-05-31 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-15929:
-

 Summary: There are two classes with name TestRegionServerMetrics
 Key: HBASE-15929
 URL: https://issues.apache.org/jira/browse/HBASE-15929
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar


TestRegionServerMetrics classes should be merged. 





[jira] [Updated] (HBASE-15928) hbase backup delete command does not remove backup root dir from hdfs

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15928:
---
Attachment: 15928.v1.txt

> hbase backup delete command does not remove backup root dir from hdfs
> -
>
> Key: HBASE-15928
> URL: https://issues.apache.org/jira/browse/HBASE-15928
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 15928.v1.txt
>
>
> [~romil.choksi] reported the following bug.
> hbase backup delete command successfully deletes backup
> {code}
> hbase@hbase-backup-test-5:~> hbase backup delete backup_1464217940560
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Delete backup failed: no information found for backupID=delete
> 2016-05-26 01:44:40,077 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t1.
> 2016-05-26 01:44:40,081 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t2.
> 2016-05-26 01:44:40,085 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t3.
> Delete backup for backupID=backup_1464217940560 completed.
> {code}
> Listing the backup directory of the backup that was just deleted
> {code}
> hbase@hbase-backup-test-5:~> hdfs dfs -ls /user/hbase
> Found 37 items
> drwx--   - hbase hbase  0 2016-05-25 23:13 /user/hbase/.staging
> drwxr-xr-x   - hbase hbase  0 2016-05-24 19:42 
> /user/hbase/backup_1464047611132
> 
> drwxr-xr-x   - hbase hbase  0 2016-05-25 23:08 
> /user/hbase/backup_1464217727296
> drwxr-xr-x   - hbase hbase  0 2016-05-26 01:44 
> /user/hbase/backup_1464217940560
> {code}
> Backup root dir still exists



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15928) hbase backup delete command does not remove backup root dir from hdfs

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15928:
---
Status: Patch Available  (was: Open)

> hbase backup delete command does not remove backup root dir from hdfs
> -
>
> Key: HBASE-15928
> URL: https://issues.apache.org/jira/browse/HBASE-15928
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 15928.v1.txt
>
>
> [~romil.choksi] reported the following bug.
> hbase backup delete command successfully deletes backup
> {code}
> hbase@hbase-backup-test-5:~> hbase backup delete backup_1464217940560
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Delete backup failed: no information found for backupID=delete
> 2016-05-26 01:44:40,077 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t1.
> 2016-05-26 01:44:40,081 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t2.
> 2016-05-26 01:44:40,085 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t3.
> Delete backup for backupID=backup_1464217940560 completed.
> {code}
> Listing the backup directory of the backup that was just deleted
> {code}
> hbase@hbase-backup-test-5:~> hdfs dfs -ls /user/hbase
> Found 37 items
> drwx--   - hbase hbase  0 2016-05-25 23:13 /user/hbase/.staging
> drwxr-xr-x   - hbase hbase  0 2016-05-24 19:42 
> /user/hbase/backup_1464047611132
> 
> drwxr-xr-x   - hbase hbase  0 2016-05-25 23:08 
> /user/hbase/backup_1464217727296
> drwxr-xr-x   - hbase hbase  0 2016-05-26 01:44 
> /user/hbase/backup_1464217940560
> {code}
> Backup root dir still exists



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15928) hbase backup delete command does not remove backup root dir from hdfs

2016-05-31 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15928:
--

 Summary: hbase backup delete command does not remove backup root 
dir from hdfs
 Key: HBASE-15928
 URL: https://issues.apache.org/jira/browse/HBASE-15928
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


[~romil.choksi] reported the following bug.

hbase backup delete command successfully deletes backup
{code}
hbase@hbase-backup-test-5:~> hbase backup delete backup_1464217940560
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Delete backup failed: no information found for backupID=delete
2016-05-26 01:44:40,077 INFO  [main] impl.BackupUtil: No data has been found in 
hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t1.
2016-05-26 01:44:40,081 INFO  [main] impl.BackupUtil: No data has been found in 
hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t2.
2016-05-26 01:44:40,085 INFO  [main] impl.BackupUtil: No data has been found in 
hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t3.
Delete backup for backupID=backup_1464217940560 completed.
{code}
Listing the backup directory of the backup that was just deleted
{code}
hbase@hbase-backup-test-5:~> hdfs dfs -ls /user/hbase
Found 37 items
drwx--   - hbase hbase  0 2016-05-25 23:13 /user/hbase/.staging
drwxr-xr-x   - hbase hbase  0 2016-05-24 19:42 
/user/hbase/backup_1464047611132

drwxr-xr-x   - hbase hbase  0 2016-05-25 23:08 
/user/hbase/backup_1464217727296
drwxr-xr-x   - hbase hbase  0 2016-05-26 01:44 
/user/hbase/backup_1464217940560
{code}
Backup root dir still exists



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15927) Remove HMaster.assignRegion()

2016-05-31 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15927:

Attachment: HBASE-15927-v0.patch

> Remove HMaster.assignRegion()
> -
>
> Key: HBASE-15927
> URL: https://issues.apache.org/jira/browse/HBASE-15927
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15927-v0.patch
>
>
> Another cleanup to keep the integration patch for the new AM smaller:
> get rid of HMaster.assignRegion(), which was used only by a few tests,
> and replace the assignRegion()+wait() pattern with an HTU call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15927) Remove HMaster.assignRegion()

2016-05-31 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-15927:
---

 Summary: Remove HMaster.assignRegion()
 Key: HBASE-15927
 URL: https://issues.apache.org/jira/browse/HBASE-15927
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.0.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0


Another cleanup to keep the integration patch for the new AM smaller:

get rid of HMaster.assignRegion(), which was used only by a few tests,
and replace the assignRegion()+wait() pattern with an HTU call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15907) Missing documentation of create table split options

2016-05-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15907:
--
Assignee: ronan stokes  (was: Misty Stanley-Jones)

> Missing documentation of create table split options
> ---
>
> Key: HBASE-15907
> URL: https://issues.apache.org/jira/browse/HBASE-15907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.2.1, 1.1.3, 1.1.4, 1.1.5
>Reporter: ronan stokes
>Assignee: ronan stokes
>  Labels: documentation, patch
> Fix For: 2.0.0
>
> Attachments: HBASE-15907-v1.patch, HBASE-15907.patch
>
>
> Earlier versions of the online documentation seemed to have more material 
> around the split options available in the HBase shell, but these seem to 
> have been omitted in the course of various updates. 
> Searching the current docs for "presplitting" yields minimal matches and 
> only brings up references to presplitting from code. 
> However, there are a number of options relating to the creation of splits 
> in tables available in the HBase shell.
> For example:
> - create a table with a set of split literals
> - create a table specifying a number of splits and a split algorithm
> - create a table specifying a split file 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15499) Add multiple data type support for increment

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308645#comment-15308645
 ] 

stack commented on HBASE-15499:
---

I am +1 on this. Needs a release note. [~ndimiduk] The number type stuff is 
good by you?

> Add multiple data type support for increment
> 
>
> Key: HBASE-15499
> URL: https://issues.apache.org/jira/browse/HBASE-15499
> Project: HBase
>  Issue Type: New Feature
>  Components: API
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-15499-V2.diff, HBASE-15499-V3.diff, 
> HBASE-15499-V4.diff, HBASE-15499-V5.diff, HBASE-15499-V5.patch, 
> HBASE-15499.diff
>
>
> Currently the increment assumes long with byte-wise serialization. It's 
> useful to support flexible data types/serializations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308629#comment-15308629
 ] 

stack commented on HBASE-15923:
---

Sounds like you should commit.

> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15822) Move to the latest docker base image

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308619#comment-15308619
 ] 

Hadoop QA commented on HBASE-15822:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
0s {color} | {color:green} HBASE-14850 passed {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
6s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
2s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 53s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 140m 14s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 186m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807213/HBASE-15822.HBASE-14850.patch
 |
| JIRA Issue | HBASE-15822 |
| Optional Tests |  asflicense  shellcheck  shelldocs  cc  unit  hbaseprotoc  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | HBASE-14850 / 5b10031 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2070/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/2070/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2070/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2070/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2070/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Move to the latest docker base image
> 
>
> Key: HBASE-15822
> URL: https://issues.apache.org/jira/browse/HBASE-15822
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15822.HBASE-14850.patch
>
>
> The base docker image got an update to use chef to set everything up. It 
> changes some locations but should be pretty easy to migrate to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308600#comment-15308600
 ] 

Ted Yu commented on HBASE-15923:


For 1.3 and 1.4, the patch makes TestShell pass.
{code}
Running org.apache.hadoop.hbase.client.TestShell
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 220.196 sec - 
in org.apache.hadoop.hbase.client.TestShell
{code}

> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15907) Missing documentation of create table split options

2016-05-31 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-15907:

Attachment: HBASE-15907-v1.patch

> Missing documentation of create table split options
> ---
>
> Key: HBASE-15907
> URL: https://issues.apache.org/jira/browse/HBASE-15907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.2.1, 1.1.3, 1.1.4, 1.1.5
>Reporter: ronan stokes
>Assignee: Misty Stanley-Jones
>  Labels: documentation, patch
> Fix For: 2.0.0
>
> Attachments: HBASE-15907-v1.patch, HBASE-15907.patch
>
>
> Earlier versions of the online documentation seemed to have more material 
> around the split options available in the HBase shell, but these seem to 
> have been omitted in the course of various updates. 
> Searching the current docs for "presplitting" yields minimal matches and 
> only brings up references to presplitting from code. 
> However, there are a number of options relating to the creation of splits 
> in tables available in the HBase shell.
> For example:
> - create a table with a set of split literals
> - create a table specifying a number of splits and a split algorithm
> - create a table specifying a split file 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15907) Missing documentation of create table split options

2016-05-31 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-15907:

   Resolution: Fixed
 Assignee: Misty Stanley-Jones
 Hadoop Flags: Reviewed
Fix Version/s: (was: 1.3.1)
   (was: 1.1.6)
   (was: 1.2.2)
   (was: 1.4.0)
   (was: 1.3.0)
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks [~rstokes]! I committed this, after fixing a few whitespace errors and 
the commit message. I'll attach what I committed as -v1.patch.

> Missing documentation of create table split options
> ---
>
> Key: HBASE-15907
> URL: https://issues.apache.org/jira/browse/HBASE-15907
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.2.1, 1.1.3, 1.1.4, 1.1.5
>Reporter: ronan stokes
>Assignee: Misty Stanley-Jones
>  Labels: documentation, patch
> Fix For: 2.0.0
>
> Attachments: HBASE-15907.patch
>
>
> Earlier versions of the online documentation seemed to have more material 
> around the split options available in the HBase shell, but these seem to 
> have been omitted in the course of various updates. 
> Searching the current docs for "presplitting" yields minimal matches and 
> only brings up references to presplitting from code. 
> However, there are a number of options relating to the creation of splits 
> in tables available in the HBase shell.
> For example:
> - create a table with a set of split literals
> - create a table specifying a number of splits and a split algorithm
> - create a table specifying a split file 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-4368) Expose processlist in shell (per regionserver and perhaps by cluster)

2016-05-31 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-4368:
--
Fix Version/s: 0.98.20

> Expose processlist in shell (per regionserver and perhaps by cluster)
> -
>
> Key: HBASE-4368
> URL: https://issues.apache.org/jira/browse/HBASE-4368
> Project: HBase
>  Issue Type: Task
>  Components: shell
>Reporter: stack
>Assignee: Talat UYARER
>  Labels: beginner
> Fix For: 2.0.0, 1.4.0, 0.98.20
>
> Attachments: HBASE-4368.patch, HBASE-4368v2-withunittest.patch, 
> HBASE-4368v2.patch, HBASE-4368v3.patch, HBASE-4368v4.patch, 
> HBASE-4368v5-branch-1.patch
>
>
> HBASE-4057 adds processlist and it shows in the RS UI.  This issue is about 
> getting the processlist to show in the shell, like it does in mysql.
> Labelling it noob; this is a pretty substantial issue but it shouldn't be too 
> hard -- it'd mostly be plumbing from RS into the shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15919) Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308503#comment-15308503
 ] 

Hudson commented on HBASE-15919:


FAILURE: Integrated in HBase-Trunk_matrix #961 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/961/])
HBASE-15919 Modify docs to change from @Rule to @ClassRule. Also clarify 
(stack: rev 5ea2f092332515eea48136d7d92f7b8ea72df15b)
* src/main/asciidoc/_chapters/developer.adoc


> Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.
> --
>
> Key: HBASE-15919
> URL: https://issues.apache.org/jira/browse/HBASE-15919
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15919.master.001.patch
>
>
> Our timeout for tests is not clear in the refguide. Our @Rule-based 
> CategoryBased timeout reads as if it applies to each individual test, when 
> the timeout is actually for the whole testcase... all the tests that make up 
> the test class. This issue is about cleaning up any ambiguity and promoting 
> the new @ClassRule change added over in HBASE-15915 by @appy.
> Clean up the refguide on what the timeout applies to.
> Add a section on how to add timeouts to tests.
> See the tail of HBASE-15915 for some notes on what to add to the doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15917) Flaky tests dashboard

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308502#comment-15308502
 ] 

Hudson commented on HBASE-15917:


FAILURE: Integrated in HBase-Trunk_matrix #961 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/961/])
HBASE-15917 Addendum. Fix bug in report-flakies.py where hanging tests (stack: 
rev eb64cd9dd13ba297539c409989c63e800cb378a1)
* dev-support/report-flakies.py


> Flaky tests dashboard
> -
>
> Key: HBASE-15917
> URL: https://issues.apache.org/jira/browse/HBASE-15917
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15917.master.001.addendum.patch, 
> HBASE-15917.master.001.addendum2.patch, HBASE-15917.master.001.patch
>
>
> report-flakies.py now outputs a file, dashboard.html.
> The dashboard will then always be accessible from 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html
> *(See [this external link|http://hbase.x10host.com/flaky-tests/] for a pretty 
> version.)*
> Currently it shows:
> - failing tests
> - flakiness %
> - count of times a test failed, timed out, or hung
> - links to Jenkins runs grouped by whether the test succeeded, failed, timed 
> out, or hung in that run.
> Also, once we have set timeouts on tests, they'll no longer be "hanging", 
> since they'll fail with a timeout. Handle this minor difference in 
> findHangingTests.py and show the corresponding stats in the dashboard.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15822) Move to the latest docker base image

2016-05-31 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15822:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed thanks

> Move to the latest docker base image
> 
>
> Key: HBASE-15822
> URL: https://issues.apache.org/jira/browse/HBASE-15822
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15822.HBASE-14850.patch
>
>
> The base docker image got an update to use chef to set everything up. It 
> changes some locations but should be pretty easy to migrate to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15822) Move to the latest docker base image

2016-05-31 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308469#comment-15308469
 ] 

Elliott Clark commented on HBASE-15822:
---

Yeah the proto changes are from a rebase. They needed to be copied in (all the 
scripts to start docker should do that automatically). This is the first patch 
I've put up since the rebase.

> Move to the latest docker base image
> 
>
> Key: HBASE-15822
> URL: https://issues.apache.org/jira/browse/HBASE-15822
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15822.HBASE-14850.patch
>
>
> The base docker image got an update to use chef to set everything up. It 
> changes some locations but should be pretty easy to migrate to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15831) we are running a Spark Job for scanning Hbase table getting Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException

2016-05-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308424#comment-15308424
 ] 

Anoop Sam John commented on HBASE-15831:


Why do you need a filter?  Can you use Scan's setStartRow and setStopRow to 
set the range?  The issue is that you are trying to fetch 500 rows in one RPC 
and there are filters as well, so at the server end we might be touching many 
more rows in one RPC call.  This eats up time, the client side gets an RPC 
timeout, and it retries.  You have to either increase the timeouts and/or 
reduce the possible number of rows touched per scan RPC.
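To make the tradeoff concrete, here is a rough back-of-envelope model in plain Java (illustrative made-up numbers only, not the HBase API) of why a filter that forces the server to examine many rows per returned row can push a single scan RPC past the client timeout, and why reducing rows-per-RPC (scanner caching) helps:

```java
// Back-of-envelope model of per-RPC scan time; all numbers are illustrative.
public class ScanRpcBudget {

    // Estimated server time for one scan RPC: with a filter, the server may
    // examine many more rows than it returns before filling the batch.
    static long rpcMillis(int rowsRequested, int rowsTouchedPerReturned, long perRowMicros) {
        long rowsTouched = (long) rowsRequested * rowsTouchedPerReturned;
        return rowsTouched * perRowMicros / 1000;  // microseconds -> milliseconds
    }

    public static void main(String[] args) {
        long rpcTimeoutMs = 60_000;  // a typical default for hbase.rpc.timeout

        // 500 rows per RPC, filter skipping ~1000 rows per matching row, 200us/row:
        long slow = rpcMillis(500, 1000, 200);
        // Dropping caching to 50 rows per RPC keeps each call under the timeout:
        long fast = rpcMillis(50, 1000, 200);

        // prints: 500-row RPC: 100000 ms, 50-row RPC: 10000 ms (timeout 60000 ms)
        System.out.println("500-row RPC: " + slow + " ms, 50-row RPC: " + fast
            + " ms (timeout " + rpcTimeoutMs + " ms)");
    }
}
```

With the assumed numbers, the 500-row call needs ~100 s of server time and times out, while the 50-row call fits comfortably; narrowing the range with start/stop rows shrinks rowsTouchedPerReturned instead.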

> we are running a Spark Job for scanning Hbase table getting Caused by: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException
> 
>
> Key: HBASE-15831
> URL: https://issues.apache.org/jira/browse/HBASE-15831
> Project: HBase
>  Issue Type: Bug
>Reporter: Neemesh
>
> I am getting the following error when trying to scan an HBase table in the 
> QED environment for a particular collection:
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 1629041 number_of_rows: 100 close_scanner: false next_call_seq: 0
> Following is the command used to execute the Spark job:
> spark-submit --master yarn --deploy-mode client --driver-memory 4g --queue 
> root.ecpqedv1patents --class com.thomsonreuters.spark.hbase.HbaseSparkFinal 
> HbaseSparkVenus.jar ecpqedv1patents:NovusDocCopy w_3rd_bonds
> I also tried running this with the following two parameters added, --num-executors 
> 200 --executor-cores 4, but it was still throwing the same exception.
> I googled and found that adding the following properties should avoid the 
> above issue, but these property changes also did not help:
> .set("hbase.client.pause","1000")
>   .set("hbase.rpc.timeout","9")
>   .set("hbase.client.retries.number","3")
>   .set("zookeeper.recovery.retry","1")
>   .set("hbase.client.operation.timeout","3")
>   .set("hbase.client.scanner.timeout.period","9")
> Please let us know how to resolve this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15721) Optimization in cloning cells into MSLAB

2016-05-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308419#comment-15308419
 ] 

Anoop Sam John commented on HBASE-15721:


Yes.  This increases the responsibility of the MSLAB from just an allocator to 
one that manages the new area for this Cell.  I felt this is ok.  (?)
We need to serialize to both Stream and BB.  As of this patch, a byte[] might 
come into the picture. Once the off-heap MSLAB is also in place, we will need 
BB, so we might as well stick with BB now.  The Streamable interface helps us 
do this serialization in one shot.  Otherwise we would have to parse the 
different lengths and write each of them, and each of the cell parts (like RK, 
CF, etc.), one after the other.  This was already the case in Cell encoding on 
the RPC side (to the codec encoder).  While in MSLAB, we were doing the N-step 
way of writing out a cell into a new area.
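For illustration, here is a toy sketch in plain Java (not the actual Cell/KeyValue API; the [int keyLen][int valLen][key][value] layout is a simplified stand-in) contrasting the multi-step copy with the single System.arraycopy that becomes possible when the cell is already backed by one contiguous byte[]:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class CellCopySketch {

    // Build a toy serialized cell: [int keyLen][int valLen][key][value].
    static byte[] serialize(byte[] key, byte[] value) {
        ByteBuffer bb = ByteBuffer.allocate(8 + key.length + value.length);
        bb.putInt(key.length).putInt(value.length).put(key).put(value);
        return bb.array();
    }

    // Multi-step copy: re-parse the lengths and write each part separately,
    // mirroring the shape of appendToByteArray in the issue description.
    static int appendMultiStep(byte[] src, byte[] out, int offset) {
        ByteBuffer bb = ByteBuffer.wrap(src);
        int keyLen = bb.getInt();
        int valLen = bb.getInt();
        ByteBuffer dst = ByteBuffer.wrap(out, offset, 8 + keyLen + valLen);
        dst.putInt(keyLen).putInt(valLen);
        dst.put(src, 8, keyLen);
        dst.put(src, 8 + keyLen, valLen);
        return offset + 8 + keyLen + valLen;
    }

    // Single-step copy: when the serialized cell is one contiguous byte[],
    // one arraycopy suffices; no length parsing needed.
    static int appendSingleStep(byte[] src, byte[] out, int offset) {
        System.arraycopy(src, 0, out, offset, src.length);
        return offset + src.length;
    }

    public static void main(String[] args) {
        byte[] cell = serialize("row1".getBytes(), "value1".getBytes());
        byte[] a = new byte[cell.length];
        byte[] b = new byte[cell.length];
        appendMultiStep(cell, a, 0);
        appendSingleStep(cell, b, 0);
        System.out.println(Arrays.equals(a, b));  // prints: true
    }
}
```

Both paths produce identical bytes; the single-step path just avoids the repeated length parsing and per-field writes.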


> Optimization in cloning cells into MSLAB
> 
>
> Key: HBASE-15721
> URL: https://issues.apache.org/jira/browse/HBASE-15721
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15721.patch, HBASE-15721_V2.patch
>
>
> Before cells are added to the memstore CSLM, the cell is cloned by copying 
> it to the MSLAB chunk area.  This is not done in an efficient way.
> {code}
> public static int appendToByteArray(final Cell cell, final byte[] output, 
> final int offset) {
> int pos = offset;
> pos = Bytes.putInt(output, pos, keyLength(cell));
> pos = Bytes.putInt(output, pos, cell.getValueLength());
> pos = appendKeyTo(cell, output, pos);
> pos = CellUtil.copyValueTo(cell, output, pos);
> if ((cell.getTagsLength() > 0)) {
>   pos = Bytes.putAsShort(output, pos, cell.getTagsLength());
>   pos = CellUtil.copyTagTo(cell, output, pos);
> }
> return pos;
>   }
> {code}
> The copy takes 9 steps and we end up parsing all the lengths.  When the cell 
> implementation is backed by a single byte[] (like KeyValue), this can be done 
> with a single-step copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15873) ACL for snapshot restore / clone is not enforced

2016-05-31 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15873:
---
Fix Version/s: 0.98.20

> ACL for snapshot restore / clone is not enforced
> 
>
> Key: HBASE-15873
> URL: https://issues.apache.org/jira/browse/HBASE-15873
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 1.3.0, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15873-branch-1.v1.txt
>
>
> [~romil.choksi] reported that a snapshot owner couldn't restore a snapshot on 
> hbase 1.1.
> We saw the following in master log:
> {code}
> 2016-05-20 00:22:17,186 DEBUG 
> [B.defaultRpcServer.handler=23,queue=2,port=2] ipc.RpcServer: 
> B.defaultRpcServer.handler=23,queue=2,port=2: callId: 15 service: 
> MasterService methodName: RestoreSnapshot size: 70 connection: x.y:56508
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_1' (global, action=ADMIN)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requireGlobalPermission(AccessController.java:536)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:512)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preRestoreSnapshot(AccessController.java:1327)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost$73.call(MasterCoprocessorHost.java:881)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1146)
>   at 
> org.apache.hadoop.hbase.master.MasterCoprocessorHost.preRestoreSnapshot(MasterCoprocessorHost.java:877)
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.restoreSnapshot(SnapshotManager.java:726)
> {code}
> After adding some debug information, it turned out that the (request) 
> SnapshotDescription passed to the method doesn't have owner set.
> This problem doesn't exist in master branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308369#comment-15308369
 ] 

stack commented on HBASE-15923:
---

Is this the cause of the current TestShell failures? This stuff below, in the 
recent 1.3 and 1.4 fails?

{code}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Running org.apache.hadoop.hbase.client.TestReplicationShell
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 sec - in 
org.apache.hadoop.hbase.client.TestReplicationShell
Running org.apache.hadoop.hbase.client.TestShell
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 345.342 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.client.TestShell
testRunShellTests(org.apache.hadoop.hbase.client.TestShell)  Time elapsed: 
339.888 sec  <<< ERROR!
org.jruby.embed.EvalFailedException: (RuntimeError) Shell unit tests failed. 
Check output file for details.
at 
org.jruby.embed.internal.EmbedEvalUnitImpl.run(EmbedEvalUnitImpl.java:136)
at 
org.jruby.embed.ScriptingContainer.runUnit(ScriptingContainer.java:1263)
at 
org.jruby.embed.ScriptingContainer.runScriptlet(ScriptingContainer.java:1308)
at 
org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:36)
Caused by: org.jruby.exceptions.RaiseException: (RuntimeError) Shell unit tests 
failed. Check output file for details.
at (Anonymous).(root)(src/test/ruby/tests_runner.rb:84)


Results :

Tests in error:
  TestShell.testRunShellTests:36 » EvalFailed (RuntimeError) Shell unit tests 
fa...

Tests run: 2, Failures: 0, Errors: 1, Skipped: 1
{code}

> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}
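The contract the Ruby assertion checks, that a scan both yields each row to a block and returns the row count, can be sketched in plain Java. The names here are made up for illustration; this is not the shell's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of a scan helper that yields each (row, cells) pair to a callback
// and returns the number of rows seen, mirroring the contract asserted by
// the Ruby test above (rows.keys.size == returned counter).
class ScanCounterDemo {
    static int scanInternal(Map<String, String> table,
                            BiConsumer<String, String> rowCallback) {
        int count = 0;
        for (Map.Entry<String, String> e : table.entrySet()) {
            rowCallback.accept(e.getKey(), e.getValue()); // yield row, cells
            count++;                                      // count the row
        }
        return count; // counter the caller compares against rows it collected
    }

    public static void main(String[] args) {
        Map<String, String> table = new LinkedHashMap<>();
        table.put("r1", "cf:a=1");
        table.put("r2", "cf:a=2");
        Map<String, String> rows = new LinkedHashMap<>();
        int res = scanInternal(table, rows::put);
        if (res != rows.size()) throw new AssertionError("counter mismatch");
    }
}
```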





[jira] [Commented] (HBASE-15698) Increment TimeRange not serialized to server

2016-05-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308351#comment-15308351
 ] 

Ted Yu commented on HBASE-15698:


[~busbey]:
Shall we get a QA run for patch v3 ?

> Increment TimeRange not serialized to server
> 
>
> Key: HBASE-15698
> URL: https://issues.apache.org/jira/browse/HBASE-15698
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: James Taylor
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: phoenix
> Fix For: 1.3.0, 1.0.4, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: 15698-suggest.txt, 15698.v1.txt, 15698.v2.txt, 
> 15698.v3.txt, HBASE-15698.1.patch
>
>
> Before HBase-1.2, the Increment TimeRange set on the client was serialized 
> over to the server. As of HBase 1.2, this appears to no longer be true, as my 
> preIncrement coprocessor always gets HConstants.LATEST_TIMESTAMP as the value 
> of increment.getTimeRange().getMax() regardless of what the client has 
> specified.
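The property the bug violates can be stated as a round-trip invariant: after serializing the Increment's TimeRange to the wire form and back, the server should see the client's bounds, not a LATEST_TIMESTAMP default. The sketch below is purely illustrative (it stands in for the protobuf conversion; it is not HBase's TimeRange class).

```java
// Illustrative round-trip for the invariant described above. The toWire /
// fromWire names are hypothetical stand-ins for the PB request conversion.
class TimeRangeDemo {
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE;
    final long min;
    final long max;

    TimeRangeDemo(long min, long max) { this.min = min; this.max = max; }

    // Encode both bounds into the request...
    static long[] toWire(TimeRangeDemo tr) { return new long[] { tr.min, tr.max }; }

    // ...and decode them on the server side, instead of falling back to the
    // [0, LATEST_TIMESTAMP] default the coprocessor was observing.
    static TimeRangeDemo fromWire(long[] wire) {
        return new TimeRangeDemo(wire[0], wire[1]);
    }

    public static void main(String[] args) {
        TimeRangeDemo client = new TimeRangeDemo(5L, 100L);
        TimeRangeDemo server = fromWire(toWire(client));
        if (server.max == LATEST_TIMESTAMP || server.max != 100L) {
            throw new AssertionError("time range lost in serialization");
        }
    }
}
```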





[jira] [Commented] (HBASE-15698) Increment TimeRange not serialized to server

2016-05-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308349#comment-15308349
 ] 

Anoop Sam John commented on HBASE-15698:


V3 Looks fine.

> Increment TimeRange not serialized to server
> 
>
> Key: HBASE-15698
> URL: https://issues.apache.org/jira/browse/HBASE-15698
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0
>Reporter: James Taylor
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: phoenix
> Fix For: 1.3.0, 1.0.4, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: 15698-suggest.txt, 15698.v1.txt, 15698.v2.txt, 
> 15698.v3.txt, HBASE-15698.1.patch
>
>
> Before HBase-1.2, the Increment TimeRange set on the client was serialized 
> over to the server. As of HBase 1.2, this appears to no longer be true, as my 
> preIncrement coprocessor always gets HConstants.LATEST_TIMESTAMP as the value 
> of increment.getTimeRange().getMax() regardless of what the client has 
> specified.





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-05-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308345#comment-15308345
 ] 

Anoop Sam John commented on HBASE-14921:


Fine with the proposal of breaking it into more than one patch. Yeah, let us 
begin with CellArrayMap then.

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Updated] (HBASE-15915) Set timeouts on hanging tests

2016-05-31 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15915:
-
Release Note: Use @ClassRule to set timeout on test case level (instead of 
@Rule which sets timeout for the test methods). 
CategoryBasedTimeout.forClass(..) determines the timeout value based on 
category annotation (small/medium/large) on the test case. 

> Set timeouts on hanging tests
> -
>
> Key: HBASE-15915
> URL: https://issues.apache.org/jira/browse/HBASE-15915
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15915.master.001.patch, 
> HBASE-15915.master.002.patch
>
>
> - We annotate tests as Small/Medium/Large and define time limits for each, we 
> should use them so tests fail fast and we can run Flaky-Tests job more 
> frequently.
> It'll be hard to do so for all existing tests (1200+ Test*.java files), but 
> I'd like to do it for at least those known to hang. (Found by running 
> report-flakies.py on TRUNK.)
> - In some places, we have @Rule Timeout but it actually sets timeout for 
> atomic tests.
> Basically we can't go the way where we define time limits on class level 
> (Small/Medium/Large tests) and try to enforce timeouts on atomic tests level. 
> It would be painful (probably why no one has done it yet).
> So i'll be changing these timeouts to
> {noformat}
> @ClassRule
>   public static final TestRule timeout = 
> CategoryBasedTimeout.forClass();
> {noformat}
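The distinction driving the {noformat} snippet above can be sketched with plain JDK concurrency: a class-level timeout puts one budget around all the test methods together, instead of one budget per method. This is only an illustration of the idea, not JUnit's TestRule or the CategoryBasedTimeout implementation.

```java
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Plain-JDK sketch: one time budget covers ALL "test methods" of a class,
// which is what a @ClassRule-style timeout gives small/medium/large tests.
class ClassTimeoutDemo {
    static boolean runAllWithin(List<Runnable> tests, long budgetMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> all = pool.submit(() -> tests.forEach(Runnable::run));
        try {
            all.get(budgetMillis, TimeUnit.MILLISECONDS); // whole-class budget
            return true;
        } catch (InterruptedException | TimeoutException | ExecutionException e) {
            all.cancel(true);
            return false; // the class fails fast instead of hanging forever
        } finally {
            pool.shutdownNow();
        }
    }
}
```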





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-05-31 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308334#comment-15308334
 ] 

Edward Bortnikov commented on HBASE-14921:
--

[~stack] - no worries, we're on it, very committed to deliver CellChunkMap. 
Just wanted to deliver first things first to keep the stuff manageable. 

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308312#comment-15308312
 ] 

stack commented on HBASE-14921:
---

Sounds good [~anastas] What is the down side? The CellChunkMap is needed if we 
are to do offheap, right? Is there a risk that CellChunkMap might not arrive? 
Could we get stuck in a state where we could not, say, offheap the segment 
pipeline? This is my only concern. Otherwise, your proposal of piecemealing 
this stuff sounds good to me.

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-15775) Canary launches two AuthUtil Chores

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308304#comment-15308304
 ] 

stack commented on HBASE-15775:
---

Try now [~vishk] I added you as a contributor.

> Canary launches two AuthUtil Chores
> ---
>
> Key: HBASE-15775
> URL: https://issues.apache.org/jira/browse/HBASE-15775
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 0.98.13, 0.98.14, 1.2.0, 0.98.15, 1.2.1, 0.98.16, 
> 0.98.17, 0.98.18, 0.98.16.1, 0.98.19
>Reporter: Sean Busbey
>Priority: Minor
>  Labels: beginner
> Fix For: 1.2.2, 0.98.20
>
>
> Looks like a result of an error in backport done in HBASE-13712. We have a 
> AuthUtil chore both in main() and in run().
> The one in main() should be removed so that the code is consistent with other 
> branches.





[jira] [Reopened] (HBASE-15174) Client Public API should not have PB objects in 2.0

2016-05-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-15174:
---

Reopening. I like your suggestion [~enis] Lets close this when we have UT that 
can surface PBs added by accident to our public API.

> Client Public API should not have PB objects in 2.0
> ---
>
> Key: HBASE-15174
> URL: https://issues.apache.org/jira/browse/HBASE-15174
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Some more cleanup for the parent jira. 
> We have leaked some PB structs in Admin (and possible other places). 
> We should clean up these API before 2.0.
> Examples include: 
> {code}
>   AdminProtos.GetRegionInfoResponse.CompactionState getCompactionState(final 
> TableName tableName)
> throws IOException;
>
> 
>   void snapshot(final String snapshotName,
>   final TableName tableName,
>   HBaseProtos.SnapshotDescription.Type type) throws IOException, 
> SnapshotCreationException,
>   IllegalArgumentException;
>
>   MasterProtos.SnapshotResponse 
> takeSnapshotAsync(HBaseProtos.SnapshotDescription snapshot)
>   throws IOException, SnapshotCreationException;
> {code}





[jira] [Commented] (HBASE-15822) Move to the latest docker base image

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308265#comment-15308265
 ] 

stack commented on HBASE-15822:
---

+1 on docker change.

I don't think you meant to include the proto changes.

> Move to the latest docker base image
> 
>
> Key: HBASE-15822
> URL: https://issues.apache.org/jira/browse/HBASE-15822
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15822.HBASE-14850.patch
>
>
> The base docker image got an update to use chef to set everything up. It 
> changes some locations but should be pretty easy to migrate to.





[jira] [Commented] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308261#comment-15308261
 ] 

stack commented on HBASE-15921:
---

Some notes after a quick scan of the patch (you might want to make use of the 
new ./dev-support/submit-patch.py util going forward... it puts the patch on 
the issue and on RB at the same time).

This has to be this way? (Lots of abstract classes...)

public abstract class AsyncRegionServerCallable extends 
AbstractRegionServerCallable

It probably has to be given you are coming into a convoluted hierarchy that has 
accreted over a long period of time. Was just wondering if could have a 
shallower hierarchy. No issue if can't be done easily... or has to wait till 
later after you've gotten your async client in.

Or, you just moved this existing class out to its own file?

AsyncTable returns Future only? Not CompletableFuture? Consumers won't be able 
to consume AsyncTable in an event-driven way?  We need a callback?

Why do we let EventExecutor out? Especially given it is a netty class. Can we 
contain the fact that the implementation is netty-based?

In the successful future there is the class comment "25  * A Failed Response future"

The replacement of HTable by TableImpl comes later?

Any chance of a note on how the PromiseKeepers work?

[~Apache9] and [~mbertozzi], have a look here lads... you might have a comment 
or so.
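The Future-vs-CompletableFuture question above comes down to how results are consumed: a plain Future only supports blocking get(), while CompletableFuture lets the caller attach a callback and stay event-driven. The names below are illustrative only, not the API in this patch.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of event-driven consumption with CompletableFuture. asyncGet is a
// hypothetical stand-in for an async table RPC; here it completes at once.
class AsyncGetDemo {
    static CompletableFuture<String> asyncGet(String row) {
        return CompletableFuture.completedFuture("value-for-" + row);
    }

    public static void main(String[] args) {
        AtomicReference<String> seen = new AtomicReference<>();
        // Callback style: the handler runs when the result arrives, with no
        // thread parked in a blocking get(). join() here is only so the demo
        // waits for completion before checking.
        asyncGet("row1").thenAccept(seen::set).join();
        if (!"value-for-row1".equals(seen.get())) throw new AssertionError();
    }
}
```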



> Add first AsyncTable impl and create TableImpl based on it
> --
>
> Key: HBASE-15921
> URL: https://issues.apache.org/jira/browse/HBASE-15921
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-15921.patch, HBASE-15921.v1.patch
>
>
> First we create an AsyncTable interface with implementation without the Scan 
> functionality. Those will land in a separate patch since they need a refactor 
> of existing scans.
> Also added is a new TableImpl to replace HTable. It uses the AsyncTableImpl 
> internally and should be a bit faster because it does jump through less hoops 
> to do ProtoBuf transportation. This way we can run all existing tests on the 
> AsyncTableImpl to guarantee its quality.





[jira] [Updated] (HBASE-15822) Move to the latest docker base image

2016-05-31 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15822:
--
Status: Patch Available  (was: Open)

> Move to the latest docker base image
> 
>
> Key: HBASE-15822
> URL: https://issues.apache.org/jira/browse/HBASE-15822
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15822.HBASE-14850.patch
>
>
> The base docker image got an update to use chef to set everything up. It 
> changes some locations but should be pretty easy to migrate to.





[jira] [Commented] (HBASE-15914) lease expired exception when do scan with multiple thread

2016-05-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308220#comment-15308220
 ] 

Anoop Sam John commented on HBASE-15914:


This is the exception you are getting at the client end, right? The 
cancelLease() call seems to be done within a LeaseException catch block. Can 
you find the RS log from around this time and put it here?

> lease expired exception when do scan with multiple thread
> -
>
> Key: HBASE-15914
> URL: https://issues.apache.org/jira/browse/HBASE-15914
> Project: HBase
>  Issue Type: Wish
>  Components: regionserver
>Reporter: wht
>Priority: Minor
>
> Hello, I am new to HBase, and I am now doing performance testing with HBase in 
> our project. I use the scan API to scan a table with filters. Because we need 
> good performance, we scan each region with a single thread: 24 regions in 
> total, so we start 24 scan tasks, and each scan task gets its own scanner.
> But now I get the exception below: 
> 2016-05-30 11:18:46,399 | DEBUG | 
> B.defaultRpcServer.handler=46,queue=6,port=21302 | 
> B.defaultRpcServer.handler=46,queue=6,port=21302: callId: 102 service: 
> ClientService methodName: Scan size: 22 connection: XX:59645 | 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
> org.apache.hadoop.hbase.regionserver.LeaseException: lease '11' does not exist
> at 
> org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2551)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2134)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:103)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-30 11:18:46,417 | DEBUG | 
> B.defaultRpcServer.handler=29,queue=9,port=21302 | 
> B.defaultRpcServer.handler=29,queue=9,port=21302: callId: 104 service: 
> ClientService methodName: Scan size: 22 connection: :59645 | 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
> org.apache.hadoop.hbase.regionserver.LeaseException: lease '17' does not exist
> at 
> org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2551)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2134)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:103)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-30 11:18:46,427 | DEBUG | 
> B.defaultRpcServer.handler=72,queue=12,port=21302 | 
> B.defaultRpcServer.handler=72,queue=12,port=21302: callId: 94 service: 
> ClientService methodName: Scan size: 22 connection: :59645 | 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
> org.apache.hadoop.hbase.regionserver.LeaseException: lease '10' does not exist
> at 
> org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
> at 
> org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2551)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2134)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:103)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-30 11:18:46,454 | DEBUG | 
> B.defaultRpcServer.handler=59,queue=19,port=21302 | 
> B.defaultRpcServer.handler=59,queue=19,port=21302: callId: 88 service: 
> ClientService methodName: Scan size: 22 connection: x:59645 | 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:106)
> 
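One way to read the repeated "lease does not exist" lines above: by the time the handler tries to cancel a scanner lease, that lease has already been removed (for example, expired and swept), so the cancel finds nothing. The toy model below illustrates that sequence; it is not the regionserver's Leases implementation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy lease registry: cancelling an id that is no longer present fails,
// mirroring "lease '11' does not exist" in the log above.
class LeaseDemo {
    private final ConcurrentMap<String, Long> leases = new ConcurrentHashMap<>();

    void addLease(String id, long expiryMillis) {
        leases.put(id, expiryMillis);
    }

    // What a lease sweeper effectively does when a scanner is not renewed
    // in time: the entry disappears from the map.
    void expire(String id) {
        leases.remove(id);
    }

    void cancelLease(String id) {
        if (leases.remove(id) == null) {
            throw new IllegalStateException("lease '" + id + "' does not exist");
        }
    }
}
```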

[jira] [Updated] (HBASE-15822) Move to the latest docker base image

2016-05-31 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15822:
--
Attachment: HBASE-15822.HBASE-14850.patch

> Move to the latest docker base image
> 
>
> Key: HBASE-15822
> URL: https://issues.apache.org/jira/browse/HBASE-15822
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15822.HBASE-14850.patch
>
>
> The base docker image got an update to use chef to set everything up. It 
> changes some locations but should be pretty easy to migrate to.





[jira] [Commented] (HBASE-15174) Client Public API should not have PB objects in 2.0

2016-05-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308198#comment-15308198
 ] 

Enis Soztutar commented on HBASE-15174:
---

[~ramkrishna.s.vasude...@gmail.com] do you have the UT code to generate the 
list above? We can turn that into a UT to make sure that new APIs will not 
creep in with PB signatures. I can work on a patch if you want. 

> Client Public API should not have PB objects in 2.0
> ---
>
> Key: HBASE-15174
> URL: https://issues.apache.org/jira/browse/HBASE-15174
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Some more cleanup for the parent jira. 
> We have leaked some PB structs in Admin (and possible other places). 
> We should clean up these API before 2.0.
> Examples include: 
> {code}
>   AdminProtos.GetRegionInfoResponse.CompactionState getCompactionState(final 
> TableName tableName)
> throws IOException;
>
> 
>   void snapshot(final String snapshotName,
>   final TableName tableName,
>   HBaseProtos.SnapshotDescription.Type type) throws IOException, 
> SnapshotCreationException,
>   IllegalArgumentException;
>
>   MasterProtos.SnapshotResponse 
> takeSnapshotAsync(HBaseProtos.SnapshotDescription snapshot)
>   throws IOException, SnapshotCreationException;
> {code}
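The unit test idea discussed here, catching PB types that creep into public API signatures, can be sketched with reflection: walk a class's public methods and flag any return or parameter type from a protobuf-generated package. The package-name heuristic ("protobuf", ".generated") is an assumption for illustration; the real check may differ.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of a "no PB in public API" check. Names are made up.
class PbLeakCheckDemo {
    static List<String> findPbLeaks(Class<?> api) {
        List<String> leaks = new ArrayList<>();
        for (Method m : api.getMethods()) {
            List<Class<?>> types = new ArrayList<>();
            types.add(m.getReturnType());
            for (Class<?> p : m.getParameterTypes()) {
                types.add(p);
            }
            for (Class<?> t : types) {
                // Primitives and arrays have no package; treat as clean.
                String pkg = t.getPackage() == null ? "" : t.getPackage().getName();
                if (pkg.contains("protobuf") || pkg.contains(".generated")) {
                    leaks.add(m.getName() + " exposes " + t.getName());
                }
            }
        }
        return leaks; // empty list means the API surface is PB-free
    }
}
```

A test asserting this list is empty for each public client interface would keep accidental PB leaks from getting re-added.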





[jira] [Commented] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-05-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308165#comment-15308165
 ] 

stack commented on HBASE-15921:
---

I can do perf compare when ready.

> Add first AsyncTable impl and create TableImpl based on it
> --
>
> Key: HBASE-15921
> URL: https://issues.apache.org/jira/browse/HBASE-15921
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-15921.patch, HBASE-15921.v1.patch
>
>
> First we create an AsyncTable interface with implementation without the Scan 
> functionality. Those will land in a separate patch since they need a refactor 
> of existing scans.
> Also added is a new TableImpl to replace HTable. It uses the AsyncTableImpl 
> internally and should be a bit faster because it does jump through less hoops 
> to do ProtoBuf transportation. This way we can run all existing tests on the 
> AsyncTableImpl to guarantee its quality.





[jira] [Resolved] (HBASE-15917) Flaky tests dashboard

2016-05-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-15917.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0

Re-resolving after appending second addendum.

I like the [~dimaspivak] comments [~appy] Want to address in a third addendum? 
I can push no problem.

> Flaky tests dashboard
> -
>
> Key: HBASE-15917
> URL: https://issues.apache.org/jira/browse/HBASE-15917
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15917.master.001.addendum.patch, 
> HBASE-15917.master.001.addendum2.patch, HBASE-15917.master.001.patch
>
>
> report-flakies.py now outputs a file dashboard.html.
> Then the dashboard will always be accessible from 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html
> *(See [this external link|http://hbase.x10host.com/flaky-tests/] for pretty 
> version.)*
> Currently it shows:
> * Failing tests
> * Flakiness %
> * Count of times a test failed, timed out, or hung
> * Links to Jenkins runs grouped by whether the test succeeded, failed, timed 
> out, or was hanging in that run.
> Also, once we have set timeouts to tests, they'll not be "hanging" anymore 
> since they'll fail with timeout. Handle this minor difference in 
> findHangingTests.py and show the corresponding stats in dashboard.





[jira] [Updated] (HBASE-15919) Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.

2016-05-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15919:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Added some tweaks and pushed. Thanks [~appy]

> Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.
> --
>
> Key: HBASE-15919
> URL: https://issues.apache.org/jira/browse/HBASE-15919
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15919.master.001.patch
>
>
> Our timeout for tests is not clear in the refguide. Our @Rule-based 
> CategoryBasedTimeout applies to each individual test method, when the timeout, 
> it seems, should cover the whole test case, i.e. all the tests that make up 
> the test class. This issue is about clearing up any ambiguity and promoting 
> the new @ClassRule change added over in HBASE-15915 by @appy.
> Clean up the refguide on what the timeout applies to.
> Add a section on how to add timeouts to tests.
> See the tail of HBASE-15915 for notes on what to add to the doc.





[jira] [Comment Edited] (HBASE-15917) Flaky tests dashboard

2016-05-31 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308129#comment-15308129
 ] 

Dima Spivak edited comment on HBASE-15917 at 5/31/16 5:07 PM:
--

Some mostly nit-y Python comments:
* Instead of documenting functions with leading {{#}}, use triple double-quotes 
for docstrings (see PEP 257).
* {{get_bad_tests}} might work better returning a tuple than a list since you 
probably want this immutable.
* Replace {{print}} instances with logging objects when applicable (e.g. for 
script status updates).
* Move the HTML template to its own file instead of putting it inline. We 
should probably do the same with the stylesheet, too.

Really cool idea here, [~appy]. Good work.


was (Author: dimaspivak):
Some mostly nit-y Python comments:
* Instead of documenting functions with leading {{ \# }}s, use triple 
double-quotes for docstrings (see PEP 257).
* {{get_bad_tests}} might work better returning a tuple than a list since you 
probably want this immutable.
* Replace {{print}}s with logging objects when applicable (e.g. for script 
status updates).
* Move the HTML template to its own file instead of putting it inline. We 
should probably do the same with the stylesheet, too.

Really cool idea here, [~appy]. Good work.

> Flaky tests dashboard
> -
>
> Key: HBASE-15917
> URL: https://issues.apache.org/jira/browse/HBASE-15917
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-15917.master.001.addendum.patch, 
> HBASE-15917.master.001.addendum2.patch, HBASE-15917.master.001.patch
>
>
> report-flakies.py now outputs a file dashboard.html.
> Then the dashboard will always be accessible from 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html
> *(See [this external link|http://hbase.x10host.com/flaky-tests/] for pretty 
> version.)*
> Currently it shows:
> * Failing tests
> * Flakiness %
> * Count of times a test failed, timed out, or hung
> * Links to Jenkins runs grouped by whether the test succeeded, failed, timed 
> out, or was hanging in that run.
> Also, once we have set timeouts to tests, they'll not be "hanging" anymore 
> since they'll fail with timeout. Handle this minor difference in 
> findHangingTests.py and show the corresponding stats in dashboard.





[jira] [Commented] (HBASE-15917) Flaky tests dashboard

2016-05-31 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308129#comment-15308129
 ] 

Dima Spivak commented on HBASE-15917:
-

Some mostly nit-y Python comments:
* Instead of documenting functions with leading {{ \# }}s, use triple 
double-quotes for docstrings (see PEP 257).
* {{get_bad_tests}} might work better returning a tuple than a list since you 
probably want this immutable.
* Replace {{print}}s with logging objects when applicable (e.g. for script 
status updates).
* Move the HTML template to its own file instead of putting it inline. We 
should probably do the same with the stylesheet, too.

Really cool idea here, [~appy]. Good work.

> Flaky tests dashboard
> -
>
> Key: HBASE-15917
> URL: https://issues.apache.org/jira/browse/HBASE-15917
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-15917.master.001.addendum.patch, 
> HBASE-15917.master.001.addendum2.patch, HBASE-15917.master.001.patch
>
>
> report-flakies.py now outputs a file dashboard.html.
> Then the dashboard will always be accessible from 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html
> *(See [this external link|http://hbase.x10host.com/flaky-tests/] for pretty 
> version.)*
> Currently it shows:
> * Failing tests
> * Flakiness %
> * Count of times a test failed, timed out, or hung
> * Links to Jenkins runs grouped by whether the test succeeded, failed, timed 
> out, or was hanging in that run.
> Also, once we have set timeouts to tests, they'll not be "hanging" anymore 
> since they'll fail with timeout. Handle this minor difference in 
> findHangingTests.py and show the corresponding stats in dashboard.





[jira] [Commented] (HBASE-15925) compat-module maven variable not evaluated

2016-05-31 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308017#comment-15308017
 ] 

Nick Dimiduk commented on HBASE-15925:
--

FYI [~mantonov], [~busbey]

> compat-module maven variable not evaluated
> --
>
> Key: HBASE-15925
> URL: https://issues.apache.org/jira/browse/HBASE-15925
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0, 1.1.0, 1.2.0, 1.2.1, 1.0.3, 1.1.5
>Reporter: Nick Dimiduk
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>
>
> Looks like we've regressed on HBASE-8488. Have a look at the dependency 
> artifacts list on 
> http://mvnrepository.com/artifact/org.apache.hbase/hbase-testing-util/1.2.1. 
> Notice the direct dependency's artifactId is {{$\{compat.module\}}}.





[jira] [Commented] (HBASE-8488) HBase transitive dependencies not being pulled in when building apps like Flume which depend on HBase

2016-05-31 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308015#comment-15308015
 ] 

Nick Dimiduk commented on HBASE-8488:
-

Thanks for bringing this up, [~dportabella]. We don't re-open closed issues, so 
I've filed a new blocker vs. all release branches: HBASE-15925. We can continue 
the discussion over there.

> HBase transitive dependencies not being pulled in when building apps like 
> Flume which depend on HBase
> -
>
> Key: HBASE-8488
> URL: https://issues.apache.org/jira/browse/HBASE-8488
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.0
>Reporter: Roshan Naik
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.0, 0.95.2
>
> Attachments: client.tgz
>
>
> Here is a snippet of the errors seen when building against Hbase
> {code}
> [WARNING] Invalid POM for org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT, 
> transitive dependencies (if any) will not be available, enable debug logging 
> for more details: Some problems were encountered while processing the POMs:
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ org.apache.hbase:hbase:0.97.0-SNAPSHOT, 
> /Users/rnaik/.m2/repository/org/apache/hbase/hbase/0.97.0-SNAPSHOT/hbase-0.97.0-SNAPSHOT.pom,
>  line 982, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ org.apache.hbase:hbase:0.97.0-SNAPSHOT, 
> /Users/rnaik/.m2/repository/org/apache/hbase/hbase/0.97.0-SNAPSHOT/hbase-0.97.0-SNAPSHOT.pom,
>  line 987, column 21
> {code}





[jira] [Created] (HBASE-15925) compat-module maven variable not evaluated

2016-05-31 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-15925:


 Summary: compat-module maven variable not evaluated
 Key: HBASE-15925
 URL: https://issues.apache.org/jira/browse/HBASE-15925
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 1.1.5, 1.0.3, 1.2.1, 1.2.0, 1.1.0, 1.0.0
Reporter: Nick Dimiduk
Priority: Blocker
 Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6


Looks like we've regressed on HBASE-8488. Have a look at the dependency 
artifacts list on 
http://mvnrepository.com/artifact/org.apache.hbase/hbase-testing-util/1.2.1. 
Notice the direct dependency's artifactId is {{$\{compat.module\}}}.
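For context, a hedged sketch of how such a property is typically wired in a parent POM (the profile id and value below are illustrative, not copied from HBase's actual build): build profiles set {{compat.module}}, and the dependency list references it. Unless the property is interpolated into the POM that gets published, downstream consumers see the literal {{$\{compat.module\}}} as the artifactId, which is exactly the symptom above.

```xml
<!-- Illustrative parent-POM wiring; names are assumptions, not HBase's exact build -->
<profiles>
  <profile>
    <id>hadoop-2.0</id>
    <properties>
      <compat.module>hbase-hadoop2-compat</compat.module>
    </properties>
  </profile>
</profiles>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <!-- Resolves only if ${compat.module} is interpolated before the POM is published -->
      <artifactId>${compat.module}</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```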





[jira] [Commented] (HBASE-8488) HBase transitive dependencies not being pulled in when building apps like Flume which depend on HBase

2016-05-31 Thread David Portabella (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307889#comment-15307889
 ] 

David Portabella commented on HBASE-8488:
-

This issue is not fixed yet.
All versions of hbase-testing-util from 0.99.0 to the latest 1.2.1 still 
depend on the unresolved dependency org.apache.hbase:${compat.module}.

All versions from 0.96.0-hadoop1 to 0.98.19-hadoop2 work fine.

See here:
http://mvnrepository.com/artifact/org.apache.hbase/hbase-testing-util/1.2.1


> HBase transitive dependencies not being pulled in when building apps like 
> Flume which depend on HBase
> -
>
> Key: HBASE-8488
> URL: https://issues.apache.org/jira/browse/HBASE-8488
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.0
>Reporter: Roshan Naik
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.0, 0.95.2
>
> Attachments: client.tgz
>
>
> Here is a snippet of the errors seen when building against Hbase
> {code}
> [WARNING] Invalid POM for org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT, 
> transitive dependencies (if any) will not be available, enable debug logging 
> for more details: Some problems were encountered while processing the POMs:
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ org.apache.hbase:hbase:0.97.0-SNAPSHOT, 
> /Users/rnaik/.m2/repository/org/apache/hbase/hbase/0.97.0-SNAPSHOT/hbase-0.97.0-SNAPSHOT.pom,
>  line 982, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ org.apache.hbase:hbase:0.97.0-SNAPSHOT, 
> /Users/rnaik/.m2/repository/org/apache/hbase/hbase/0.97.0-SNAPSHOT/hbase-0.97.0-SNAPSHOT.pom,
>  line 987, column 21
> {code}





[jira] [Commented] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307785#comment-15307785
 ] 

Hadoop QA commented on HBASE-15923:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 3s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 3s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 2s {color} | 
{color:red} hbase-shell in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
|   | hadoop.hbase.client.rsgroup.TestShellRSGroups |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807149/15923-branch-1.v1.txt 
|
| JIRA Issue | HBASE-15923 |
| Optional Tests |  asflicense  unit  rubocop  ruby_lint  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 75c2360 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2069/artifact/patchprocess/patch-unit-hbase-shell.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/2069/artifact/patchprocess/patch-unit-hbase-shell.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2069/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2069/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh

2016-05-31 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  updated HBASE-15924:

Description: 
As part of HBASE-5939, autorestart of hbase services was added to deal with 
scenarios where hbase services (master/regionserver/master-backup) get killed 
or go down, leading to unplanned outages. The changes were made to 
hbase-daemon.sh to support an autorestart option. 

However, the autorestart implementation doesn't work in standalone mode and 
has a few other gaps relative to the release notes of HBASE-5939. Here is an 
attempt to re-design and fix the functionality, considering all possible use 
cases of hbase service operations.

Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if 
they are not properly terminated, either by a "stop" command or by a cluster 
stop. To ensure that it does not overload the system when the server itself is 
corrupted and the process cannot be restarted, the server sleeps for 5 minutes 
before restarting if the previous start was less than 5 minutes earlier. To use 
it, launch the process with "bin/start-hbase autorestart". This option is not 
fully compatible with the existing "restart" command: if you ask for a restart 
on a server launched with autorestart, the server will restart but the next 
server instance won't be automatically restarted.




  was:
As part of HBASE-5939, the autorestart of hbase services has been added to deal 
with scenarios where hbase services (master/regionserver/master-backup) gets 
killed or goes down leading to unplanned outages. The changes were made to 
hbase-daemon.sh with the help of autorestart option. 

However, the autorestart implementation doesn't work in standalone mode and 
have few gaps with the implementation as per release notes of HBASE-5939. Here 
is an attempt to re-design and fix the functionality considered all possible 
usecases with hbase service operations.

Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if 
they are not properly terminated, either by a "stop" command or by a cluster 
stop. To ensure that it does not overload the system when the server itself is 
corrupted and the process cannot be restarted, the server sleeps for 5 minutes 
before restarting if it was already started 5 minutes ago previously. To use 
it, launch the process with "bin/start-hbase autorestart". This option is not 
fully compatible with the existing "restart" command: if you ask for a restart 
on a server launched with autorestart, the server will restart but the next 
server instance won't be automatically restarted.





> Enhance hbase services autorestart capability to hbase-daemon.sh 
> -
>
> Key: HBASE-15924
> URL: https://issues.apache.org/jira/browse/HBASE-15924
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.19
>Reporter: Loknath Priyatham Teja Singamsetty 
> Fix For: 0.98.19
>
>
> As part of HBASE-5939, autorestart of hbase services was added to deal with 
> scenarios where hbase services (master/regionserver/master-backup) get killed 
> or go down, leading to unplanned outages. The changes were made to 
> hbase-daemon.sh to support an autorestart option. 
> However, the autorestart implementation doesn't work in standalone mode and 
> has a few other gaps relative to the release notes of HBASE-5939. Here is an 
> attempt to re-design and fix the functionality, considering all possible use 
> cases of hbase service operations.
> Release Notes of HBASE-5939:
> --
> When launched with autorestart, HBase processes will automatically restart if 
> they are not properly terminated, either by a "stop" command or by a cluster 
> stop. To ensure that it does not overload the system when the server itself 
> is corrupted and the process cannot be restarted, the server sleeps for 5 
> minutes before restarting if the previous start was less than 5 minutes earlier. 
> To use it, launch the process with "bin/start-hbase autorestart". This option 
> is not fully compatible with the existing "restart" command: if you ask for a 
> restart on a server launched with autorestart, the server will restart but 
> the next server instance won't be automatically restarted.





[jira] [Created] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh

2016-05-31 Thread Loknath Priyatham Teja Singamsetty (JIRA)
Loknath Priyatham Teja Singamsetty  created HBASE-15924:
---

 Summary: Enhance hbase services autorestart capability to 
hbase-daemon.sh 
 Key: HBASE-15924
 URL: https://issues.apache.org/jira/browse/HBASE-15924
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.19
Reporter: Loknath Priyatham Teja Singamsetty 
 Fix For: 0.98.19


As part of HBASE-5939, autorestart of hbase services was added to deal with 
scenarios where hbase services (master/regionserver/master-backup) get killed 
or go down, leading to unplanned outages. The changes were made to 
hbase-daemon.sh via an autorestart option. 

However, the autorestart implementation doesn't work in standalone mode and 
has a few other gaps relative to the release notes of HBASE-5939. Here is an 
attempt to re-design and fix the functionality, considering all possible use 
cases of hbase service operations.

Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if 
they are not properly terminated, either by a "stop" command or by a cluster 
stop. To ensure that it does not overload the system when the server itself is 
corrupted and the process cannot be restarted, the server sleeps for 5 minutes 
before restarting if the previous start was less than 5 minutes earlier. To use 
it, launch the process with "bin/start-hbase autorestart". This option is not 
fully compatible with the existing "restart" command: if you ask for a restart 
on a server launched with autorestart, the server will restart but the next 
server instance won't be automatically restarted.








[jira] [Commented] (HBASE-15920) Backport submit-patch.py to branch-1 and earlier branches.

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307750#comment-15307750
 ] 

Hudson commented on HBASE-15920:


FAILURE: Integrated in HBase-1.4 #186 (See 
[https://builds.apache.org/job/HBase-1.4/186/])
HBASE-15920 Backport submit-patch.py to branch-1 and earlier branches. (stack: 
rev 32258c2b3a535f94cc8348020780d06342528fa7)
* dev-support/python-requirements.txt
* dev-support/submit-patch.py


> Backport submit-patch.py to branch-1 and earlier branches.
> --
>
> Key: HBASE-15920
> URL: https://issues.apache.org/jira/browse/HBASE-15920
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15920.branch-1.001.patch
>
>
> This is a combination of HBASE-15892 and HBASE-15909, plus the fact that 
> python-requirements.txt didn't exist in old branches, because of which the 
> patches weren't directly applicable. It was easier to make a single patch 
> consisting of everything; it should be easier to backport too.





[jira] [Updated] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15923:
---
Status: Patch Available  (was: Open)

> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Updated] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15923:
---
Attachment: 15923-branch-1.v1.txt

> Shell rows counter test fails
> -
>
> Key: HBASE-15923
> URL: https://issues.apache.org/jira/browse/HBASE-15923
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15923-branch-1.v1.txt
>
>
> HBASE-10358 changed the return value from _scan_internal, leading to the 
> assertion failure for "scan with a block should yield rows and return rows 
> counter" :
> {code}
>res = @test_table._scan_internal { |row, cells| rows[row] = cells }
>assert_equal(rows.keys.size, res)
> {code}





[jira] [Created] (HBASE-15923) Shell rows counter test fails

2016-05-31 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15923:
--

 Summary: Shell rows counter test fails
 Key: HBASE-15923
 URL: https://issues.apache.org/jira/browse/HBASE-15923
 Project: HBase
  Issue Type: Test
Affects Versions: 1.3.0
Reporter: Ted Yu
Assignee: Ted Yu


HBASE-10358 changed the return value from _scan_internal, leading to the 
assertion failure for "scan with a block should yield rows and return rows 
counter" :
{code}
   res = @test_table._scan_internal { |row, cells| rows[row] = cells }
   assert_equal(rows.keys.size, res)
{code}





[jira] [Resolved] (HBASE-15922) Fix waitForMaximumCurrentTasks logic in AsyncProcess

2016-05-31 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li resolved HBASE-15922.
---
Resolution: Duplicate

Found that this issue duplicates HBASE-15811 after updating my local git 
repository; sorry for the spam...

> Fix waitForMaximumCurrentTasks logic in AsyncProcess
> 
>
> Key: HBASE-15922
> URL: https://issues.apache.org/jira/browse/HBASE-15922
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.1, 1.1.4
>Reporter: Yu Li
>Assignee: Yu Li
>
> In current implementation of AsyncProcess#waitForMaximumCurrentTasks, we have 
> below codes:
> {code}
> while ((currentInProgress = this.tasksInProgress.get()) > max) {
>   ...
>   try {
> synchronized (this.tasksInProgress) {
>   if (tasksInProgress.get() != oldInProgress) break;
>   this.tasksInProgress.wait(100);
> }
>   } catch (InterruptedException e) {
> throw new InterruptedIOException("#" + id + ", interrupted." +
> " currentNumberOfTask=" + currentInProgress);
>   }
> }
> {code}
> This causes the while loop to end as soon as any task completes within one 
> iteration, making {{tasksInProgress.get()}} no longer equal to 
> {{oldInProgress}}.
> This is a regression caused by HBASE-11403 that only exists in the 
> branch-1/master branches; the difference is easy to see by comparing against 
> the latest 0.98 code.
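The intended logic can be sketched in plain Java (an illustrative stand-in, not the actual AsyncProcess code; the class name and the taskFinished helper are assumptions): the inner timed wait should only be cut short so the outer condition can be re-checked against {{max}}, rather than exiting the whole wait when the in-progress count merely changes.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for AsyncProcess#waitForMaximumCurrentTasks: block
// until the number of in-flight tasks drops to at most `max`.
class TaskWaiter {
    final AtomicLong tasksInProgress = new AtomicLong();

    void waitForMaximumCurrentTasks(long max) throws InterruptedException {
        long currentInProgress;
        while ((currentInProgress = tasksInProgress.get()) > max) {
            final long oldInProgress = currentInProgress;
            synchronized (tasksInProgress) {
                // Skip the timed wait only when the count has changed; control
                // then returns to the outer while, which re-checks against
                // `max` instead of giving up entirely (the regression's behavior).
                if (tasksInProgress.get() == oldInProgress) {
                    tasksInProgress.wait(100);
                }
            }
        }
    }

    // Hypothetical completion hook: decrement and wake any waiters.
    void taskFinished() {
        tasksInProgress.decrementAndGet();
        synchronized (tasksInProgress) {
            tasksInProgress.notifyAll();
        }
    }
}
```

With this shape, a `break` is unnecessary: any change in the count simply falls through to the outer re-check.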





[jira] [Commented] (HBASE-13960) HConnection stuck with UnknownHostException

2016-05-31 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307699#comment-15307699
 ] 

Yu Li commented on HBASE-13960:
---

This issue is resolved by HBASE-15856

> HConnection stuck with UnknownHostException 
> 
>
> Key: HBASE-13960
> URL: https://issues.apache.org/jira/browse/HBASE-13960
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 0.98.8
>Reporter: Kurt Young
>Assignee: Yu Li
> Attachments: HBASE-13960-0.98-v1.patch, HBASE-13960-update.patch, 
> HBASE-13960-update.v2.patch, HBASE-13960-v2.patch
>
>
> When doing a put/get against HBase, if a temporary DNS failure prevents 
> resolving the RS's host, the error is never recovered from: put/get will 
> fail with UnknownHostException forever. 
> I checked the code, and the reason may be:
> 1. when RegionServerCallable or MultiServerCallable calls prepare(), it gets 
> a ClientService.BlockingInterface stub from HConnection
> 2. In HConnectionImplementation::getClient, it caches the stub with a 
> BlockingRpcChannelImplementation
> 3. In BlockingRpcChannelImplementation(), 
>  this.isa = new InetSocketAddress(sn.getHostname(), sn.getPort()); If we 
> meet a temporary dns failure then the "address" in isa will be null.
> 4. then we launch the real rpc call; the resulting stack is:
> Caused by: java.net.UnknownHostException: unknown host: xxx.host2
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:385)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1523)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712)
> Besides, I noticed there is a protection in RpcClient:
> if (remoteId.getAddress().isUnresolved()) {
>   throw new UnknownHostException("unknown host: " + 
>     remoteId.getAddress().getHostName());
> }
> shouldn't we do something when this situation occurs? 
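The caching trap comes from a {{java.net.InetSocketAddress}} behavior worth spelling out: hostname resolution happens once, in the constructor, and is never retried on the same instance. A standalone illustration (not HBase code; the helper name is made up):

```java
import java.net.InetSocketAddress;

class StaleResolutionDemo {
    // Returns a freshly-resolved address, refusing to hand out one whose
    // hostname lookup failed. Callers should retry later with a NEW instance
    // rather than cache an unresolved one -- caching it pins the
    // UnknownHostException behavior forever, as described above.
    static InetSocketAddress freshAddress(String host, int port) {
        InetSocketAddress isa = new InetSocketAddress(host, port); // resolves here, once
        if (isa.isUnresolved()) {
            throw new IllegalStateException("unknown host: " + host);
        }
        return isa;
    }
}
```

The fix direction, accordingly, is to re-attempt resolution (build a new InetSocketAddress) instead of reusing a cached unresolved instance.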





[jira] [Updated] (HBASE-13960) HConnection stuck with UnknownHostException

2016-05-31 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-13960:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> HConnection stuck with UnknownHostException 
> 
>
> Key: HBASE-13960
> URL: https://issues.apache.org/jira/browse/HBASE-13960
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 0.98.8
>Reporter: Kurt Young
>Assignee: Yu Li
> Attachments: HBASE-13960-0.98-v1.patch, HBASE-13960-update.patch, 
> HBASE-13960-update.v2.patch, HBASE-13960-v2.patch
>
>
> When doing a put/get against HBase, if a temporary DNS failure prevents 
> resolving the RS's host, the error is never recovered from: put/get will 
> fail with UnknownHostException forever. 
> I checked the code, and the reason may be:
> 1. when RegionServerCallable or MultiServerCallable calls prepare(), it gets 
> a ClientService.BlockingInterface stub from HConnection
> 2. In HConnectionImplementation::getClient, it caches the stub with a 
> BlockingRpcChannelImplementation
> 3. In BlockingRpcChannelImplementation(), 
>  this.isa = new InetSocketAddress(sn.getHostname(), sn.getPort()); If we 
> meet a temporary dns failure then the "address" in isa will be null.
> 4. then we launch the real rpc call; the resulting stack is:
> Caused by: java.net.UnknownHostException: unknown host: xxx.host2
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:385)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1523)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712)
> Besides, I noticed there is a protection in RpcClient:
> if (remoteId.getAddress().isUnresolved()) {
>   throw new UnknownHostException("unknown host: " + 
>     remoteId.getAddress().getHostName());
> }
> shouldn't we do something when this situation occurs? 





[jira] [Commented] (HBASE-15856) Cached Connection instances can wind up with addresses never resolved

2016-05-31 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307695#comment-15307695
 ] 

Yu Li commented on HBASE-15856:
---

This problem is the same as HBASE-13960; let me mark 13960 as a duplicate of 
this one. Thanks for the fix and for getting it in, [~ghelmling]!

> Cached Connection instances can wind up with addresses never resolved
> -
>
> Key: HBASE-15856
> URL: https://issues.apache.org/jira/browse/HBASE-15856
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15856.001.patch, HBASE-15856.002.patch, 
> HBASE-15856.003.patch, HBASE-15856.addendum.patch
>
>
> During periods where DNS is not working properly, we can wind up caching 
> connections to master or regionservers where the initial hostname resolution 
> failed and the resolution is never re-attempted. This means that clients will 
> forever get UnknownHostException for any calls.
> When constructing a BlockingRpcChannelImplementation, we instantiate the 
> InetSocketAddress to use for the connection.  This instance is then used in 
> the rpc client connection, where we check isUnresolved() and throw an 
> UnknownHostException if that returns true.  However, at this point the rpc 
> channel is already cached in the HConnectionImplementation map of stubs.  So 
> at this point it will never be resolved.
> Setting the config for hbase.resolve.hostnames.on.failure masks this issue, 
> since the stub key used is modified to contain the address.  However, even in 
> that case, if DNS fails, an rpc channel instance with unresolved ISA will 
> still be cached in the stubs under the hostname only key.





[jira] [Commented] (HBASE-15919) Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307683#comment-15307683
 ] 

Hadoop QA commented on HBASE-15919:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 22s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
|   | hadoop.hbase.security.token.TestGenerateDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807108/HBASE-15919.master.001.patch
 |
| JIRA Issue | HBASE-15919 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 75c2360 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2067/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/2067/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2067/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2067/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |





> Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.
> --
>
> Key: HBASE-15919
> URL: https://issues.apache.org/jira/browse/HBASE-15919
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-15919.master.001.patch
>
>
> Our timeout for tests is not clear in the refguide. Our @Rule-based 
> CategoryBased timeout applies to each individual test, when it seems the 
> timeout should be for the whole testcase... all the tests that make up the 
> test class. This issue is about cleaning up any ambiguity and promoting the 
> new change added over in HBASE-15915 by @appy for a @ClassRule.
> Clean up the refguide on what the timeout applies to.
> Add section on how to add 

[jira] [Updated] (HBASE-15922) Fix waitForMaximumCurrentTasks logic in AsyncProcess

2016-05-31 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15922:
--
 Assignee: Yu Li
Affects Version/s: 2.0.0
   1.2.1
   1.1.4

> Fix waitForMaximumCurrentTasks logic in AsyncProcess
> 
>
> Key: HBASE-15922
> URL: https://issues.apache.org/jira/browse/HBASE-15922
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.2.1, 1.1.4
>Reporter: Yu Li
>Assignee: Yu Li
>
> In the current implementation of AsyncProcess#waitForMaximumCurrentTasks, we 
> have the code below:
> {code}
> while ((currentInProgress = this.tasksInProgress.get()) > max) {
>   ...
>   try {
> synchronized (this.tasksInProgress) {
>   if (tasksInProgress.get() != oldInProgress) break;
>   this.tasksInProgress.wait(100);
> }
>   } catch (InterruptedException e) {
> throw new InterruptedIOException("#" + id + ", interrupted." +
> " currentNumberOfTask=" + currentInProgress);
>   }
> }
> {code}
> This causes the while loop to end whenever any task completes inside one 
> iteration, making {{tasksInProgress.get()}} no longer equal to 
> {{oldInProgress}}, even though the count may still be above the maximum.
> This is a regression caused by HBASE-11403 and exists only in the 
> branch-1/master branches; the difference is easy to see when comparing 
> against the latest 0.98 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15922) Fix waitForMaximumCurrentTasks logic in AsyncProcess

2016-05-31 Thread Yu Li (JIRA)
Yu Li created HBASE-15922:
-

 Summary: Fix waitForMaximumCurrentTasks logic in AsyncProcess
 Key: HBASE-15922
 URL: https://issues.apache.org/jira/browse/HBASE-15922
 Project: HBase
  Issue Type: Bug
Reporter: Yu Li


In the current implementation of AsyncProcess#waitForMaximumCurrentTasks, we 
have the code below:
{code}
while ((currentInProgress = this.tasksInProgress.get()) > max) {
  ...
  try {
synchronized (this.tasksInProgress) {
  if (tasksInProgress.get() != oldInProgress) break;
  this.tasksInProgress.wait(100);
}
  } catch (InterruptedException e) {
throw new InterruptedIOException("#" + id + ", interrupted." +
" currentNumberOfTask=" + currentInProgress);
  }
}
{code}
This causes the while loop to end whenever any task completes inside one 
iteration, making {{tasksInProgress.get()}} no longer equal to 
{{oldInProgress}}, even though the count may still be above the maximum.

This is a regression caused by HBASE-11403 and exists only in the 
branch-1/master branches; the difference is easy to see when comparing 
against the latest 0.98 code.
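The fix, roughly, is to keep re-checking the limit in the outer loop rather than breaking out as soon as the count changes. Below is a self-contained, simplified sketch of a corrected loop (plain Java, not the actual HBase patch; the id field and task bookkeeping are omitted):

```java
import java.io.InterruptedIOException;
import java.util.concurrent.atomic.AtomicLong;

class WaitForTasksSketch {
    // Corrected wait loop: only go to sleep when the in-progress count has not
    // moved since we sampled it, and let the OUTER while condition re-check
    // against max -- instead of breaking out of the loop as soon as the count
    // changes, which is the bug described above.
    static void waitForMaximumCurrentTasks(AtomicLong tasksInProgress, long max)
            throws InterruptedIOException {
        long currentInProgress;
        while ((currentInProgress = tasksInProgress.get()) > max) {
            try {
                synchronized (tasksInProgress) {
                    if (tasksInProgress.get() == currentInProgress) {
                        tasksInProgress.wait(100); // woken by notifyAll() or timeout
                    }
                }
            } catch (InterruptedException e) {
                throw new InterruptedIOException(
                        "interrupted, currentNumberOfTask=" + currentInProgress);
            }
        }
    }
}
```

A producer that decrements the counter should call {{notifyAll()}} on it inside a {{synchronized}} block so waiters wake up promptly; the 100 ms wait timeout bounds the staleness either way.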





[jira] [Commented] (HBASE-15831) we are running a Spark Job for scanning Hbase table getting Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException

2016-05-31 Thread Neemesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307669#comment-15307669
 ] 

Neemesh commented on HBASE-15831:
-

There are nearly 9 million records. I am using start-row and end-row filters 
to reduce the rows scanned.
Scanner caching is set to 500.

> we are running a Spark Job for scanning Hbase table getting Caused by: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException
> 
>
> Key: HBASE-15831
> URL: https://issues.apache.org/jira/browse/HBASE-15831
> Project: HBase
>  Issue Type: Bug
>Reporter: Neemesh
>
> I am getting the following error when trying to scan an HBase table in the 
> QED environment for a particular collection:
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 1629041 number_of_rows: 100 close_scanner: false next_call_seq: 0
> The following is the command used to execute the Spark job:
> spark-submit --master yarn --deploy-mode client --driver-memory 4g --queue 
> root.ecpqedv1patents --class com.thomsonreuters.spark.hbase.HbaseSparkFinal 
> HbaseSparkVenus.jar ecpqedv1patents:NovusDocCopy w_3rd_bonds
> I also tried running this with two more parameters, --num-executors 200 
> --executor-cores 4, but it was still throwing the same exception.
> I googled and found that adding the following properties should avoid the 
> above issue, but these property changes also did not help:
> .set("hbase.client.pause","1000")
>   .set("hbase.rpc.timeout","9")
>   .set("hbase.client.retries.number","3")
>   .set("zookeeper.recovery.retry","1")
>   .set("hbase.client.operation.timeout","3")
>   .set("hbase.client.scanner.timeout.period","9")
> Please let us know how to resolve this issue
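The usual diagnosis for this error is a client-side retry after a timed-out next() call: the server has already served the call and advanced its expected call sequence, so the retry arrives with a stale sequence number. A toy model of that sequence check (purely illustrative plain Java, not HBase's actual implementation; the class name and message text are invented here):

```java
// Toy model of the scanner call-sequence handshake. The server advances its
// expected nextCallSeq after every next() it serves; a client retry that
// re-sends the old sequence number is rejected, which is the classic cause
// of OutOfOrderScannerNextException.
class ToyScannerSeq {
    private long expectedNextCallSeq = 0;

    String next(long clientCallSeq) {
        if (clientCallSeq != expectedNextCallSeq) {
            return "OutOfOrderScannerNextException: expected nextCallSeq "
                    + expectedNextCallSeq + " but got " + clientCallSeq;
        }
        expectedNextCallSeq++; // server-side bookkeeping after a served call
        return "batch-of-rows";
    }
}
```

Under that diagnosis, the common remedies are lowering scanner caching (so each next() round-trip completes faster) or raising the client scanner timeout, rather than adding retries.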





[jira] [Commented] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307641#comment-15307641
 ] 

Hadoop QA commented on HBASE-15921:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
30s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 18s 
{color} | {color:red} hbase-client-jdk1.8.0 with JDK v1.8.0 generated 7 new + 
13 unchanged - 0 fixed = 20 total (was 13) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 17s 
{color} | {color:red} hbase-client-jdk1.7.0_79 with JDK v1.7.0_79 generated 7 
new + 13 unchanged - 0 fixed = 20 total (was 13) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 6s {color} | 
{color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 29s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Commented] (HBASE-15920) Backport submit-patch.py to branch-1 and earlier branches.

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307576#comment-15307576
 ] 

Hudson commented on HBASE-15920:


SUCCESS: Integrated in HBase-1.2 #640 (See 
[https://builds.apache.org/job/HBase-1.2/640/])
HBASE-15920 Backport submit-patch.py to branch-1 and earlier branches. (stack: 
rev b41bf10edff18eae0402f40a1eea46f05f9b4831)
* dev-support/python-requirements.txt
* dev-support/submit-patch.py


> Backport submit-patch.py to branch-1 and earlier branches.
> --
>
> Key: HBASE-15920
> URL: https://issues.apache.org/jira/browse/HBASE-15920
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15920.branch-1.001.patch
>
>
> This is a combination of HBASE-15892 and HBASE-15909, plus the fact that 
> python-requirements.txt didn't exist in the old branches, because of which 
> those patches weren't directly applicable. It was easier to make a single 
> patch containing everything, and it should be easier to backport too.





[jira] [Updated] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-05-31 Thread Jurriaan Mous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurriaan Mous updated HBASE-15921:
--
Attachment: HBASE-15921.v1.patch

I was a bit too quick in applying the patch of already-prepared work to 
master. Fixed the errors.

> Add first AsyncTable impl and create TableImpl based on it
> --
>
> Key: HBASE-15921
> URL: https://issues.apache.org/jira/browse/HBASE-15921
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-15921.patch, HBASE-15921.v1.patch
>
>
> First we create an AsyncTable interface and an implementation without the 
> Scan functionality. Scans will land in a separate patch since they need a 
> refactor of the existing scan code.
> Also added is a new TableImpl to replace HTable. It uses the AsyncTableImpl 
> internally and should be a bit faster because it jumps through fewer hoops 
> for ProtoBuf transport. This way we can run all existing tests on the 
> AsyncTableImpl to guarantee its quality.
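The sync-on-top-of-async layering described above can be sketched with CompletableFuture. The names and signatures below are illustrative stand-ins, not the actual HBASE-15921 API (which operates on Get/Result rather than strings):

```java
import java.util.concurrent.CompletableFuture;

// Illustrative-only sketch of layering a blocking table on an async one.
interface AsyncTableSketch {
    CompletableFuture<String> get(String row); // stand-in for get(Get) -> Result
}

// The blocking facade simply delegates and joins, so every code path runs
// through the async implementation -- which is why running the existing
// (synchronous) tests also exercises the async internals.
class BlockingTableSketch {
    private final AsyncTableSketch async;

    BlockingTableSketch(AsyncTableSketch async) {
        this.async = async;
    }

    String get(String row) {
        return async.get(row).join(); // block until the async result completes
    }
}
```

The design payoff is that the async path is the single source of truth; the sync wrapper adds no logic of its own beyond blocking on the future.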





[jira] [Commented] (HBASE-15920) Backport submit-patch.py to branch-1 and earlier branches.

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307547#comment-15307547
 ] 

Hudson commented on HBASE-15920:


FAILURE: Integrated in HBase-1.3 #721 (See 
[https://builds.apache.org/job/HBase-1.3/721/])
HBASE-15920 Backport submit-patch.py to branch-1 and earlier branches. (stack: 
rev 536a8e836add62c3ee3f85b97e22d84d69008eea)
* dev-support/python-requirements.txt
* dev-support/submit-patch.py


> Backport submit-patch.py to branch-1 and earlier branches.
> --
>
> Key: HBASE-15920
> URL: https://issues.apache.org/jira/browse/HBASE-15920
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15920.branch-1.001.patch
>
>
> This is a combination of HBASE-15892 and HBASE-15909, plus the fact that 
> python-requirements.txt didn't exist in the old branches, because of which 
> those patches weren't directly applicable. It was easier to make a single 
> patch containing everything, and it should be easier to backport too.





[jira] [Updated] (HBASE-15919) Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.

2016-05-31 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15919:
-
Status: Patch Available  (was: Open)

> Document @Rule vs @ClassRule. Also clarify timeout limits are on TestCase.
> --
>
> Key: HBASE-15919
> URL: https://issues.apache.org/jira/browse/HBASE-15919
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-15919.master.001.patch
>
>
> Our timeout for tests is not clear in the refguide. Our @Rule-based 
> CategoryBased timeout applies to each individual test, when it seems the 
> timeout should be for the whole testcase... all the tests that make up the 
> test class. This issue is about cleaning up any ambiguity and promoting the 
> new change added over in HBASE-15915 by @appy for a @ClassRule.
> Clean up the refguide on what the timeout applies to.
> Add a section on how to add timeouts to tests.
> See HBASE-15915 tail for some notes on what to add to the doc.




