[jira] [Commented] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522149#comment-15522149
 ] 

Andrew Purtell commented on HBASE-16703:


Go for it Ram, thank you 

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests, when block encoding is enabled, comes from 
> BufferedDataBlockEncoder$SeekerState.<init>, where we allocate two byte 
> arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for 
> object pooling of SeekerState here. Subsequent code checks if those byte 
> arrays are sized sufficiently to handle incoming data to copy. The arrays 
> will be resized if needed. 
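
As an illustration of the reuse pattern the description points at, here is a minimal sketch of a grow-only key buffer that a pooled SeekerState could hold on to. The class name, the value chosen for INITIAL_KEY_BUFFER_SIZE, and the growth policy are assumptions for the sketch, not the HBase implementation:

{code:java}
// Hypothetical sketch only: a grow-only buffer that is allocated once and reused,
// so repeated key copies do not allocate a fresh array per SeekerState.
public final class ReusableKeyBuffer {
  private static final int INITIAL_KEY_BUFFER_SIZE = 512; // assumed value for the sketch

  private byte[] buf = new byte[INITIAL_KEY_BUFFER_SIZE];
  private int length;

  /** Copies the given key into the internal buffer, growing it only when too small. */
  public void set(byte[] src, int offset, int len) {
    if (buf.length < len) {
      // Grow geometrically so a run of large keys does not cause repeated reallocation.
      buf = new byte[Math.max(len, buf.length * 2)];
    }
    System.arraycopy(src, offset, buf, 0, len);
    length = len;
  }

  public byte[] array() { return buf; }

  public int length() { return length; }
}
{code}

Pooling SeekerState instances would keep such buffers alive across blocks, so the two INITIAL_KEY_BUFFER_SIZE allocations are paid once per pooled object rather than on every seek.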



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522125#comment-15522125
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-16703 at 9/26/16 5:43 AM:
-

[~apurtell]
I was working on this area a long time back, regarding the byte[] that was getting 
recreated if the size grows above INITIAL_BUFFER_SIZE. That work was to see if 
we can avoid the recreation. But I think we can at least avoid the allocation 
pressure of the buffer, because per SeekerState we are going to keep filling 
this fixed byte array as per the cell that is retrieved.
Let me know if I can take this up, as this is unassigned. If you are planning to 
work on it, please feel free to do so.


was (Author: ram_krish):
[~apurtell]
I was working on this area a long time back, regarding the byte[] that was getting 
recreated if the size grows above INITIAL_BUFFER_SIZE. That work was to see if 
we can avoid the recreation. But I think we can at least avoid the recreation of 
the buffer, because per SeekerState we are going to keep filling this fixed byte 
array as per the cell that is retrieved.
Let me know if I can take this up, as this is unassigned. If you are planning to 
work on it, please feel free to do so.

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests, when block encoding is enabled, comes from 
> BufferedDataBlockEncoder$SeekerState.<init>, where we allocate two byte 
> arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for 
> object pooling of SeekerState here. Subsequent code checks if those byte 
> arrays are sized sufficiently to handle incoming data to copy. The arrays 
> will be resized if needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16705:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch.

> Eliminate long to Long auto boxing in LongComparator
> 
>
> Key: HBASE-16705
> URL: https://issues.apache.org/jira/browse/HBASE-16705
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16705-master.patch
>
>
> LongComparator
> @Override
> public int compareTo(byte[] value, int offset, int length) {
>   Long that = Bytes.toLong(value, offset, length);
>   return this.longValue.compareTo(that);
> }
> Every call needs to convert long to Long, which is not necessary.
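
A hedged, drop-in sketch of the unboxed version of that method (assuming {{longValue}} is, or can be kept as, a primitive long field):

{code:java}
@Override
public int compareTo(byte[] value, int offset, int length) {
  // Parse into a primitive and compare primitives directly:
  // no Long allocation, no auto-boxing on the hot path.
  long that = Bytes.toLong(value, offset, length);
  return Long.compare(this.longValue, that);
}
{code}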



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522125#comment-15522125
 ] 

ramkrishna.s.vasudevan commented on HBASE-16703:


[~apurtell]
I was working on this area a long time back, regarding the byte[] that was getting 
recreated if the size grows above INITIAL_BUFFER_SIZE. That work was to see if 
we can avoid the recreation. But I think we can at least avoid the recreation of 
the buffer, because per SeekerState we are going to keep filling this fixed byte 
array as per the cell that is retrieved.
Let me know if I can take this up, as this is unassigned. If you are planning to 
work on it, please feel free to do so.

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests, when block encoding is enabled, comes from 
> BufferedDataBlockEncoder$SeekerState.<init>, where we allocate two byte 
> arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for 
> object pooling of SeekerState here. Subsequent code checks if those byte 
> arrays are sized sufficiently to handle incoming data to copy. The arrays 
> will be resized if needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522123#comment-15522123
 ] 

Anoop Sam John commented on HBASE-16705:


+1

> Eliminate long to Long auto boxing in LongComparator
> 
>
> Key: HBASE-16705
> URL: https://issues.apache.org/jira/browse/HBASE-16705
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
>Priority: Minor
> Attachments: HBASE-16705-master.patch
>
>
> LongComparator
> @Override
> public int compareTo(byte[] value, int offset, int length) {
>   Long that = Bytes.toLong(value, offset, length);
>   return this.longValue.compareTo(that);
> }
> Every call needs to convert long to Long, which is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522083#comment-15522083
 ] 

Andrew Purtell edited comment on HBASE-16703 at 9/26/16 5:18 AM:
-

Ah, I updated the description on the issue.
Next time I'm working with JFR I'll capture that info.


was (Author: apurtell):
Ah, I updated the description on the issue.
Next time I'm working with JFR I'll capture that info. 

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests, when block encoding is enabled, comes from 
> BufferedDataBlockEncoder$SeekerState.<init>, where we allocate two byte 
> arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for 
> object pooling of SeekerState here. Subsequent code checks if those byte 
> arrays are sized sufficiently to handle incoming data to copy. The arrays 
> will be resized if needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-16703:
---
Description: In read workloads 35% of the allocation pressure produced by 
servicing RPC requests, when block encoding is enabled, comes from 
BufferedDataBlockEncoder$SeekerState.<init>, where we allocate two byte arrays 
of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for object pooling 
of SeekerState here. Subsequent code checks if those byte arrays are sized 
sufficiently to handle incoming data to copy. The arrays will be resized if 
needed.   (was: In read workloads 35% of the allocation pressure produced by 
servicing RPC requests, when block encoding is enabled, comes from 
SeekerState.<init> of the DataBlockEncoder implementation currently in use, 
where we allocate two byte arrays of INITIAL_KEY_BUFFER_SIZE in length. There's 
an opportunity for object pooling of SeekerState here. Subsequent code checks 
if those byte arrays are sized sufficiently to handle incoming data to copy. 
The arrays will be resized if needed. )

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests, when block encoding is enabled, comes from 
> BufferedDataBlockEncoder$SeekerState.<init>, where we allocate two byte 
> arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for 
> object pooling of SeekerState here. Subsequent code checks if those byte 
> arrays are sized sufficiently to handle incoming data to copy. The arrays 
> will be resized if needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-16703:
---
Description: In read workloads 35% of the allocation pressure produced by 
servicing RPC requests, when block encoding is enabled, comes from 
SeekerState.<init> of the DataBlockEncoder implementation currently in use, 
where we allocate two byte arrays of INITIAL_KEY_BUFFER_SIZE in length. There's 
an opportunity for object pooling of SeekerState here. Subsequent code checks 
if those byte arrays are sized sufficiently to handle incoming data to copy. 
The arrays will be resized if needed.   (was: In read workloads 35% of the 
allocation pressure produced by servicing RPC requests comes from 
SeekerState.<init> of the DataBlockEncoder implementation currently in use, 
where we allocate two byte arrays of INITIAL_KEY_BUFFER_SIZE in length. There's 
an opportunity for object pooling of SeekerState here. Subsequent code checks 
if those byte arrays are sized sufficiently to handle incoming data to copy. 
The arrays will be resized if needed. )

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests, when block encoding is enabled, comes from SeekerState.<init> of 
> the DataBlockEncoder implementation currently in use, where we allocate two 
> byte arrays of INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for 
> object pooling of SeekerState here. Subsequent code checks if those byte 
> arrays are sized sufficiently to handle incoming data to copy. The arrays 
> will be resized if needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522083#comment-15522083
 ] 

Andrew Purtell commented on HBASE-16703:


Ah, I updated the description on the issue.
Next time I'm working with JFR I'll capture that info. 

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests comes from SeekerState.<init> of the DataBlockEncoder implementation 
> currently in use, where we allocate two byte arrays of 
> INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for object pooling 
> of SeekerState here. Subsequent code checks if those byte arrays are sized 
> sufficiently to handle incoming data to copy. The arrays will be resized if 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16587) Procedure v2 - Cleanup suspended proc execution

2016-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522073#comment-15522073
 ] 

Hadoop QA commented on HBASE-16587:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 19s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 122m 1s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | org.apache.hadoop.hbase.client.TestScannerTimeout |
|   | 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830241/HBASE-16587-v4.patch |
| JIRA Issue | HBASE-16587 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 4eacfb9b82b1 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b7e0e15 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (HBASE-16684) The get() requests does not see locally buffered put() requests when autoflush is disabled

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522066#comment-15522066
 ] 

ramkrishna.s.vasudevan commented on HBASE-16684:


I see. So even when autoflush is disabled, HBASE-15811 fixes it. Let me check 
it. Thanks for the info.

> The get() requests does not see locally buffered put() requests when 
> autoflush is disabled
> --
>
> Key: HBASE-16684
> URL: https://issues.apache.org/jira/browse/HBASE-16684
> Project: HBase
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Priority: Minor
>
> When autoflush is disabled the put() requests are buffered locally.
> Subsequent get() requests on the same host will always go to the network and 
> will not see the updates that are buffered locally.
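
To make the described behavior concrete, here is a small, hedged example using the client-side write buffer via BufferedMutator (the table, family, and qualifier names are made up): the Get is a server-side read, so it cannot see the Put until the buffer is flushed.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedPutVisibility {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("t1"); // hypothetical table
    try (Connection conn = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator = conn.getBufferedMutator(tn);
         Table table = conn.getTable(tn)) {
      byte[] row = Bytes.toBytes("r1");
      byte[] cf = Bytes.toBytes("f");
      byte[] q = Bytes.toBytes("q");

      // Buffered on the client; nothing has reached the RegionServer yet.
      mutator.mutate(new Put(row).addColumn(cf, q, Bytes.toBytes("v1")));

      // The Get is an RPC against server-side state, so the buffered Put is invisible here.
      Result before = table.get(new Get(row));
      System.out.println("before flush, row empty: " + before.isEmpty()); // expected: true

      mutator.flush(); // push the buffered mutation to the server

      Result after = table.get(new Get(row));
      System.out.println("after flush: " + Bytes.toString(after.getValue(cf, q))); // expected: v1
    }
  }
}
{code}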



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522005#comment-15522005
 ] 

binlijin commented on HBASE-16703:
--

SeekerState is 
org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder.SeekerState.

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests comes from SeekerState.<init> of the DataBlockEncoder implementation 
> currently in use, where we allocate two byte arrays of 
> INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for object pooling 
> of SeekerState here. Subsequent code checks if those byte arrays are sized 
> sufficiently to handle incoming data to copy. The arrays will be resized if 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16703) Explore object pooling of SeekerState

2016-09-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521992#comment-15521992
 ] 

stack commented on HBASE-16703:
---

Any pictures to show us [~apurtell] w/ the hot allocations? Or, dumb question, 
where is SeekerState? Thanks.

> Explore object pooling of SeekerState
> -
>
> Key: HBASE-16703
> URL: https://issues.apache.org/jira/browse/HBASE-16703
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>
> In read workloads 35% of the allocation pressure produced by servicing RPC 
> requests comes from SeekerState.<init> of the DataBlockEncoder implementation 
> currently in use, where we allocate two byte arrays of 
> INITIAL_KEY_BUFFER_SIZE in length. There's an opportunity for object pooling 
> of SeekerState here. Subsequent code checks if those byte arrays are sized 
> sufficiently to handle incoming data to copy. The arrays will be resized if 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521985#comment-15521985
 ] 

stack commented on HBASE-16682:
---

+1 on this to fix the build.

> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch, HBASE-16682.master.003.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975)
>   at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110)
>   at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955)
>   at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99)
>   at 
> org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650)
>   at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689)
>   at org.jruby.javasupport.Java.createProxyClass(Java.java:526)
>   at org.jruby.javasupport.Java.getProxyClass(Java.java:455)
>   at org.jruby.javasupport.Java.getInstance(Java.java:364)
>   at 
> org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166)
>   at 
> org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291)
>   at 
> org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81)
>   at 
> org.jruby.embed.variable.GlobalVariable.<init>(GlobalVariable.java:69)
>   at 
> org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60)
>   at 
> org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97)
>   at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321)
>   at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123)
>   at 
> org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.minikdc.MiniKdc
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)

[jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload

2016-09-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521978#comment-15521978
 ] 

stack commented on HBASE-16698:
---

On the patch, I'd be good w/ it going in as off by default in branch-1 and on 
by default in master branch.

> Performance issue: handlers stuck waiting for CountDownLatch inside 
> WALKey#getWriteEntry under high writing workload
> 
>
> Key: HBASE-16698
> URL: https://issues.apache.org/jira/browse/HBASE-16698
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.1.6, 1.2.3
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-16698.patch, HBASE-16698.v2.patch, 
> hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers 
> get stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside 
> {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into the problem, we found that the problem is mainly caused by 
> advancing mvcc in the append logic. Below is some detailed analysis:
> Under current branch-1 code logic, all batch puts will call 
> {{WALKey#getWriteEntry}} after appending edit to WAL, and 
> {{seqNumAssignedLatch}} is only released when the relative append call is 
> handled by RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). 
> Because currently we're using a single event handler for the ringbuffer, the 
> append calls are handled one by one (actually lots of our current logic 
> depends on this sequential handling), and this becomes a bottleneck 
> under high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends on 
> all regions are dealt with sequentially, which causes contention among 
> different regions...
> To fix this, we could also make use of the "sequential appends" mechanism: 
> we could grab the WriteEntry before publishing the append onto the ringbuffer 
> and use it as the sequence id, only we need to add a lock to make "grab 
> WriteEntry" and "append edit" a transaction. This will still cause contention 
> inside a region but could avoid contention between different regions. This 
> solution is already verified in our online environment and proved to be 
> effective.
> Notice that for master (2.0) branch since we already change the write 
> pipeline to sync before writing memstore (HBASE-15158), this issue only 
> exists for the ASYNC_WAL writes scenario.
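
To illustrate the direction described above, here is a simplified, hypothetical sketch (not the attached patch): make "assign the sequence id" and "publish the append" atomic under a per-region lock, so handlers no longer park on a latch waiting for the single ring-buffer consumer. The names RegionAppendContext and RingBuffer are invented for the sketch.

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration only; not the HBase patch itself.
final class RegionAppendContext {

  /** Stand-in for the WAL ring buffer publish step. */
  interface RingBuffer {
    void publish(long seqId, Object edit);
  }

  private final ReentrantLock appendLock = new ReentrantLock();
  private final AtomicLong mvccWritePoint = new AtomicLong();

  long append(RingBuffer ringBuffer, Object edit) {
    appendLock.lock();
    try {
      long seqId = mvccWritePoint.incrementAndGet(); // "grab the WriteEntry" up front
      ringBuffer.publish(seqId, edit);               // publish with the id already stamped
      return seqId;                                  // handler proceeds without waiting on a latch
    } finally {
      appendLock.unlock();
    }
  }
}
{code}

The lock serializes appends within a region, which matches the "still cause contention inside a region but avoid contention between different regions" trade-off described in the issue.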



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload

2016-09-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521975#comment-15521975
 ] 

stack commented on HBASE-16698:
---

bq. but I don't think it's a good idea doing the same thing for 
doMiniBatchMutate.

Why not? If a false positive and you can't clean it up, add the suppress with 
your justification. Thanks [~carp84]


> Performance issue: handlers stuck waiting for CountDownLatch inside 
> WALKey#getWriteEntry under high writing workload
> 
>
> Key: HBASE-16698
> URL: https://issues.apache.org/jira/browse/HBASE-16698
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.1.6, 1.2.3
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-16698.patch, HBASE-16698.v2.patch, 
> hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers 
> get stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside 
> {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into the problem, we found that the problem is mainly caused by 
> advancing mvcc in the append logic. Below is some detailed analysis:
> Under current branch-1 code logic, all batch puts will call 
> {{WALKey#getWriteEntry}} after appending edit to WAL, and 
> {{seqNumAssignedLatch}} is only released when the relative append call is 
> handled by RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). 
> Because currently we're using a single event handler for the ringbuffer, the 
> append calls are handled one by one (actually lots of our current logic 
> depends on this sequential handling), and this becomes a bottleneck 
> under high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends on 
> all regions are dealt with sequentially, which causes contention among 
> different regions...
> To fix this, we could also make use of the "sequential appends" mechanism: 
> we could grab the WriteEntry before publishing the append onto the ringbuffer 
> and use it as the sequence id, only we need to add a lock to make "grab 
> WriteEntry" and "append edit" a transaction. This will still cause contention 
> inside a region but could avoid contention between different regions. This 
> solution is already verified in our online environment and proved to be 
> effective.
> Notice that for master (2.0) branch since we already change the write 
> pipeline to sync before writing memstore (HBASE-15158), this issue only 
> exists for the ASYNC_WAL writes scenario.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16587) Procedure v2 - Cleanup suspended proc execution

2016-09-25 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16587:

Attachment: HBASE-16587-v4.patch

> Procedure v2 - Cleanup suspended proc execution
> ---
>
> Key: HBASE-16587
> URL: https://issues.apache.org/jira/browse/HBASE-16587
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-16587-v0.patch, HBASE-16587-v1.patch, 
> HBASE-16587-v2.patch, HBASE-16587-v3.patch, HBASE-16587-v4.patch
>
>
> for procedures like the assignment or the lock one we need to be able to hold 
> on locks while suspended. At the moment the way to do that is up to the proc 
> implementation. This patch moves the logic to the base Procedure and 
> ProcedureExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16684) The get() requests does not see locally buffered put() requests when autoflush is disabled

2016-09-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521881#comment-15521881
 ] 

stack commented on HBASE-16684:
---

Yeah.  What [~allan163] said. This is a duplicate of hbase-15811?

> The get() requests does not see locally buffered put() requests when 
> autoflush is disabled
> --
>
> Key: HBASE-16684
> URL: https://issues.apache.org/jira/browse/HBASE-16684
> Project: HBase
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Priority: Minor
>
> When autoflush is disabled the put() requests are buffered locally.
> Subsequent get() requests on the same host will always go to the network and 
> will not see the updates that are buffered locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16684) The get() requests does not see locally buffered put() requests when autoflush is disabled

2016-09-25 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521779#comment-15521779
 ] 

Allan Yang commented on HBASE-16684:


This problem should have been fixed in HBASE-15811

> The get() requests does not see locally buffered put() requests when 
> autoflush is disabled
> --
>
> Key: HBASE-16684
> URL: https://issues.apache.org/jira/browse/HBASE-16684
> Project: HBase
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Priority: Minor
>
> When autoflush is disabled the put() requests are buffered locally.
> Subsequent get() requests on the same host will always go to the network and 
> will not see the updates that are buffered locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521721#comment-15521721
 ] 

binlijin commented on HBASE-16692:
--

Thanks for the review [~tedyu] [~anoop.hbase] [~carp84]

> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1; I think it should return true.
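
As a hedged sketch of the kind of guard the report implies (illustrative only, not necessarily the committed change), a length-first comparison never touches an element for zero-length ranges, so the empty/empty case simply returns true:

{code:java}
import java.nio.ByteBuffer;

// Illustrative helper only; the real method lives in
// org.apache.hadoop.hbase.util.ByteBufferUtils and may differ in detail.
final class BufferEqualsSketch {
  static boolean equals(ByteBuffer buf, int bufOffset, int bufLen,
                        byte[] arr, int arrOffset, int arrLen) {
    if (bufLen != arrLen) {
      return false;
    }
    // A zero-length comparison skips this loop entirely and returns true,
    // avoiding any index arithmetic that could go out of bounds.
    for (int i = 0; i < bufLen; i++) {
      if (buf.get(bufOffset + i) != arr[arrOffset + i]) {
        return false;
      }
    }
    return true;
  }
}
{code}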



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521434#comment-15521434
 ] 

stack commented on HBASE-16608:
---

Did a pass on RB. Looks grand. Have we addressed the issues [~ram_krish] raised 
above? Thanks.

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, 
> HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, 
> HBASE-16417-V10.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16645) Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521350#comment-15521350
 ] 

Hudson commented on HBASE-16645:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1672 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1672/])
HBASE-16645 Wrong range of Cells is caused by CellFlatMap#tailMap, (tedyu: rev 
b7e0e1578717709fc564832e95fac64a325da6aa)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCellFlatSet.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CellFlatMap.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CellSet.java


> Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap
> --
>
> Key: HBASE-16645
> URL: https://issues.apache.org/jira/browse/HBASE-16645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-16645.v0.patch, HBASE-16645.v1.patch, 
> HBASE-16645.v2.patch
>
>
> Two reasons are shown below:
> 1) CellFlatMap#find doesn't consider a descending-order array
> 2) CellFlatMap#getValidIndex returns the wrong upper bound
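
On the first point, here is a hypothetical, generic sketch (not the CellFlatMap code) of a binary search that honors the backing array's ordering direction:

{code:java}
import java.util.Comparator;

// Illustration only: binary search that respects whether the array is stored
// in ascending or descending order of the supplied comparator.
final class OrderedArraySearch {
  /** Returns the index of key, or (-(insertion point) - 1) if absent. */
  static <T> int find(T[] sorted, T key, Comparator<T> cmp, boolean descending) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      int c = cmp.compare(sorted[mid], key);
      if (descending) {
        c = -c; // a descending array inverts the direction of the scan
      }
      if (c < 0) {
        lo = mid + 1;
      } else if (c > 0) {
        hi = mid - 1;
      } else {
        return mid;
      }
    }
    return -(lo + 1);
  }
}
{code}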



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521348#comment-15521348
 ] 

Hudson commented on HBASE-16692:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1672 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1672/])
HBASE-16692 Make ByteBufferUtils#equals safer and correct (binlijin) (tedyu: 
rev 3896d9ed0a87c77330f3f2c998a6fdafe272e2d6)
* (add) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
* (delete) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java


> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1; I think it should return true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-25 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521236#comment-15521236
 ] 

Edward Bortnikov commented on HBASE-16608:
--

Looks like we've converged on this jira, based on the RB status. This jira 
completes HBASE-14921, and makes the index-compaction and data-compaction 
options available to the end user for the first time. 

In parallel, we keep benchmarking and exploring more elaborate in-memory flush 
policies in HBASE-16417.

Can we start voting on this JIRA? [~stack], [~ram_krish], [~anoop.hbase] - mind 
taking another look at the code? 

Many thanks. 


> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, 
> HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, 
> HBASE-16417-V10.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16698) Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload

2016-09-25 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521114#comment-15521114
 ] 

Yu Li commented on HBASE-16698:
---

Checked the failed UT cases below from the HadoopQA report and confirmed all of 
them pass locally:
{noformat}
org.apache.hadoop.hbase.client.TestReplicasClient
org.apache.hadoop.hbase.client.TestFromClientSide
org.apache.hadoop.hbase.client.TestIncrementFromClientSideWithCoprocessor
org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
{noformat}
I have seen some of these failing cases in HadoopQA reports for several JIRAs; 
not sure whether any JIRA already tracks them down.

Regarding the findbugs issue:
{noformat}
Bug type UL_UNRELEASED_LOCK (click for details) 
In class org.apache.hadoop.hbase.regionserver.HRegion
In method 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion$BatchOperation)
At HRegion.java:[line 3262]
{noformat}
I think it's a findbugs false positive similar to this one on 
[stackoverflow|http://stackoverflow.com/questions/5408940/possible-findbugs-false-positive-of-ul-unreleased-lock-exception-path].
I could see some methods suppress the findbugs warning through 
{{@edu.umd.cs.findbugs.annotations.SuppressWarnings}}, such as 
{{HRegion#doClose}}, but I don't think it's a good idea to do the same thing 
for {{doMiniBatchMutate}}. Any suggestions, [~stack]?
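
For reference, the suppression takes roughly the following shape with the findbugs annotation and an explicit justification; the class, method, and justification text below are placeholders, not a proposal for the actual code:

{code:java}
// Placeholder only: shows the annotation form, not the real doMiniBatchMutate body.
class SuppressionExample {
  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "UL_UNRELEASED_LOCK",
      justification = "Lock is released on every path; findbugs cannot follow the guard")
  void doMiniBatchMutateSketch() {
    // method body elided
  }
}
{code}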

> Performance issue: handlers stuck waiting for CountDownLatch inside 
> WALKey#getWriteEntry under high writing workload
> 
>
> Key: HBASE-16698
> URL: https://issues.apache.org/jira/browse/HBASE-16698
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.1.6, 1.2.3
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-16698.patch, HBASE-16698.v2.patch, 
> hadoop0495.et2.jstack
>
>
> As titled, in our production environment we observed 98 out of 128 handlers 
> get stuck waiting for the CountDownLatch {{seqNumAssignedLatch}} inside 
> {{WALKey#getWriteEntry}} under a high writing workload.
> After digging into the problem, we found that the problem is mainly caused by 
> advancing mvcc in the append logic. Below is some detailed analysis:
> Under current branch-1 code logic, all batch puts will call 
> {{WALKey#getWriteEntry}} after appending edit to WAL, and 
> {{seqNumAssignedLatch}} is only released when the relative append call is 
> handled by RingBufferEventHandler (see {{FSWALEntry#stampRegionSequenceId}}). 
> Because currently we're using a single event handler for the ringbuffer, the 
> append calls are handled one by one (actually lots of our current logic 
> depends on this sequential handling), and this becomes a bottleneck 
> under high writing workload.
> The worst part is that by default we only use one WAL per RS, so appends on 
> all regions are dealt with sequentially, which causes contention among 
> different regions...
> To fix this, we could also make use of the "sequential appends" mechanism: 
> we could grab the WriteEntry before publishing the append onto the ringbuffer 
> and use it as the sequence id, only we need to add a lock to make "grab 
> WriteEntry" and "append edit" a transaction. This will still cause contention 
> inside a region but could avoid contention between different regions. This 
> solution is already verified in our online environment and proved to be 
> effective.
> Notice that for master (2.0) branch since we already change the write 
> pipeline to sync before writing memstore (HBASE-15158), this issue only 
> exists for the ASYNC_WAL writes scenario.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16672) Add option for bulk load to always copy hfile(s) instead of renaming

2016-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521100#comment-15521100
 ] 

Hadoop QA commented on HBASE-16672:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 45s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
35s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 158m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | org.apache.hadoop.hbase.TestMultiVersions |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionServerObserver |
|   | org.apache.hadoop.hbase.procedure.TestZKProcedureControllers |
|   | org.apache.hadoop.hbase.procedure.TestZKProcedure |
|   | org.apache.hadoop.hbase.client.TestIncrementFromClientSideWithCoprocessor 
|
|   | org.apache.hadoop.hbase.TestHColumnDescriptorDefaultVersions |
|   | 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | 

[jira] [Commented] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521058#comment-15521058
 ] 

Yu Li commented on HBASE-16692:
---

oops, my bad, was looking into branch-1... Thanks for the confirmation sir.

> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1; I think it should return true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16696) TestBlockEvictionFromClient fails in master branch

2016-09-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521050#comment-15521050
 ] 

Ted Yu commented on HBASE-16696:


Thanks for digging, Ram.

Feel free to adjust the subject if you have a patch.

> TestBlockEvictionFromClient fails in master branch
> --
>
> Key: HBASE-16696
> URL: https://issues.apache.org/jira/browse/HBASE-16696
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: ramkrishna.s.vasudevan
> Attachments: build-1638.out, build-1639.out
>
>
> TestBlockEvictionFromClient consistently fails in master branch.
> From existing Jenkins builds, looks like this started with build 1639.
> See attached Jenkins console logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16696) TestBlockEvictionFromClient fails in master branch

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521045#comment-15521045
 ] 

ramkrishna.s.vasudevan commented on HBASE-16696:


The test as such is fine and it has in fact helped catch a regression.

> TestBlockEvictionFromClient fails in master branch
> --
>
> Key: HBASE-16696
> URL: https://issues.apache.org/jira/browse/HBASE-16696
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: ramkrishna.s.vasudevan
> Attachments: build-1638.out, build-1639.out
>
>
> TestBlockEvictionFromClient consistently fails in master branch.
> From existing Jenkins builds, looks like this started with build 1639.
> See attached Jenkins console logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16696) TestBlockEvictionFromClient fails in master branch

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521044#comment-15521044
 ] 

ramkrishna.s.vasudevan commented on HBASE-16696:


I verified the test case and could find the problem. It is because of the 
recent commit
https://issues.apache.org/jira/browse/HBASE-16604.
I will check that patch once again and see if the fix needs to be verified 
again. This problem will not be seen in other branches because only master 
does the ref counting of blocks, so missing that 'shipped()' call will lead 
to problems.
A ScannerResetException has been introduced that does not allow the lease to be 
removed, and I think in this case the RpcCallback is set, but as there is an 
exception it does not really do the shipped() call.
But I need to verify this once again and will be back with a patch soon.

> TestBlockEvictionFromClient fails in master branch
> --
>
> Key: HBASE-16696
> URL: https://issues.apache.org/jira/browse/HBASE-16696
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: ramkrishna.s.vasudevan
> Attachments: build-1638.out, build-1639.out
>
>
> TestBlockEvictionFromClient consistently fails in master branch.
> From existing Jenkins builds, looks like this started with build 1639.
> See attached Jenkins console logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521032#comment-15521032
 ] 

Ted Yu commented on HBASE-16692:


commit 3896d9ed0a87c77330f3f2c998a6fdafe272e2d6

> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1; I think it should return true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521020#comment-15521020
 ] 

Yu Li commented on HBASE-16692:
---

Just checked the master branch commit history and failed to find this one; mind 
double checking whether you pushed the commit, sir? Thanks. :-) [~tedyu]

> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1; I think it should return true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521007#comment-15521007
 ] 

Hadoop QA commented on HBASE-16608:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 33s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Switch statement found in 
org.apache.hadoop.hbase.regionserver.MemStoreCompactor.doCompaction() where one 
case falls through to the next case  At MemStoreCompactor.java:where one case 
falls through to the next case  At MemStoreCompactor.java:[lines 196-224] |
| Timed out junit tests | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | org.apache.hadoop.hbase.client.TestReplicaWithCluster |
|   | 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830225/HBASE-16417-V10.patch 
|
| JIRA Issue | HBASE-16608 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 10216ef28894 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3896d9e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 

[jira] [Commented] (HBASE-16679) Flush throughput controller: Minor perf change and fix flaky TestFlushWithThroughputController

2016-09-25 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521008#comment-15521008
 ] 

Yu Li commented on HBASE-16679:
---

Just noticed this one, nice catch and thanks for the patch [~appy]

> Flush throughput controller: Minor perf change and fix flaky 
> TestFlushWithThroughputController
> --
>
> Key: HBASE-16679
> URL: https://issues.apache.org/jira/browse/HBASE-16679
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16679.master.001.patch, 
> HBASE-16679.master.002.patch, HBASE-16679.master.003.patch
>
>
> Minor perf change:
> Calculate maxThroughputPerOperation outside of control(), since start() is 
> called only once per operation, but control() can be called 
> hundreds/thousands of times.
> Flaky test:
> Problems in current test:
> - It writes only 2.5MB each iteration, but control() triggers sleep only every 
> 1MB written (decided by HBASE_HSTORE_FLUSH_THROUGHPUT_CONTROL_CHECK_INTERVAL). 
> Either increase the data written in each batch or decrease this threshold for 
> better throughput control.
> - We shouldn't be timing table disable/delete/create and populating data in 
> throughput calculations.
> See the differences below.
> With patch (total data written 30M)
> run 1:
> Throughput is: 1.0113841089709052 MB/s
> Throughput w/o limit is: 14.665069580078125 MB/s
> With 1M/s limit, flush use 29683ms; without limit, flush use 2130ms
> run 2:
> Throughput is: 1.0113841089709052 MB/s
> Throughput w/o limit is: 14.665069580078125 MB/s
> With 1M/s limit, flush use 29674ms; without limit, flush use 2027ms
> Without patch (total data written 25M)
> run 1:
> Throughput is: 0.921681903523776 MB/s
> Throughput w/o limit is: 4.06833346870301 MB/s
> With 1M/s limit, flush use 27189ms; without limit, flush use 6159ms
> run 2:
> Throughput is: 0.9422982728478803 MB/s
> Throughput w/o limit is: 4.047858424942981 MB/s
> With 1M/s limit, flush use 26594ms; without limit, flush use 6190ms
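
To make the "minor perf change" in the description concrete, here is a hedged sketch (class and field names are assumptions, not the attached patch) of hoisting the per-operation limit out of the hot control() path and into start(), which runs once per flush:
{code}
// Sketch only: start() caches the per-operation limit; control(), which runs
// for roughly every 1MB written, reuses the cached value instead of
// recomputing the division on every call.
class FlushThroughputControllerSketch {
  private final double maxThroughputUpperBound;     // bytes/sec budget, assumed
  private volatile int activeFlushes = 1;           // flushes running right now
  private volatile double maxThroughputPerOperation;

  FlushThroughputControllerSketch(double maxThroughputUpperBound) {
    this.maxThroughputUpperBound = maxThroughputUpperBound;
  }

  void start(String opName) {
    // once per flush operation
    maxThroughputPerOperation = maxThroughputUpperBound / activeFlushes;
  }

  long control(String opName, long deltaBytes, long elapsedNanos) {
    // hot path: no per-call division of the global budget
    double actualBytesPerSec = deltaBytes * 1e9 / Math.max(1, elapsedNanos);
    if (actualBytesPerSec <= maxThroughputPerOperation) {
      return 0L;                                     // under the limit, no sleep
    }
    // sleep long enough that deltaBytes spread over (elapsed + sleep) fits the limit
    double targetSeconds = deltaBytes / maxThroughputPerOperation;
    return (long) ((targetSeconds - elapsedNanos / 1e9) * 1000);
  }
}
{code}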



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16704) Scan will be broken while working with DBE and KeyValueCodecWithTags

2016-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520880#comment-15520880
 ] 

Hadoop QA commented on HBASE-16704:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 30s 
{color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 39s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 6s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830222/HBASE-16704.patch |
| JIRA Issue | HBASE-16704 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8bdd18689d37 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 21969f5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 

[jira] [Updated] (HBASE-16672) Add option for bulk load to always copy hfile(s) instead of renaming

2016-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16672:
---
Attachment: 16672.v11.txt

Patch v11 adds the missing @param

> Add option for bulk load to always copy hfile(s) instead of renaming
> 
>
> Key: HBASE-16672
> URL: https://issues.apache.org/jira/browse/HBASE-16672
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16672.v1.txt, 16672.v10.txt, 16672.v11.txt, 
> 16672.v2.txt, 16672.v3.txt, 16672.v4.txt, 16672.v5.txt, 16672.v6.txt, 
> 16672.v7.txt, 16672.v8.txt, 16672.v9.txt
>
>
> This is related to HBASE-14417, to support incrementally restoring to 
> multiple destinations, this issue adds option which would always copy 
> hfile(s) during bulk load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16704) Scan will be broken while working with DBE and KeyValueCodecWithTags

2016-09-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520858#comment-15520858
 ] 

Ted Yu commented on HBASE-16704:


lgtm
{code}
576 // From non encoded HFiles, we always read back KeyValue or its 
derives. (Note : When HFile
{code}
derives -> descendant

> Scan will be broken while working with DBE and KeyValueCodecWithTags
> 
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,.  Use Diff DataBlockEncoding
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.
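
For reference, a client-side equivalent of the shell reproduction above might look like the sketch below (a hedged illustration only: the table is assumed to already exist with DIFF data block encoding and YCSB data loaded, as described in the report):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanWithTagsCodecRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // the codec from the report
    conf.set("hbase.client.rpc.codec",
        "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("table_1024B_30g"))) {
      Scan scan = new Scan(Bytes.toBytes("user5499"));   // STARTROW from the repro
      try (ResultScanner scanner = table.getScanner(scan)) {
        int returned = 0;
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));
          if (++returned >= 2) {                         // LIMIT => 2
            break;
          }
        }
      }
    }
  }
}
{code}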



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16704) Scan will be broken while working with DBE and KeyValueCodecWithTags

2016-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16704:
---
Summary: Scan will be broken while working with DBE and 
KeyValueCodecWithTags  (was: Scan will broken while work with DBE and 
KeyValueCodecWithTags)

> Scan will be broken while working with DBE and KeyValueCodecWithTags
> 
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,.  Use Diff DataBlockEncoding
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16705:
---
Priority: Minor  (was: Major)

lgtm

> Eliminate long to Long auto boxing in LongComparator
> 
>
> Key: HBASE-16705
> URL: https://issues.apache.org/jira/browse/HBASE-16705
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
>Priority: Minor
> Attachments: HBASE-16705-master.patch
>
>
> LongComparator
> @Override
> public int compareTo(byte[] value, int offset, int length) {
>   Long that = Bytes.toLong(value, offset, length);
>   return this.longValue.compareTo(that);
> }
> Every time need to convert long to Long, this is not necessary.
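
For illustration, a boxing-free variant could keep the reference value as a primitive and use Long.compare, as in this minimal sketch (the toLong helper is a stand-in for Bytes.toLong; this is not the attached patch):
{code}
public class LongComparatorSketch {
  private final long longValue;               // primitive, not java.lang.Long

  public LongComparatorSketch(long longValue) {
    this.longValue = longValue;
  }

  public int compareTo(byte[] value, int offset, int length) {
    long that = toLong(value, offset);        // no auto-boxing per call
    return Long.compare(this.longValue, that);
  }

  // Stand-in for Bytes.toLong(value, offset, length): big-endian 8-byte decode.
  private static long toLong(byte[] bytes, int offset) {
    long result = 0;
    for (int i = 0; i < 8; i++) {
      result = (result << 8) | (bytes[offset + i] & 0xFF);
    }
    return result;
  }
}
{code}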



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16645) Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap

2016-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16645:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch, ChiaPing.

Thanks for the review, Anastasia.

> Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap
> --
>
> Key: HBASE-16645
> URL: https://issues.apache.org/jira/browse/HBASE-16645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-16645.v0.patch, HBASE-16645.v1.patch, 
> HBASE-16645.v2.patch
>
>
> Two reasons are shown below:
> 1) CellFlatMap#find doesn’t consider desc order array
> 2) CellFlatMap#getValidIndex return the wrong upper bound



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16692:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch, binlijin

Thanks for the reviews.

> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1, i think it should return true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16692) Make ByteBufferUtils#equals safer and correct

2016-09-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16692:
---
Summary: Make ByteBufferUtils#equals safer and correct  (was: Make 
ByteBufferUtils#equals more safe and correct)

> Make ByteBufferUtils#equals safer and correct
> -
>
> Key: HBASE-16692
> URL: https://issues.apache.org/jira/browse/HBASE-16692
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-16692-master.patch, HBASE-16692-master_v2.patch, 
> HBASE-16692-master_v3.patch, HBASE-16692-master_v4.patch, 
> HBASE-16692-master_v5.patch
>
>
> ByteBufferUtils.equals(HConstants.EMPTY_BYTE_BUFFER, 0, 0, 
> HConstants.EMPTY_BYTE_ARRAY, 0, 0) will throw 
> java.lang.ArrayIndexOutOfBoundsException: -1, i think it should return true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-25 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-16608:

Attachment: HBASE-16417-V10.patch

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, 
> HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, 
> HBASE-16417-V10.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16665) Check whether KeyValueUtil.createXXX could be replaced by CellUtil without copy

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520744#comment-15520744
 ] 

Hudson commented on HBASE-16665:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1670 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1670/])
HBASE-16665 Check whether KeyValueUtil.createXXX could be replaced by 
(chenheng: rev 21969f5159e6e8f93a7b8f9c7cfe2f359f11dd27)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/SweepReducer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MemStoreWrapper.java


> Check whether KeyValueUtil.createXXX could be replaced by CellUtil without 
> copy
> ---
>
> Key: HBASE-16665
> URL: https://issues.apache.org/jira/browse/HBASE-16665
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Fix For: 2.0.0
>
> Attachments: HBASE-16665.patch, HBASE-16665.v1.patch, 
> HBASE-16665.v2.patch, HBASE-16665.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16696) TestBlockEvictionFromClient fails in master branch

2016-09-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520708#comment-15520708
 ] 

Anoop Sam John commented on HBASE-16696:


Ram, is this just a case of flaky tests, or is some code path broken?

> TestBlockEvictionFromClient fails in master branch
> --
>
> Key: HBASE-16696
> URL: https://issues.apache.org/jira/browse/HBASE-16696
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: ramkrishna.s.vasudevan
> Attachments: build-1638.out, build-1639.out
>
>
> TestBlockEvictionFromClient consistently fails in master branch.
> From existing Jenkins builds, looks like this started with build 1639.
> See attached Jenkins console logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16704) Scan will broken while work with DBE and KeyValueCodecWithTags

2016-09-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16704:
---
Fix Version/s: 2.0.0

> Scan will broken while work with DBE and KeyValueCodecWithTags
> --
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,.  Use Diff DataBlockEncoding
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16704) Scan will broken while work with DBE and KeyValueCodecWithTags

2016-09-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16704:
---
Summary: Scan will broken while work with DBE and KeyValueCodecWithTags  
(was: Scan will broken while work with KeyValueCodecWithTags)

> Scan will broken while work with DBE and KeyValueCodecWithTags
> --
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,.  Use Diff DataBlockEncoding
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16704) Scan will broken while work with KeyValueCodecWithTags

2016-09-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16704:
---
Description: 
scan will always broken if we set LIMIT more than 1 with rs  
hbase.client.rpc.codec set to 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.

How to reproduce:
1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
some data with ycsb,.  Use Diff DataBlockEncoding
3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
set any valid start row.
4. scan failed.

this should be bug in KeyValueCodecWithTags, after some investigations, I found 
some the key not serialized correctly.



  was:
scan will always broken if we set LIMIT more than 1 with rs  
hbase.client.rpc.codec set to 
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.

How to reproduce:
1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
some data with ycsb,
3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
set any valid start row.
4. scan failed.

this should be bug in KeyValueCodecWithTags, after some investigations, I found 
some the key not serialized correctly.


> Scan will broken while work with KeyValueCodecWithTags
> --
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,.  Use Diff DataBlockEncoding
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16704) Scan will broken while work with KeyValueCodecWithTags

2016-09-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16704:
---
Status: Patch Available  (was: Open)

> Scan will broken while work with KeyValueCodecWithTags
> --
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16704) Scan will broken while work with KeyValueCodecWithTags

2016-09-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16704:
---
Attachment: HBASE-16704.patch

> Scan will broken while work with KeyValueCodecWithTags
> --
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
> Attachments: HBASE-16704.patch
>
>
> scan will always broken if we set LIMIT more than 1 with rs  
> hbase.client.rpc.codec set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce:
> 1. 1 master + 1 rs, codec use KeyValueCodecWithTags.
> 2.  create a table table_1024B_30g,1 cf and with only 1 qualifier, then load 
> some data with ycsb,
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, STARTROW  is 
> set any valid start row.
> 4. scan failed.
> this should be bug in KeyValueCodecWithTags, after some investigations, I 
> found some the key not serialized correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16696) TestBlockEvictionFromClient fails in master branch

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520688#comment-15520688
 ] 

ramkrishna.s.vasudevan commented on HBASE-16696:


I saw multiple issues around this. I will take this up and fix it by tomorrow 
my time.

> TestBlockEvictionFromClient fails in master branch
> --
>
> Key: HBASE-16696
> URL: https://issues.apache.org/jira/browse/HBASE-16696
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: build-1638.out, build-1639.out
>
>
> TestBlockEvictionFromClient consistently fails in master branch.
> From existing Jenkins builds, looks like this started with build 1639.
> See attached Jenkins console logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-16696) TestBlockEvictionFromClient fails in master branch

2016-09-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-16696:
--

Assignee: ramkrishna.s.vasudevan

> TestBlockEvictionFromClient fails in master branch
> --
>
> Key: HBASE-16696
> URL: https://issues.apache.org/jira/browse/HBASE-16696
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: ramkrishna.s.vasudevan
> Attachments: build-1638.out, build-1639.out
>
>
> TestBlockEvictionFromClient consistently fails in master branch.
> From existing Jenkins builds, looks like this started with build 1639.
> See attached Jenkins console logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520629#comment-15520629
 ] 

Anoop Sam John commented on HBASE-16643:


bq.It appears to me that you assume first action for forward scan is 
seek/reseek() and first action for reverse scan is 
backwardSeek/seekToPreviousRow(). This helps you to avoid using two heaps 
variants.
Yes, this is what Ram checked and confirmed. I don't think any logic about first 
calling seek/backwardSeek has changed recently. This is the core of the read 
flow and it very rarely changes. Ram, please double confirm.

> Reverse scanner heap creation may not allow MSLAB closure due to improper ref 
> counting of segments
> --
>
> Key: HBASE-16643
> URL: https://issues.apache.org/jira/browse/HBASE-16643
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16643.patch, HBASE-16643_1.patch, 
> HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, 
> HBASE-16643_5.patch, HBASE-16643_6.patch
>
>
> In the reverse scanner case,
> While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the 
> backward heap, we do a 
> {code}
> if ((backwardHeap == null) && (forwardHeap != null)) {
> forwardHeap.close();
> forwardHeap = null;
> // before building the heap seek for the relevant key on the scanners,
> // for the heap to be built from the scanners correctly
> for (KeyValueScanner scan : scanners) {
>   if (toLast) {
> res |= scan.seekToLastRow();
>   } else {
> res |= scan.backwardSeek(cell);
>   }
> }
> {code}
> forwardHeap.close(). This would internally decrement the MSLAB ref counter 
> for the current active segment and snapshot segment.
> When the scan is actually closed again we do close() and that will again 
> decrement the count. Here chances are there that the count would go negative 
> and hence the actual MSLAB closure that checks for refCount==0 will fail. 
> Apart from this, when the refCount becomes 0 after the firstClose if any 
> other thread requests to close the segment, then we will end up in corrupted 
> segment because the segment could be put back to the MSLAB pool. 
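
For illustration, the double-decrement described above can be avoided with an idempotent close, roughly as in this hedged sketch (class names are made up; this is not the attached patch):
{code}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: the scanner releases its segment reference at most once, so
// closing the forward heap during initBackwardHeapIfNeeded plus the final
// scanner close cannot drive the ref count below zero.
class RefCountedSegmentSketch {
  private final AtomicInteger refCount = new AtomicInteger(1);

  void retain() {
    refCount.incrementAndGet();
  }

  void release() {
    if (refCount.decrementAndGet() == 0) {
      // only now is it safe to return MSLAB chunks to the pool
    }
  }
}

class SegmentScannerSketch {
  private final RefCountedSegmentSketch segment;
  private final AtomicBoolean closed = new AtomicBoolean(false);

  SegmentScannerSketch(RefCountedSegmentSketch segment) {
    this.segment = segment;
    segment.retain();
  }

  void close() {
    // idempotent: only the first close releases the reference
    if (closed.compareAndSet(false, true)) {
      segment.release();
    }
  }
}
{code}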



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520574#comment-15520574
 ] 

Hadoop QA commented on HBASE-16705:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830220/HBASE-16705-master.patch
 |
| JIRA Issue | HBASE-16705 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 24bc4c009af1 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 21969f5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3709/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3709/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Eliminate long to Long auto boxing in LongComparator
> 
>
> Key: HBASE-16705
> URL: https://issues.apache.org/jira/browse/HBASE-16705
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 2.0.0
>Reporter: binlijin
>

[jira] [Assigned] (HBASE-16645) Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap

2016-09-25 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai reassigned HBASE-16645:
-

Assignee: ChiaPing Tsai

> Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap
> --
>
> Key: HBASE-16645
> URL: https://issues.apache.org/jira/browse/HBASE-16645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-16645.v0.patch, HBASE-16645.v1.patch, 
> HBASE-16645.v2.patch
>
>
> Two reasons are shown below:
> 1) CellFlatMap#find doesn’t consider desc order array
> 2) CellFlatMap#getValidIndex return the wrong upper bound



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16645) Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap

2016-09-25 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520570#comment-15520570
 ] 

ChiaPing Tsai commented on HBASE-16645:
---

[~anastas]

Thanks for your review.

> Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap
> --
>
> Key: HBASE-16645
> URL: https://issues.apache.org/jira/browse/HBASE-16645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-16645.v0.patch, HBASE-16645.v1.patch, 
> HBASE-16645.v2.patch
>
>
> Two reasons are shown below:
> 1) CellFlatMap#find doesn’t consider desc order array
> 2) CellFlatMap#getValidIndex return the wrong upper bound



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16705:
-
Status: Patch Available  (was: Open)

> Eliminate long to Long auto boxing in LongComparator
> 
>
> Key: HBASE-16705
> URL: https://issues.apache.org/jira/browse/HBASE-16705
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-16705-master.patch
>
>
> LongComparator
> @Override
> public int compareTo(byte[] value, int offset, int length) {
>   Long that = Bytes.toLong(value, offset, length);
>   return this.longValue.compareTo(that);
> }
> Every time need to convert long to Long, this is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16705:
-
Attachment: HBASE-16705-master.patch

> Eliminate long to Long auto boxing in LongComparator
> 
>
> Key: HBASE-16705
> URL: https://issues.apache.org/jira/browse/HBASE-16705
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-16705-master.patch
>
>
> LongComparator
> @Override
> public int compareTo(byte[] value, int offset, int length) {
>   Long that = Bytes.toLong(value, offset, length);
>   return this.longValue.compareTo(that);
> }
> Every time need to convert long to Long, this is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16705) Eliminate long to Long auto boxing in LongComparator

2016-09-25 Thread binlijin (JIRA)
binlijin created HBASE-16705:


 Summary: Eliminate long to Long auto boxing in LongComparator
 Key: HBASE-16705
 URL: https://issues.apache.org/jira/browse/HBASE-16705
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 2.0.0
Reporter: binlijin
Assignee: binlijin
 Attachments: HBASE-16705-master.patch

LongComparator
@Override
public int compareTo(byte[] value, int offset, int length) {
  Long that = Bytes.toLong(value, offset, length);
  return this.longValue.compareTo(that);
}
Every time need to convert long to Long, this is not necessary.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16645) Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap

2016-09-25 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520515#comment-15520515
 ] 

Anastasia Braginsky commented on HBASE-16645:
-

Hi [~chia7712] and [~tedyu],

Thank you for considering my comments. I reviewed the recent version of the 
code on the Review Board; the patch is good!
Anastasia

> Wrong range of Cells is caused by CellFlatMap#tailMap, headMap, and SubMap
> --
>
> Key: HBASE-16645
> URL: https://issues.apache.org/jira/browse/HBASE-16645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-16645.v0.patch, HBASE-16645.v1.patch, 
> HBASE-16645.v2.patch
>
>
> Two reasons are shown below:
> 1) CellFlatMap#find doesn’t consider desc order array
> 2) CellFlatMap#getValidIndex return the wrong upper bound



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-25 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520507#comment-15520507
 ] 

Anastasia Braginsky commented on HBASE-16643:
-

Hi [~ram_krish],

I have just reviewed the recent patch version you published on the review 
board, and left some comments there. 
Although all this refactoring wasn't needed to resolve the bug explained in the 
title, I am OK with this refactoring.
I like removing the "synchronized"; I wanted to do it myself, but I wasn't brave 
enough  :)

It appears to me that you assume first action for forward scan is seek/reseek() 
and first action for reverse scan is backwardSeek/seekToPreviousRow(). This 
helps you to avoid using two heaps variants.
I recall I tried to do this myself, but it didn't work for me. Maybe something 
changed since then, or you just arranged it better.
Bottom line, I am OK with this change.
Please consider the two small comments I left on the review board.

Thanks,
Anastasia

> Reverse scanner heap creation may not allow MSLAB closure due to improper ref 
> counting of segments
> --
>
> Key: HBASE-16643
> URL: https://issues.apache.org/jira/browse/HBASE-16643
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16643.patch, HBASE-16643_1.patch, 
> HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, 
> HBASE-16643_5.patch, HBASE-16643_6.patch
>
>
> In the reverse scanner case,
> While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the 
> backward heap, we do a 
> {code}
> if ((backwardHeap == null) && (forwardHeap != null)) {
> forwardHeap.close();
> forwardHeap = null;
> // before building the heap seek for the relevant key on the scanners,
> // for the heap to be built from the scanners correctly
> for (KeyValueScanner scan : scanners) {
>   if (toLast) {
> res |= scan.seekToLastRow();
>   } else {
> res |= scan.backwardSeek(cell);
>   }
> }
> {code}
> forwardHeap.close(). This would internally decrement the MSLAB ref counter 
> for the current active segment and snapshot segment.
> When the scan is actually closed again we do close() and that will again 
> decrement the count. Here chances are there that the count would go negative 
> and hence the actual MSLAB closure that checks for refCount==0 will fail. 
> Apart from this, when the refCount becomes 0 after the firstClose if any 
> other thread requests to close the segment, then we will end up in corrupted 
> segment because the segment could be put back to the MSLAB pool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16677) Add table size (total store file size) to table page

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520383#comment-15520383
 ] 

Hudson commented on HBASE-16677:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1669 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1669/])
HBASE-16677 Add table size (total store file size) to table page (chenheng: rev 
f5351e2dbd29ab32dbd4044844feb6a94d9fea98)
* (edit) hbase-server/src/main/resources/hbase-webapps/master/table.jsp
Revert "HBASE-16677 Add table size (total store file size) to table (chenheng: 
rev b14fb14886686d3135f718ff7e067230ff7d62fc)
* (edit) hbase-server/src/main/resources/hbase-webapps/master/table.jsp
HBASE-16677 Add table size (total store file size) to table page (Guang 
(chenheng: rev f7bb6fbf21a6a86700b8411311343f0be80ebf3f)
* (edit) hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Assignee: Guang Yang
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, HBASE-16677_v3.patch, mini_cluster_master.png, 
> prod_cluster_partial.png, table_page_v3.png
>
>
> Currently there is not an easy way to get the table size from the web UI; 
> though we have the region size on the page, it is still convenient to have a 
> stat for the total table size.
> Another pain point is that when the table grows large, with tens of thousands 
> of regions, it takes an extremely long time to load the page; however, 
> sometimes we don't want to check all the regions. An optimization could be to 
> accept a query parameter to specify the number of regions to render.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16667) Building with JDK 8: ignoring option MaxPermSize=256m

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520325#comment-15520325
 ] 

Hudson commented on HBASE-16667:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #32 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/32/])
HBASE-16667 Building with JDK 8: ignoring option MaxPermSize=256m (Niels 
(jerryjch: rev f224e09ad9e5e18a31e14e2606bdefba5b901216)
* (edit) hbase-it/pom.xml
* (edit) pom.xml


> Building with JDK 8: ignoring option MaxPermSize=256m
> -
>
> Key: HBASE-16667
> URL: https://issues.apache.org/jira/browse/HBASE-16667
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Niels Basjes
>Assignee: Niels Basjes
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>
> Attachments: HBASE-16667-01.patch, HBASE-16667-branch-1-v1.patch, 
> HBASE-16667-branch-1-v2.patch, HBASE-16667-v2.patch, HBASE-16667-v3.patch
>
>
> In JDK 8 the permgen was removed.
> As a consequence the build shows this line a lot of times, cluttering the 
> output.
> {quote}
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support 
> was removed in 8.0
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16667) Building with JDK 8: ignoring option MaxPermSize=256m

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520320#comment-15520320
 ] 

Hudson commented on HBASE-16667:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #23 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/23/])
HBASE-16667 Building with JDK 8: ignoring option MaxPermSize=256m (Niels 
(jerryjch: rev 2926a665ab75bc8da6c57a65f9c12528cd4ff992)
* (edit) hbase-it/pom.xml
* (edit) pom.xml


> Building with JDK 8: ignoring option MaxPermSize=256m
> -
>
> Key: HBASE-16667
> URL: https://issues.apache.org/jira/browse/HBASE-16667
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Niels Basjes
>Assignee: Niels Basjes
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>
> Attachments: HBASE-16667-01.patch, HBASE-16667-branch-1-v1.patch, 
> HBASE-16667-branch-1-v2.patch, HBASE-16667-v2.patch, HBASE-16667-v3.patch
>
>
> In JDK 8 the permgen was removed.
> As a consequence the build shows this line a lot of times, cluttering the 
> output.
> {quote}
> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support 
> was removed in 8.0
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data

2016-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520319#comment-15520319
 ] 

Hudson commented on HBASE-16604:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #23 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/23/])
HBASE-16604 Scanner retries on IOException can cause the scans to miss 
(jerryjch: rev 49a4980e6dac1e74275ae5b042b01cd27efc8ebd)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DelegatingKeyValueScanner.java


> Scanner retries on IOException can cause the scans to miss data 
> 
>
> Key: HBASE-16604
> URL: https://issues.apache.org/jira/browse/HBASE-16604
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: HBASE-16604-branch-1.3-addendum.patch, 
> hbase-16604_v1.patch, hbase-16604_v2.patch, hbase-16604_v3.branch-1.patch, 
> hbase-16604_v3.patch
>
>
> Debugging an ITBLL failure, where the Verify did not "see" all the data in 
> the cluster, I've noticed that if we end up getting a generic IOException 
> from the HFileReader level, we may end up missing the rest of the data in the 
> region. I was able to manually test this, and this stack trace helps to 
> understand what is going on: 
> {code}
> 2016-09-09 16:27:15,633 INFO  [hconnection-0x71ad3d8a-shared--pool21-t9] 
> client.ScannerCallable(376): Open scanner=1 for 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  on region 
> region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee.,
>  hostname=hw10676,51833,1473463626529, seqNum=2
> 2016-09-09 16:27:15,634 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: 
> 100 close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true renew: false
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2510): Rolling back next call seqId
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2565): Throwing new 
> ServiceExceptionjava.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key 
> /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0
> 2016-09-09 16:27:15,635 DEBUG 
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] ipc.CallRunner(110): 
> B.fifo.QRpcServer.handler=5,queue=0,port=51833: callId: 26 service: 
> ClientService methodName: Scan size: 26 connection: 192.168.42.75:51903
> java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
> reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key 
> 

[jira] [Updated] (HBASE-16665) Check whether KeyValueUtil.createXXX could be replaced by CellUtil without copy

2016-09-25 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16665:
--
   Resolution: Fixed
 Assignee: Heng Chen
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

> Check whether KeyValueUtil.createXXX could be replaced by CellUtil without 
> copy
> ---
>
> Key: HBASE-16665
> URL: https://issues.apache.org/jira/browse/HBASE-16665
> Project: HBase
>  Issue Type: Bug
>Reporter: Heng Chen
>Assignee: Heng Chen
> Fix For: 2.0.0
>
> Attachments: HBASE-16665.patch, HBASE-16665.v1.patch, 
> HBASE-16665.v2.patch, HBASE-16665.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15215) TestBlockEvictionFromClient is flaky in jdk1.7 build

2016-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520310#comment-15520310
 ] 

Hadoop QA commented on HBASE-15215:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} 
| {color:red} HBASE-15215 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789529/HBASE-15215_offheap.patch
 |
| JIRA Issue | HBASE-15215 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3708/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestBlockEvictionFromClient is flaky in jdk1.7 build
> 
>
> Key: HBASE-15215
> URL: https://issues.apache.org/jira/browse/HBASE-15215
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15215_offheap.patch
>
>
> This is the 2nd time I am noticing this. 
> {code}
> Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 76.187 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.client.TestBlockEvictionFromClient
> testReverseScanWithCompaction(org.apache.hadoop.hbase.client.TestBlockEvictionFromClient)
>   Time elapsed: 5.812 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testScanWithCompactionInternals(TestBlockEvictionFromClient.java:922)
>   at 
> org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testReverseScanWithCompaction(TestBlockEvictionFromClient.java:857)
> {code}
> Generally the jdk1.8 build does not have this failure. Need to investigate 
> the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15215) TestBlockEvictionFromClient is flaky in jdk1.7 build

2016-09-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520307#comment-15520307
 ] 

Appy commented on HBASE-15215:
--

It's failing like crazy now. 
(https://builds.apache.org/job/HBase-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html)
[~stack] [~ram_krish] ptal.

> TestBlockEvictionFromClient is flaky in jdk1.7 build
> 
>
> Key: HBASE-15215
> URL: https://issues.apache.org/jira/browse/HBASE-15215
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15215_offheap.patch
>
>
> This is the 2nd time I am noticing this. 
> {code}
> Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 76.187 sec 
> <<< FAILURE! - in org.apache.hadoop.hbase.client.TestBlockEvictionFromClient
> testReverseScanWithCompaction(org.apache.hadoop.hbase.client.TestBlockEvictionFromClient)
>   Time elapsed: 5.812 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testScanWithCompactionInternals(TestBlockEvictionFromClient.java:922)
>   at 
> org.apache.hadoop.hbase.client.TestBlockEvictionFromClient.testReverseScanWithCompaction(TestBlockEvictionFromClient.java:857)
> {code}
> Generally the jdk1.8 build does not have this failure. Need to investigate 
> the failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16664) Timeout logic in AsyncProcess is broken

2016-09-25 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520300#comment-15520300
 ] 

Heng Chen commented on HBASE-16664:
---

Not sure what your patch is trying to do.

But if you want the rpcTimeout to work, it seems you could just modify the 
remaining-time logic in CancellableRegionServerCallable. Not much code is 
needed, in my view.

> Timeout logic in AsyncProcess is broken
> ---
>
> Key: HBASE-16664
> URL: https://issues.apache.org/jira/browse/HBASE-16664
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16664-v1.patch, testhcm.patch
>
>
> Have not checked the root cause, but I think timeout of all operations in 
> AsyncProcess is broken



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16664) Timeout logic in AsyncProcess is broken

2016-09-25 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520289#comment-15520289
 ] 

Heng Chen commented on HBASE-16664:
---

{quote}
The tracker must be started from beginning, not each call.
{quote}
There is no difference between starting it from the beginning and doing it on 
each call; that logic is already handled inside tracker.start.

{quote}
And in fact we will create new CancellableRegionServerCallable in each 
retrying, so the operation timeout is broken. 
{quote}
No, the callable is created outside of AP for delete and mutate. Only the 
batch callable is created per thread.

{quote}
My idea is pass a deadline (currentTime+operationTimeout) when we submit, we 
just check the remaining time and get min of remaining and rpcTimeout for each 
call.
{quote}
It seems you just need to bound the remaining time for each call by the 
remaining operation time and the rpcTimeout.
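
A minimal sketch of that idea, with hypothetical names (CallTimeouts, 
operationTimeoutMs, rpcTimeoutMs are illustrative, not actual AsyncProcess 
fields): the deadline is fixed once at submit time, and each attempt's timeout 
is the smaller of the time left and the per-call rpc timeout.
{code}
import java.net.SocketTimeoutException;
import java.util.concurrent.TimeUnit;

final class CallTimeouts {
  /** Deadline fixed once at submit time: now + operation timeout. */
  static long deadlineNanos(long operationTimeoutMs) {
    return System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(operationTimeoutMs);
  }

  /** Timeout for one RPC attempt: min(time left until the deadline, rpc timeout). */
  static int callTimeoutMs(long deadlineNanos, int rpcTimeoutMs)
      throws SocketTimeoutException {
    long remainingMs = TimeUnit.NANOSECONDS.toMillis(deadlineNanos - System.nanoTime());
    if (remainingMs <= 0) {
      throw new SocketTimeoutException("operation timeout reached before the call");
    }
    return (int) Math.min(remainingMs, rpcTimeoutMs);
  }
}
{code}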




> Timeout logic in AsyncProcess is broken
> ---
>
> Key: HBASE-16664
> URL: https://issues.apache.org/jira/browse/HBASE-16664
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16664-v1.patch, testhcm.patch
>
>
> Have not checked the root cause, but I think timeout of all operations in 
> AsyncProcess is broken



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16704) Scan will broken while work with KeyValueCodecWithTags

2016-09-25 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15520283#comment-15520283
 ] 

binlijin commented on HBASE-16704:
--

Nice finding!

> Scan will broken while work with KeyValueCodecWithTags
> --
>
> Key: HBASE-16704
> URL: https://issues.apache.org/jira/browse/HBASE-16704
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Anoop Sam John
>
> Scans will always break if we set LIMIT to more than 1 while the region 
> server's hbase.client.rpc.codec is set to 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags.
> How to reproduce (a client-side sketch follows the steps):
> 1. 1 master + 1 regionserver, with the codec set to KeyValueCodecWithTags.
> 2. Create a table table_1024B_30g with 1 column family and only 1 qualifier, 
> then load some data with YCSB.
> 3. scan 'table_1024B_30g', {LIMIT => 2, STARTROW => 'user5499'}, where 
> STARTROW is any valid start row.
> 4. The scan fails.
> This should be a bug in KeyValueCodecWithTags; after some investigation, I 
> found that some keys were not serialized correctly.
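> A minimal Java client sketch of steps 1 and 3, assuming the table, row key, 
> and YCSB data from the steps above already exist and the cluster is 
> reachable; only the hbase.client.rpc.codec property and the codec class name 
> are taken from this description, everything else is illustrative:
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.*;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class ReproLimitScan {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     // Step 1: make the client encode cells with KeyValueCodecWithTags.
>     conf.set("hbase.client.rpc.codec",
>         "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
>     try (Connection conn = ConnectionFactory.createConnection(conf);
>          Table table = conn.getTable(TableName.valueOf("table_1024B_30g"))) {
>       // Step 3: scan from a valid start row and ask for more than one row.
>       Scan scan = new Scan();
>       scan.setStartRow(Bytes.toBytes("user5499"));
>       scan.setCaching(2);
>       try (ResultScanner scanner = table.getScanner(scan)) {
>         Result[] rows = scanner.next(2); // rough equivalent of LIMIT => 2
>         System.out.println("rows returned: " + rows.length);
>       }
>     }
>   }
> }
> {code}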



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)