[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530006#comment-14530006
 ] 

Anoop Sam John commented on HBASE-13579:


In both 0.98 and branch-1 patches
HFileReaderV2
 protected Cell formKeyValue() {
Why is this added as protected?  I cannot see it being extended anywhere.
In reader V2 we don't have tags, so just blindly return NoTagsKeyValue.
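For illustration, a minimal sketch of that idea (the helper class and the assumed 
(bytes, offset, length) constructor are mine, not taken from the patch):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.NoTagsKeyValue;

// Hypothetical helper, not the actual HFileReaderV2 code: since HFile v2 cells
// can never carry tags, the reader can hand back a NoTagsKeyValue directly,
// whose getTagsLength() is a constant 0 and needs no later parsing.
final class ReaderV2Cells {
  private ReaderV2Cells() {
  }

  static Cell toCell(byte[] blockBuffer, int kvOffset, int kvLength) {
    // Assumed constructor shape: (backing bytes, offset, length).
    return new NoTagsKeyValue(blockBuffer, kvOffset, kvLength);
  }
}
{code}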

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-13579_0.98.patch, HBASE-13579_1.patch, 
 HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).
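 For illustration only, a sketch of the gate being described (the class and method 
 names are placeholders, not the actual StoreScanner/HStore code); with a 
 NoTagsKeyValue the getTagsLength() call below is a constant-time 0:

{code}
import org.apache.hadoop.hbase.Cell;

// Placeholder sketch, not the real scanner code: pay for the cell-TTL tag walk
// only when the cell can actually carry tags.
final class CellTtlGate {
  private CellTtlGate() {
  }

  static boolean isCellTtlExpired(Cell cell, long oldestUnexpiredTs, long now) {
    if (cell.getTagsLength() == 0) {
      // No tags means no cell-level TTL tag, so nothing can be expired here.
      return false;
    }
    return walkTagsForTtl(cell, oldestUnexpiredTs, now);
  }

  // Stand-in for the real tag-walking logic in HStore.
  private static boolean walkTagsForTtl(Cell cell, long oldestUnexpiredTs, long now) {
    return false;
  }
}
{code}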



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-13562:

Attachment: sample.patch

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530040#comment-14530040
 ] 

ramkrishna.s.vasudevan commented on HBASE-13579:


[~apurtell]
You can check this and commit it to the 0.98 branch. I will leave this open till then. 
 Please feel free to close this with or without the 0.98 tag.

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530038#comment-14530038
 ] 

Ashish Singhi commented on HBASE-13562:
---

bq. Does it address your concerns?
Which concern do you mean?

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530067#comment-14530067
 ] 

Andrew Purtell commented on HBASE-13628:


Thanks for the fix! 

 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13251) Correct 'HBase, MapReduce, and the CLASSPATH' section in HBase Ref Guide

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530083#comment-14530083
 ] 

Hudson commented on HBASE-13251:


FAILURE: Integrated in HBase-TRUNK #6458 (See 
[https://builds.apache.org/job/HBase-TRUNK/6458/])
HBASE-13251 Correct HBase, MapReduce, and the CLASSPATH section in HBase Ref 
Guide (li xiang) (jerryjch: rev 664b2e4f11a06af2bc6d4876a3d6ed270b28e898)
* hbase-protocol/src/main/java/org/apache/hadoop/hbase/util/ByteStringer.java
* src/main/asciidoc/_chapters/mapreduce.adoc


 Correct 'HBase, MapReduce, and the CLASSPATH' section in HBase Ref Guide
 

 Key: HBASE-13251
 URL: https://issues.apache.org/jira/browse/HBASE-13251
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Jerry He
Assignee: li xiang
  Labels: documentation
 Fix For: 2.0.0

 Attachments: HBASE-13251-v1.patch, HBASE-13251-v2.patch


 As [~busbey] pointed out in HBASE-13149, we have a section HBase, MapReduce, 
 and the CLASSPATH in the HBase Ref Guide.
 http://hbase.apache.org/book.html#hbase.mapreduce.classpath
 There are duplication, errors and misinformation in the section.
 Need to cleanup and polish it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13510) Refactor Bloom filters to make use of Cell Comparators in case of ROW_COL

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530140#comment-14530140
 ] 

ramkrishna.s.vasudevan commented on HBASE-13510:


Ping for reviews here!!!

 Refactor Bloom filters to make use of Cell Comparators in case of ROW_COL
 -

 Key: HBASE-13510
 URL: https://issues.apache.org/jira/browse/HBASE-13510
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13510_1.patch


 In order to address the comments over in HBASE-10800 related to comparing a 
 Cell with a serialized KV's key, we had some need for that in Bloom filters.  
 After discussing with Anoop, we found that it may be possible to 
 remove/modify some of the APIs in the BloomFilter interfaces, and for doing 
 that we can purge ByteBloomFilter.  
 I read the code and found that ByteBloomFilter was being used only in V1.  
 Now that it is obsolete, we can remove this code and move some of the 
 static APIs in ByteBloomFilter to some other util class or Bloom-related 
 classes, which will help us in refactoring the code too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13593) Quota support for namespace should take restore and clone snapshot into account

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530172#comment-14530172
 ] 

Ashish Singhi commented on HBASE-13593:
---

{quote}
-1 core tests. The patch failed these unit tests:
 org.apache.hadoop.hbase.util.TestProcessBasedCluster
 org.apache.hadoop.hbase.mapreduce.TestImportExport
{quote}
These tests look flaky; they are failing 
[here|https://issues.apache.org/jira/browse/HBASE-13358?focusedCommentId=14522780&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14522780] 
and [here 
also|https://issues.apache.org/jira/browse/HBASE-13609?focusedCommentId=14529222&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14529222].

 Quota support for namespace should take restore and clone snapshot into 
 account
 ---

 Key: HBASE-13593
 URL: https://issues.apache.org/jira/browse/HBASE-13593
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 1.1.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: 13593-v3.patch, HBASE-13593-branch-1.patch, 
 HBASE-13593-v1-.patch, HBASE-13593-v2.patch, HBASE-13593-v3.patch, 
 HBASE-13593.patch


 Quota support for namespace should take restore and clone snapshot into 
 account.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530215#comment-14530215
 ] 

Hudson commented on HBASE-13579:


FAILURE: Integrated in HBase-1.1 #468 (See 
[https://builds.apache.org/job/HBase-1.1/468/])
HBASE-13579 - Avoid isCellTTLExpired() for NO-TAG cases (Ram) (ramkrishna: rev 
1270698f69c06c9b3cfb120d06a08372a63fc3c5)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/NoTagsKeyValue.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java


 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530214#comment-14530214
 ] 

Hudson commented on HBASE-13628:


FAILURE: Integrated in HBase-1.1 #468 (See 
[https://builds.apache.org/job/HBase-1.1/468/])
HBASE-13628 Use AtomicLong as size in BoundedConcurrentLinkedQueue (zhangduo: 
rev ca8f59ee64f4f2410ffe49259bb7b0eef1ab1eee)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedConcurrentLinkedQueue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedConcurrentLinkedQueue.java


 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530233#comment-14530233
 ] 

Ashish Singhi commented on HBASE-13562:
---

Attached a v1 patch which covers all the permission checks for the operations in 
the master interface in the TestAccessController class.
Please review.

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562-v1.patch, HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530156#comment-14530156
 ] 

Hudson commented on HBASE-13579:


SUCCESS: Integrated in HBase-1.2 #61 (See 
[https://builds.apache.org/job/HBase-1.2/61/])
HBASE-13579 - Avoid isCellTTLExpired() for NO-TAG cases (Ram) (ramkrishna: rev 
426c7eef09af0a4f306c6cc0f70f994b01f68ad6)
* hbase-common/src/main/java/org/apache/hadoop/hbase/NoTagsKeyValue.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java


 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13510) Refactor Bloom filters to make use of Cell Comparators in case of ROW_COL

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530286#comment-14530286
 ] 

ramkrishna.s.vasudevan commented on HBASE-13510:


bq. The CompoundBloomFilterWriter uses a ByteBloomFilter chunk and its state. Can 
we move this state to CompoundBloomFilter or so? There are some static 
methods in ByteBloomFilter which are used from other places; those we can also 
move into other appropriate places. 
In this patch ByteBloomFilter is still a BloomFilterWriter. My thinking is we 
can avoid that as well.
I had a patch where all the statics were moved to BloomFilterUtils.  But I think 
it was suggested that that would make the patch bigger.  So the chunk that we create 
as a ByteBloomFilter should directly be a BloomFilterWriter, and we should call 
it BloomFilterWriterImpl.
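To make the proposal concrete, a rough sketch of the shape being discussed 
(BloomFilterUtils and BloomFilterWriterImpl are only the names floated in this 
comment; the bodies below are illustrative, not real HBase code):

{code}
// Illustrative only: holder for the former ByteBloomFilter statics.
final class BloomFilterUtils {
  private BloomFilterUtils() {
  }

  // Example of a former static: bloom bit-array sizing, m = -n * ln(p) / (ln 2)^2.
  static long computeBitSize(long maxKeys, double errorRate) {
    return (long) Math.ceil(-maxKeys * Math.log(errorRate) / (Math.log(2) * Math.log(2)));
  }
}

// The chunk type the comment calls BloomFilterWriterImpl: it would own the
// bloom bits directly instead of wrapping a ByteBloomFilter.
class BloomFilterWriterImpl {
  private final byte[] bloomBits;
  private int keyCount;

  BloomFilterWriterImpl(long maxKeys, double errorRate) {
    long bits = BloomFilterUtils.computeBitSize(maxKeys, errorRate);
    this.bloomBits = new byte[(int) ((bits + 7) / 8)];
  }

  void add(byte[] key, int offset, int length) {
    // Hashing and bit-setting omitted in this sketch.
    keyCount++;
  }
}
{code}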

 Refactor Bloom filters to make use of Cell Comparators in case of ROW_COL
 -

 Key: HBASE-13510
 URL: https://issues.apache.org/jira/browse/HBASE-13510
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13510_1.patch


 In order to address the comments over in HBASE-10800 related to comparing a 
 Cell with a serialized KV's key, we had some need for that in Bloom filters.  
 After discussing with Anoop, we found that it may be possible to 
 remove/modify some of the APIs in the BloomFilter interfaces, and for doing 
 that we can purge ByteBloomFilter.  
 I read the code and found that ByteBloomFilter was being used only in V1.  
 Now that it is obsolete, we can remove this code and move some of the 
 static APIs in ByteBloomFilter to some other util class or Bloom-related 
 classes, which will help us in refactoring the code too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13593) Quota support for namespace should take restore and clone snapshot into account

2015-05-06 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13593:
--
Fix Version/s: 1.1.1
   1.2.0

 Quota support for namespace should take restore and clone snapshot into 
 account
 ---

 Key: HBASE-13593
 URL: https://issues.apache.org/jira/browse/HBASE-13593
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 1.1.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: 13593-v3.patch, HBASE-13593-branch-1.patch, 
 HBASE-13593-v1-.patch, HBASE-13593-v2.patch, HBASE-13593-v3.patch, 
 HBASE-13593.patch


 Quota support for namespace should take restore and clone snapshot into 
 account.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530247#comment-14530247
 ] 

Hudson commented on HBASE-13628:


FAILURE: Integrated in HBase-TRUNK #6459 (See 
[https://builds.apache.org/job/HBase-TRUNK/6459/])
HBASE-13628 Use AtomicLong as size in BoundedConcurrentLinkedQueue (zhangduo: 
rev 652929c0ff8c8cec1e86ded834f3e770422b2ace)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedConcurrentLinkedQueue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedConcurrentLinkedQueue.java


 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530281#comment-14530281
 ] 

Hadoop QA commented on HBASE-13562:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730739/sample.patch
  against master branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730739

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+grantOnNamespace(TEST_UTIL, USER_NS_ADMIN.getShortName(), 
TEST_NAMESPACE, Permission.Action.ADMIN);
+grantOnNamespace(TEST_UTIL, USER_NS_CREATE.getShortName(), TEST_NAMESPACE, 
Permission.Action.CREATE);
+grantOnNamespace(TEST_UTIL, USER_NS_WRITE.getShortName(), TEST_NAMESPACE, 
Permission.Action.WRITE);
+grantOnNamespace(TEST_UTIL, USER_NS_READ.getShortName(), TEST_NAMESPACE, 
Permission.Action.READ);
+grantOnNamespace(TEST_UTIL, USER_NS_EXEC.getShortName(), TEST_NAMESPACE, 
Permission.Action.EXEC);
+revokeFromNamespace(TEST_UTIL, USER_NS_ADMIN.getShortName(), 
TEST_NAMESPACE, Permission.Action.ADMIN);
+revokeFromNamespace(TEST_UTIL, USER_NS_CREATE.getShortName(), 
TEST_NAMESPACE, Permission.Action.CREATE);
+revokeFromNamespace(TEST_UTIL, USER_NS_WRITE.getShortName(), 
TEST_NAMESPACE, Permission.Action.WRITE);
+revokeFromNamespace(TEST_UTIL, USER_NS_READ.getShortName(), 
TEST_NAMESPACE, Permission.Action.READ);
+revokeFromNamespace(TEST_UTIL, USER_NS_EXEC.getShortName(), 
TEST_NAMESPACE, Permission.Action.EXEC);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13957//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13957//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13957//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13957//console

This message is automatically generated.

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562-v1.patch, HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13366) Throw DoNotRetryIOException instead of read only IOException

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530157#comment-14530157
 ] 

Hudson commented on HBASE-13366:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #931 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/931/])
HBASE-13366 Throw DoNotRetryIOException instead of read only IOException 
(Shaohui Liu) (apurtell: rev 593db050500e69bb87d5666ac235e1588d0f268b)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


 Throw DoNotRetryIOException instead of read only IOException
 

 Key: HBASE-13366
 URL: https://issues.apache.org/jira/browse/HBASE-13366
 Project: HBase
  Issue Type: Improvement
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13366-v1.diff


 Currently, a read-only region just throws an IOException to the clients that 
 send write requests to it. This causes the clients to retry for the configured 
 number of times or until the operation timeout.
 Changing this exception to DoNotRetryIOException will make the client fail 
 fast.
 Suggestions are welcome~ Thanks
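 As a sketch of the behavior change (the real check lives in HRegion; this 
 stand-alone method only illustrates the exception choice):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.DoNotRetryIOException;

// Illustration only, not the HRegion source.
final class ReadOnlyCheck {
  private ReadOnlyCheck() {
  }

  static void checkWriteAllowed(boolean regionIsReadOnly, String regionName) throws IOException {
    if (regionIsReadOnly) {
      // DoNotRetryIOException is not retried by the client, so the write
      // fails immediately instead of looping until the operation timeout.
      throw new DoNotRetryIOException("Region " + regionName + " is read only");
    }
  }
}
{code}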



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-1989) Admin (et al.) not accurate with Column vs. Column-Family usage

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530174#comment-14530174
 ] 

Anoop Sam John commented on HBASE-1989:
---

bq.void deleteColumnFamily(final TableName tableName, final byte[] columnName) 
throws IOException
Can you make the param name 'columnFamily'?

HBaseAdmin.java
{quote}
addColumnFamily(final byte[] tableName, HColumnDescriptor columnFamily)
addColumnFamily(final String tableName, HColumnDescriptor columnFamily)
{quote}
Maybe there is no need to add these newly. Just deprecate their counterparts, with 
the replacement being 
addColumnFamily(final TableName tableName, final HColumnDescriptor columnFamily)
(?)
The same applies for delete/modify.
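Something like the usual deprecation pattern, sketched here on a stand-in class 
rather than the actual HBaseAdmin source:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;

// Stand-in for the Admin API, only to show the suggested deprecation shape:
// keep the byte[]/String overloads as thin deprecated delegates.
abstract class AdminFacade {

  /** Canonical signature with the clearer parameter name. */
  public abstract void addColumnFamily(TableName tableName, HColumnDescriptor columnFamily)
      throws IOException;

  /** @deprecated use {@link #addColumnFamily(TableName, HColumnDescriptor)} */
  @Deprecated
  public void addColumnFamily(byte[] tableName, HColumnDescriptor columnFamily)
      throws IOException {
    addColumnFamily(TableName.valueOf(tableName), columnFamily);
  }

  /** @deprecated use {@link #addColumnFamily(TableName, HColumnDescriptor)} */
  @Deprecated
  public void addColumnFamily(String tableName, HColumnDescriptor columnFamily)
      throws IOException {
    addColumnFamily(TableName.valueOf(tableName), columnFamily);
  }
}
{code}

The same delegate-and-deprecate shape would apply to the delete/modify counterparts.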

 Admin (et al.) not accurate with Column vs. Column-Family usage
 ---

 Key: HBASE-1989
 URL: https://issues.apache.org/jira/browse/HBASE-1989
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.20.1, 0.90.1
Reporter: Doug Meil
Assignee: Lars Francke
Priority: Minor
 Attachments: HBASE-1989.patch, hbase1989.patch


 Consider the classes Admin and HColumnDescriptor.
 HColumnDescriptor is really referring to a column family and not a column 
 (i.e., family:qualifier).
 Likewise, in Admin there is a method called addColumn that takes an 
 HColumnDescriptor instance.
 I labeled this a bug in the sense that it produces conceptual confusion 
 because there is a big difference between a column and column-family in HBase 
 and these terms should be used consistently.  The code works, though.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13510) Refactor Bloom filters to make use of Cell Comparators in case of ROW_COL

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530265#comment-14530265
 ] 

Anoop Sam John commented on HBASE-13510:


{code}
public KeyValue createBloomKeyValue(byte[] rowBuf, int rowOffset, int rowLen,
byte[] qualBuf, int qualOffset, int qualLen) {
// Ideally should not be called here
return null;
}
{code}
Agree that we will never get a call here. Still, it looks like a problematic 
statement.  My suggestion would be to get rid of ByteBloomFilter. The 
CompoundBloomFilterWriter uses a ByteBloomFilter chunk and its state. Can we move 
this state to CompoundBloomFilter or so?   There are some static methods in 
ByteBloomFilter which are used from other places; those we can also move into 
other appropriate places. 
In this patch ByteBloomFilter is still a BloomFilterWriter. My thinking is we 
can avoid that as well.

 Refactor Bloom filters to make use of Cell Comparators in case of ROW_COL
 -

 Key: HBASE-13510
 URL: https://issues.apache.org/jira/browse/HBASE-13510
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13510_1.patch


 In order to address the comments over in HBASE-10800 related to comparing a 
 Cell with a serialized KV's key, we had some need for that in Bloom filters.  
 After discussing with Anoop, we found that it may be possible to 
 remove/modify some of the APIs in the BloomFilter interfaces, and for doing 
 that we can purge ByteBloomFilter.  
 I read the code and found that ByteBloomFilter was being used only in V1.  
 Now that it is obsolete, we can remove this code and move some of the 
 static APIs in ByteBloomFilter to some other util class or Bloom-related 
 classes, which will help us in refactoring the code too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530299#comment-14530299
 ] 

Hadoop QA commented on HBASE-13579:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12730740/HBASE-13579_0.98_1.patch
  against 0.98 branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730740

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
25 warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3833 checkstyle errors (more than the master's current 3831 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13956//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13956//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13956//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13956//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13956//console

This message is automatically generated.

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved HBASE-13632.

  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to branch-1+ and 0.98. Thanks for the review.

 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530201#comment-14530201
 ] 

Hadoop QA commented on HBASE-13579:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12730735/HBASE-13579_branch-1_1.patch
  against branch-1.1 branch at commit 664b2e4f11a06af2bc6d4876a3d6ed270b28e898.
  ATTACHMENT ID: 12730735

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3800 checkstyle errors (more than the master's current 3799 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13955//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13955//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13955//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13955//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13955//console

This message is automatically generated.

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530183#comment-14530183
 ] 

Hudson commented on HBASE-13579:


SUCCESS: Integrated in HBase-1.0 #903 (See 
[https://builds.apache.org/job/HBase-1.0/903/])
HBASE-13579 - Avoid isCellTTLExpired() for NO-TAG cases (Ram) (ramkrishna: rev 
76558b2b55015e3b75259cc0e8b0530a4c590771)
* hbase-common/src/main/java/org/apache/hadoop/hbase/NoTagsKeyValue.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java


 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530182#comment-14530182
 ] 

Hudson commented on HBASE-13628:


SUCCESS: Integrated in HBase-1.0 #903 (See 
[https://builds.apache.org/job/HBase-1.0/903/])
HBASE-13628 Use AtomicLong as size in BoundedConcurrentLinkedQueue (zhangduo: 
rev 75d08ce6d2e5acbefa9f0afd6628db74bafdcce4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedConcurrentLinkedQueue.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedConcurrentLinkedQueue.java


 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13562:
--
Attachment: HBASE-13562-v1.patch

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562-v1.patch, HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530018#comment-14530018
 ] 

ramkrishna.s.vasudevan commented on HBASE-13579:


Updated branch-1 patch.

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-13579_0.98.patch, HBASE-13579_1.patch, 
 HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13579:
---
Attachment: HBASE-13579_branch-1_1.patch

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-13579_0.98.patch, HBASE-13579_1.patch, 
 HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12957) region_mover#isSuccessfulScan may be extremely slow on region with lots of expired data

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530035#comment-14530035
 ] 

Hudson commented on HBASE-12957:


FAILURE: Integrated in HBase-1.0 #902 (See 
[https://builds.apache.org/job/HBase-1.0/902/])
HBASE-12957 region_mover#isSuccessfulScan may be extremely slow on region with 
lots of expired data. (hongyu bi) (larsh: rev 
be397afc15f85be14dfb6a13473491fbe04ff5fe)
* bin/region_mover.rb


 region_mover#isSuccessfulScan may be extremely slow on region with lots of 
 expired data
 ---

 Key: HBASE-12957
 URL: https://issues.apache.org/jira/browse/HBASE-12957
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Reporter: hongyu bi
Assignee: hongyu bi
Priority: Minor
 Fix For: 2.0.0, 1.1.0, 0.98.11, 1.0.2

 Attachments: HBASE-12957-v0.patch


 region_mover will call isSuccessfulScan when a region has moved, to make sure 
 it's healthy. However, if the moved region has lots of expired data, 
 region_mover#isSuccessfulScan will take a long time to finish, which may even 
 exceed the lease timeout. So I made isSuccessfulScan a get-like scan to achieve 
 a faster response in that case. 
 Workaround: before graceful_stop/rolling_restart, call major_compact on the 
 table with a small TTL.
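 For illustration, a Java sketch of what a "get-like scan" could look like with the 
 client API (region_mover itself is a JRuby script, and the filter choice below is an 
 assumption, not necessarily what the patch does):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

// Build a scan that behaves like a point probe on the region's start key:
// fetch at most one cell, and no values, so the health check returns quickly.
final class GetLikeScan {
  private GetLikeScan() {
  }

  static Scan forRegionStartKey(byte[] startKey) {
    Scan scan = new Scan(startKey);
    scan.setBatch(1);
    scan.setCaching(1);
    scan.setFilter(new FilterList(new FirstKeyOnlyFilter(), new KeyOnlyFilter()));
    return scan;
  }
}
{code}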



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-13628:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to all branches.

Thanks [~apurtell] and [~stack].

 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530090#comment-14530090
 ] 

Anoop Sam John commented on HBASE-13632:


+1

 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13448:
---
Fix Version/s: 2.0.0

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-13448.patch, HBASE-13448_V2.patch, gc.png, hits.png


 This can be an extension to KeyValue and can be instantiated and used in the 
 read path.
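 A rough sketch of the idea (the class below is illustrative, not the attached 
 patch): parse the component offsets/lengths once at construction time and serve 
 them from fields instead of re-deriving them from the backing byte[] on every 
 accessor call.

{code}
import org.apache.hadoop.hbase.KeyValue;

// Illustrative KeyValue extension with cached component offsets/lengths.
class CachedOffsetsKeyValue extends KeyValue {
  private final int rowOffset;
  private final short rowLength;
  private final int familyOffset;
  private final byte familyLength;
  private final int qualifierOffset;
  private final int qualifierLength;

  CachedOffsetsKeyValue(byte[] bytes, int offset, int length) {
    super(bytes, offset, length);
    // Compute once here; the parent derives these from the byte[] on each call.
    this.rowOffset = super.getRowOffset();
    this.rowLength = super.getRowLength();
    this.familyOffset = super.getFamilyOffset();
    this.familyLength = super.getFamilyLength();
    this.qualifierOffset = super.getQualifierOffset();
    this.qualifierLength = super.getQualifierLength();
  }

  @Override
  public int getRowOffset() {
    return rowOffset;
  }

  @Override
  public short getRowLength() {
    return rowLength;
  }

  @Override
  public int getFamilyOffset() {
    return familyOffset;
  }

  @Override
  public byte getFamilyLength() {
    return familyLength;
  }

  @Override
  public int getQualifierOffset() {
    return qualifierOffset;
  }

  @Override
  public int getQualifierLength() {
    return qualifierLength;
  }
}
{code}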



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13448:
---
Attachment: HBASE-13448_V2.patch

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-13448.patch, HBASE-13448_V2.patch, gc.png, hits.png


 This can be an extension to KeyValue and can be instantiated and used in the 
 read path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13448:
---
Status: Patch Available  (was: Open)

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-13448.patch, HBASE-13448_V2.patch, gc.png, hits.png


 This can be an extension to KeyValue and can be instantiated and used in the 
 read path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530023#comment-14530023
 ] 

Anoop Sam John commented on HBASE-13579:


+1

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-13579_0.98.patch, HBASE-13579_1.patch, 
 HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we always call 
 isCellTTLExpired() for every cell, and internally it parses the keyLength and 
 valueLength to get the tagsLength, after which we decide whether a cell-level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the StoreScanner 
 know that there are no tags to read.  Note that for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack has already raised, to 
 avoid the tag length while flushing (for the NO-TAG case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13428) Migration to hbase-2.0.0

2015-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13428:
---
Fix Version/s: 2.0.0

 Migration to hbase-2.0.0
 

 Key: HBASE-13428
 URL: https://issues.apache.org/jira/browse/HBASE-13428
 Project: HBase
  Issue Type: Umbrella
  Components: migration
Reporter: stack
 Fix For: 2.0.0


 Opening a 2.0 umbrella migration issue. Let's hang off this one any tools and 
 expectations for migrating from 1.0 (or earlier) to 2.0. So far there are none 
 that I know of, though there is an expectation in HBASE-13373 that hfiles are 
 at least major version 2 and minor version 3.  Let's list all such 
 expectations, etc., here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530032#comment-14530032
 ] 

Srikanth Srungarapu commented on HBASE-13562:
-

Fair enough... What do you think of the attached sample patch? Does it address 
your concerns?

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530030#comment-14530030
 ] 

zhangduo commented on HBASE-13628:
--

{code}
for (T element; (element = super.poll()) != null;) {
{code}
This is reported by checkstyle as an 'InnerAssignment' issue.

This is a common style when polling from a queue, so I think it is fine?
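For reference, the two equivalent drain loops side by side (on a generic queue, just 
to show what checkstyle's InnerAssignment rule objects to and the usual alternative):

{code}
import java.util.concurrent.ConcurrentLinkedQueue;

final class DrainLoops {
  private DrainLoops() {
  }

  static <T> int drainWithInnerAssignment(ConcurrentLinkedQueue<T> queue) {
    int drained = 0;
    // The idiom checkstyle flags: assignment inside the loop condition.
    for (T element; (element = queue.poll()) != null;) {
      drained++;
    }
    return drained;
  }

  static <T> int drainWithoutInnerAssignment(ConcurrentLinkedQueue<T> queue) {
    int drained = 0;
    // Checkstyle-friendly version: hoist the assignment out of the condition.
    while (true) {
      T element = queue.poll();
      if (element == null) {
        break;
      }
      drained++;
    }
    return drained;
  }
}
{code}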

 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530031#comment-14530031
 ] 

Srikanth Srungarapu commented on HBASE-13562:
-

Cool! Just want to be sure we aren't missing out on anything.

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13579:
---
Fix Version/s: (was: 0.98.13)
   (was: 1.0.1)
   (was: 1.0.0)
   1.2.0
   1.0.2
   0.98.14

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we are always calling the 
 isCellTTLExpired() for every cell and internally it is parsing the keyLength, 
 valueLength() to get the tagsLength after which we decide whether Cell level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the storescanner 
 know that there are no tags to read.  Note that, for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack had already raised to 
 avoid tag length while flushing (for the NO-TAG) case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530057#comment-14530057
 ] 

Ashish Singhi commented on HBASE-13562:
---

ok

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530056#comment-14530056
 ] 

Ashish Singhi commented on HBASE-13562:
---

ok

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13632:
---
Attachment: HBASE-13368_branch-1.patch
HBASE-13368_0.98.patch

Patches for 0.98 and branch-1. Will commit this unless there are objections.

 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-13632:
--

 Summary: Backport HBASE-13368 to branch-1 and 0.98
 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530024#comment-14530024
 ] 

Ashish Singhi commented on HBASE-13562:
---

bq. Also one more thing, in HBASE-13359 we added the missing table owner. 
Haven't pushed the changes to website yet. You might want to factor that in. 
That is already handled in the existing test cases.

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530036#comment-14530036
 ] 

Andrew Purtell commented on HBASE-13628:


Seems fine 

 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13579:
---
Attachment: HBASE-13579_0.98_1.patch

Updated 0.98 patch

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we are always calling the 
 isCellTTLExpired() for every cell and internally it is parsing the keyLength, 
 valueLength() to get the tagsLength after which we decide whether Cell level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the storescanner 
 know that there are no tags to read.  Note that, for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack had already raised to 
 avoid tag length while flushing (for the NO-TAG) case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530003#comment-14530003
 ] 

Anoop Sam John commented on HBASE-13387:


This will be a big patch. So I am planning to split it into multiple tasks and 
make the core code ready for accepting BB-backed cells.
- Make Tag an interface with separate impl(s)
- Deprecate filterRowKey(byte[] buffer, int offset, int length) in favor of 
filterRowKey(Cell firstRowCell)
- Deprecate the postScannerFilterRow CP hook taking byte[], int, int args in favor of 
one taking a Cell arg
- Change ColumnTracker methods to pass a Cell instead of byte[], int, int for the 
column
- Remove CellComparator#compareRows(byte[], int, int, byte[], int, int)
- CellUtil - more typed getters for Cell components (like getRowAsInt, 
getValueAsLong etc); a rough sketch is below
.
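As a rough illustration of the last item, one such typed getter could look like this (a sketch only; the final names and shape may differ):
{code}
// Illustrative sketch: read a Cell's value as a long without an intermediate
// copy, using the existing array-based Cell accessors.
public static long getValueAsLong(Cell cell) {
  return Bytes.toLong(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength());
}
{code}
Once BB-backed cells are in, the same method can branch to the buffer-based accessors instead of getValueArray().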

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up in the discussion on the parent Jira, and recently Stack added it 
 as a comment on the E2E patch on the parent Jira.
 The idea is to add a new interface 'ByteBufferedCell' in which we can add 
 new buffer-based getter APIs and getters for the position of each component in the BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators have to be aware of this new Cell extension and have to use 
 the BB-based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAsType APIs, etc.).
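 A minimal sketch of how such an extension could look (method names here are assumptions for discussion, not the final API):
{code}
// Illustrative sketch only: buffer-based accessors plus position getters,
// kept Private so it never becomes part of the public Cell contract.
@InterfaceAudience.Private
public interface ByteBufferedCell extends Cell {
  ByteBuffer getRowByteBuffer();
  int getRowPositionInByteBuffer();

  ByteBuffer getQualifierByteBuffer();
  int getQualifierPositionInByteBuffer();

  ByteBuffer getValueByteBuffer();
  int getValuePositionInByteBuffer();
  // ... similar pairs for the family and tags components.
}
{code}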



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530002#comment-14530002
 ] 

Ashish Singhi commented on HBASE-13562:
---

I had the same thought too.
But then I saw that all the AC permission checks for namespaces were in 
TestNamespaceCommands, so I thought of maintaining the same split.
For example, compare {{TestAccessController#testTableCreate}} and 
{{TestNamespaceCommands#testCreateTableWithNamespace}}.

What would you suggest now?

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13428) Migration to hbase-2.0.0

2015-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13428:
---
Priority: Blocker  (was: Major)

 Migration to hbase-2.0.0
 

 Key: HBASE-13428
 URL: https://issues.apache.org/jira/browse/HBASE-13428
 Project: HBase
  Issue Type: Umbrella
  Components: migration
Reporter: stack
Priority: Blocker
 Fix For: 2.0.0


 Opening a 2.0 umbrella migration issue. Let's hang off this one any tools and 
 expectations for migrating from 1.0 (or earlier) to 2.0. So far there are none 
 that I know of, though there is an expectation in HBASE-13373 that hfiles are 
 at least major version 2 and minor version 3.  Let's list all such 
 expectations, etc., here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13631) Migration from 0.94 to 2.0.0

2015-05-06 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-13631:
--

 Summary: Migration from 0.94 to 2.0.0
 Key: HBASE-13631
 URL: https://issues.apache.org/jira/browse/HBASE-13631
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John


We have only HFile V2 (minor version 2) in 0.94, and 2.0 needs HFile V3 with 
minor version 3 at least. We can test and clearly document the upgrade path 
from 0.94.x to 2.0.0




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530039#comment-14530039
 ] 

zhangduo commented on HBASE-13628:
--

OK, let me commit.

 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530042#comment-14530042
 ] 

Srikanth Srungarapu commented on HBASE-13562:
-

Never mind. Can we pursue the direction suggested by the sample patch, i.e. using 
a single test to cover all permissions?
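Roughly along these lines (an illustrative sketch in the spirit of sample.patch; the verifyAllowed/verifyDenied helpers and the user fields are the existing SecureTestUtil/TestAccessController ones, but treat the exact wiring as a sketch, not the patch itself):
{code}
// Illustrative sketch only: drive one action through the whole user set and
// assert the corresponding row of the ACL matrix in a single test.
@Test
public void testTableCreateAllCombinations() throws Exception {
  final HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("testnewtable"));
  htd.addFamily(new HColumnDescriptor(TEST_FAMILY));
  AccessTestAction createTable = new AccessTestAction() {
    @Override
    public Object run() throws Exception {
      ACCESS_CONTROLLER.preCreateTable(ObserverContext.createAndPrepare(CP_ENV, null),
          htd, null);
      return null;
    }
  };
  // Allowed per the ACL matrix for CREATE at table scope...
  verifyAllowed(createTable, SUPERUSER, USER_ADMIN, USER_CREATE);
  // ...and every remaining scope/permission combination should be denied.
  verifyDenied(createTable, USER_RW, USER_RO, USER_NONE);
}
{code}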

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-13625:
---
Attachment: HBASE-13625-v2.patch

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -FileSystem fs = FileSystem.get(job.getConfiguration());
 -Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +FileSystem fs = FileSystem.get(conf);
 +Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - The ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.init(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.
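 For illustration, the kind of change this implies (the config name below is an assumption, not necessarily what the patch uses):
{code}
// Illustrative sketch: resolve the partitions file against the default (DFS)
// filesystem via a dedicated staging config instead of 'hadoop.tmp.dir'.
Configuration conf = job.getConfiguration();
FileSystem fs = FileSystem.get(conf);
// "hbase.fs.tmp.dir" is assumed here; it should name a directory on HDFS.
Path stagingDir = new Path(conf.get("hbase.fs.tmp.dir",
    "/user/" + UserGroupInformation.getCurrentUser().getShortUserName() + "/hbase-staging"));
Path partitionsPath = fs.makeQualified(new Path(stagingDir, "partitions_" + UUID.randomUUID()));
fs.deleteOnExit(partitionsPath);
writePartitions(conf, partitionsPath, splitPoints);  // existing helper in HFileOutputFormat2
{code}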



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530113#comment-14530113
 ] 

Aditya Kishore commented on HBASE-13625:


In HFileOutputFormat2, we should probably create the parent of 
{{partitionsPath}}.
bq. fs.mkdirs(partitionsPath.getParent());

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -FileSystem fs = FileSystem.get(job.getConfiguration());
 -Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +FileSystem fs = FileSystem.get(conf);
 +Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - The ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.init(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13580) region_mover.rb broken with TypeError: no public constructors for Java::OrgApacheHadoopHbaseClient::HTable

2015-05-06 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-13580:

Attachment: HBASE-13580-v4.patch

Here is the patch fixing some new issues regarding API changes. I have tested the 
script on the master branch in a distributed cluster, on a build created about one 
hour ago. All worked as expected.

 region_mover.rb broken with TypeError: no public constructors for 
 Java::OrgApacheHadoopHbaseClient::HTable
 --

 Key: HBASE-13580
 URL: https://issues.apache.org/jira/browse/HBASE-13580
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0
 Environment: x86_64 GNU/Linux
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Attachments: HBASE-13580-v2.patch, HBASE-13580-v3.patch, 
 HBASE-13580-v4.patch, HBASE-13580.patch


 I was testing region_mover.rb on the master branch in a distributed cluster and 
 hit this error. I have fixed this by using Connection#getTable instead of 
 HTable, but it looks like this script needs some additional work:
 1. Remove the master server from the region move targets list
 2. The --exclude=FILE option is not working for me 
 I will try to get this script into a functional state if there are no objections.
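 For reference, the Java-side pattern the script has to go through now looks roughly like this (sketch only):
{code}
// Illustrative sketch: HTable can no longer be constructed directly from a
// Configuration; tables come from a Connection instead.
Connection connection = ConnectionFactory.createConnection(config);
Table table = connection.getTable(TableName.valueOf("hbase:meta"));
try {
  // ... read region rows / move regions as the script did before ...
} finally {
  table.close();
  connection.close();
}
{code}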
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530016#comment-14530016
 ] 

Hadoop QA commented on HBASE-13628:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730698/HBASE-13628.patch
  against master branch at commit 2e132db85c49373b4086f4e4f7b39dcf2972f24f.
  ATTACHMENT ID: 12730698

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1897 checkstyle errors (more than the master's current 1896 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13953//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13953//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13953//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13953//console

This message is automatically generated.

 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13579:
---
Affects Version/s: (was: 2.0.0)
   1.0.0
   1.0.1
Fix Version/s: 1.0.0
   1.0.1
   0.98.13
   1.1.0
   2.0.0

Pushed to branch-1 +

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.0.0, 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_1.patch, 
 HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we are always calling the 
 isCellTTLExpired() for every cell and internally it is parsing the keyLength, 
 valueLength() to get the tagsLength after which we decide whether Cell level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the storescanner 
 know that there are no tags to read.  Note that, for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack had already raised to 
 avoid tag length while flushing (for the NO-TAG) case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530012#comment-14530012
 ] 

ramkrishna.s.vasudevan commented on HBASE-13579:


bq. In reader V2 we don't have Tags. Blindly return NoTagsKeyValue then.
Ya, this is the ideal one.
bq. Why this is added as protected? I can not see this is extended.
This is getting extended in V3.
{code}
+  if (currTagsLen > 0) {
+    return formKeyValue();
+  } else {
{code}
Now the change would be to use formNoTagsKeyValue() in the else branch.
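i.e. something along these lines (a sketch following the names in this discussion, not necessarily the final patch):
{code}
// Illustrative sketch: pick the cheaper cell type when the current cell
// carries no tags.
if (currTagsLen > 0) {
  return formKeyValue();          // tags present: full KeyValue with tag handling
} else {
  return formNoTagsKeyValue();    // no tags: NoTagsKeyValue, skips tag parsing entirely
}
{code}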

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-13579_0.98.patch, HBASE-13579_1.patch, 
 HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we are always calling the 
 isCellTTLExpired() for every cell and internally it is parsing the keyLength, 
 valueLength() to get the tagsLength after which we decide whether Cell level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the storescanner 
 know that there are no tags to read.  Note that, for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack had already raised to 
 avoid tag length while flushing (for the NO-TAG) case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13579) Avoid isCellTTLExpired() for NO-TAG cases

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530059#comment-14530059
 ] 

Hadoop QA commented on HBASE-13579:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12730706/HBASE-13579_0.98.patch
  against 0.98 branch at commit 2e132db85c49373b4086f4e4f7b39dcf2972f24f.
  ATTACHMENT ID: 12730706

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
25 warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3832 checkstyle errors (more than the master's current 3830 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestProcessBasedCluster
  org.apache.hadoop.hbase.mapreduce.TestImportExport

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.component.jetty.jettyproducer.JettyHttpProducerConcurrentTest.testNoConcurrentProducers(JettyHttpProducerConcurrentTest.java:47)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13954//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13954//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13954//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13954//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13954//console

This message is automatically generated.

 Avoid isCellTTLExpired() for NO-TAG cases
 -

 Key: HBASE-13579
 URL: https://issues.apache.org/jira/browse/HBASE-13579
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Affects Versions: 1.0.0, 1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.14, 1.0.2, 1.2.0

 Attachments: HBASE-13579_0.98.patch, HBASE-13579_0.98_1.patch, 
 HBASE-13579_1.patch, HBASE-13579_2.patch, HBASE-13579_KVExtension.patch, 
 HBASE-13579_branch-1.patch, HBASE-13579_branch-1_1.patch, 
 HBASE-13579_storelevel.patch


 As observed in this JIRA's performance test, we are always calling the 
 isCellTTLExpired() for every cell and internally it is parsing the keyLength, 
 valueLength() to get the tagsLength after which we decide whether Cell level 
 TTL is present or not.
 This JIRA aims to avoid this check if all the readers of the storescanner 
 know that there are no tags to read.  Note that, for the memstore scanner we 
 will do that in another JIRA, which I suppose Stack had already raised to 
 avoid tag length while flushing (for the NO-TAG) case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13633) draining_servers.rb broken with NoMethodError: undefined method `getServerInfo

2015-05-06 Thread Samir Ahmic (JIRA)
Samir Ahmic created HBASE-13633:
---

 Summary: draining_servers.rb broken with NoMethodError: undefined 
method `getServerInfo
 Key: HBASE-13633
 URL: https://issues.apache.org/jira/browse/HBASE-13633
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Fix For: 2.0.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13368) Hash.java is declared as public Interface - but it should be Private

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530401#comment-14530401
 ] 

Hudson commented on HBASE-13368:


SUCCESS: Integrated in HBase-1.2 #62 (See 
[https://builds.apache.org/job/HBase-1.2/62/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
ad8f1d076f05d0c2a47bdababa1c5c3f0fe9e756)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 Hash.java is declared as public Interface - but it should be Private
 

 Key: HBASE-13368
 URL: https://issues.apache.org/jira/browse/HBASE-13368
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13368.patch, HBASE-13368_1.patch


 Currently Hash.java is marked as public.  But we do not allow the user to 
 configure their own Hash.java impl using an FQCN.  It is currently working as an 
 enum based type.  
 So this class should be a Private interface and not a direct user-facing 
 interface. Thanks to Anoop for confirming this.
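 For context, callers pick the implementation via the built-in constants today, roughly (sketch):
{code}
// Illustrative sketch: the hash implementation is selected by constant, not by
// class name, so nothing user-facing depends on Hash being Public.
Hash hash = Hash.getInstance(Hash.MURMUR_HASH3);
int h = hash.hash(Bytes.toBytes("row-key"), 0);
{code}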



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530399#comment-14530399
 ] 

Hudson commented on HBASE-13632:


SUCCESS: Integrated in HBase-1.2 #62 (See 
[https://builds.apache.org/job/HBase-1.2/62/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
ad8f1d076f05d0c2a47bdababa1c5c3f0fe9e756)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530400#comment-14530400
 ] 

Hudson commented on HBASE-13628:


SUCCESS: Integrated in HBase-1.2 #62 (See 
[https://builds.apache.org/job/HBase-1.2/62/])
HBASE-13628 Use AtomicLong as size in BoundedConcurrentLinkedQueue (zhangduo: 
rev a64b3da63bf7937baa7221e3c513b9be4fbcc702)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedConcurrentLinkedQueue.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedConcurrentLinkedQueue.java


 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530429#comment-14530429
 ] 

Hadoop QA commented on HBASE-13448:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730766/HBASE-13448_V2.patch
  against master branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730766

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13959//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13959//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13959//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13959//console

This message is automatically generated.

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: HBASE-13448.patch, HBASE-13448_V2.patch, gc.png, hits.png


 This can be an extension to KeyValue and can be instantiated and used in the read 
 path.
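 A bare-bones sketch of the idea (field and class names are assumptions; the attached patch is the real reference):
{code}
// Illustrative sketch: compute component lengths once at construction time
// instead of re-deriving them from the backing array on every accessor call.
public class SizeCachedKeyValue extends KeyValue {
  private final short rowLen;
  private final int keyLen;

  public SizeCachedKeyValue(byte[] bytes, int offset, int length) {
    super(bytes, offset, length);
    this.keyLen = super.getKeyLength();
    this.rowLen = super.getRowLength();
  }

  @Override
  public short getRowLength() {
    return rowLen;   // cached, no Bytes.toShort() on each call
  }

  @Override
  public int getKeyLength() {
    return keyLen;   // cached, no Bytes.toInt() on each call
  }
}
{code}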



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530335#comment-14530335
 ] 

Hudson commented on HBASE-13628:


FAILURE: Integrated in HBase-0.98 #979 (See 
[https://builds.apache.org/job/HBase-0.98/979/])
HBASE-13628 Use AtomicLong as size in BoundedConcurrentLinkedQueue (zhangduo: 
rev d73b88f7f356e636ed257f09d9fd5bd835195bce)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedConcurrentLinkedQueue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedConcurrentLinkedQueue.java


 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13368) Hash.java is declared as public Interface - but it should be Private

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530360#comment-14530360
 ] 

Hudson commented on HBASE-13368:


SUCCESS: Integrated in HBase-1.0 #904 (See 
[https://builds.apache.org/job/HBase-1.0/904/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
a3322f7f26b1d557f6de334a10add639b1a70af8)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 Hash.java is declared as public Interface - but it should be Private
 

 Key: HBASE-13368
 URL: https://issues.apache.org/jira/browse/HBASE-13368
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13368.patch, HBASE-13368_1.patch


 Currently Hash.java is marked as public.  But we do not allow the user to 
 configure their own Hash.java impl using an FQCN.  It is currently working as an 
 enum based type.  
 So this class should be a Private interface and not a direct user-facing 
 interface. Thanks to Anoop for confirming this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530359#comment-14530359
 ] 

Hudson commented on HBASE-13632:


SUCCESS: Integrated in HBase-1.0 #904 (See 
[https://builds.apache.org/job/HBase-1.0/904/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
a3322f7f26b1d557f6de334a10add639b1a70af8)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13633) draining_servers.rb broken with NoMethodError: undefined method `getServerInfo

2015-05-06 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-13633:

Attachment: HBASE-13633.patch

Here is a simple patch.

 draining_servers.rb broken with NoMethodError: undefined method `getServerInfo
 --

 Key: HBASE-13633
 URL: https://issues.apache.org/jira/browse/HBASE-13633
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Fix For: 2.0.0

 Attachments: HBASE-13633.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530405#comment-14530405
 ] 

Hudson commented on HBASE-13632:


SUCCESS: Integrated in HBase-1.1 #469 (See 
[https://builds.apache.org/job/HBase-1.1/469/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
5422fb87ff0fad5f46240fba0d132a5e2fa02ccc)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java


 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13368) Hash.java is declared as public Interface - but it should be Private

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530406#comment-14530406
 ] 

Hudson commented on HBASE-13368:


SUCCESS: Integrated in HBase-1.1 #469 (See 
[https://builds.apache.org/job/HBase-1.1/469/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
5422fb87ff0fad5f46240fba0d132a5e2fa02ccc)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java


 Hash.java is declared as public Interface - but it should be Private
 

 Key: HBASE-13368
 URL: https://issues.apache.org/jira/browse/HBASE-13368
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13368.patch, HBASE-13368_1.patch


 Currently Hash.java is marked as public.  But we do not allow the user to 
 configure their own Hash.java impl using an FQCN.  It is currently working as an 
 enum based type.  
 So this class should be a Private interface and not a direct user-facing 
 interface. Thanks to Anoop for confirming this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530415#comment-14530415
 ] 

Hadoop QA commented on HBASE-13625:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730765/HBASE-13625-v2.patch
  against master branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730765

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13958//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13958//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13958//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13958//console

This message is automatically generated.

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -FileSystem fs = FileSystem.get(job.getConfiguration());
 -Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +FileSystem fs = FileSystem.get(conf);
 +Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - The ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.init(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 

[jira] [Updated] (HBASE-13606) AssignmentManager.assign() is not sync in both path

2015-05-06 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-13606:

Attachment: HBASE-13606-v3.patch

 AssignmentManager.assign() is not sync in both path
 ---

 Key: HBASE-13606
 URL: https://issues.apache.org/jira/browse/HBASE-13606
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: HBASE-13606-v0.patch, HBASE-13606-v1-branch-1.patch, 
 HBASE-13606-v1.patch, HBASE-13606-v2-branch-1.patch, HBASE-13606-v2.patch, 
 HBASE-13606-v3.patch, 
 TEST-org.apache.hadoop.hbase.master.procedure.TestCreateTableProcedure.xml


 From the comment and the expected behavior, AssignmentManager.assign() should 
 be sync:
 {code}
  /** Assigns specified regions round robin, if any.
   * This is a synchronous call and will return once every region has been
  public void assign(List<HRegionInfo> regions)
  {code}
 but the code has two paths, one sync and one async:
 {code}
 if (servers == 1 || (regions < bulkAssignThresholdRegions
     && servers < bulkAssignThresholdServers)) {
   for (HRegionInfo region: plan.getValue()) {
     ...
     invokeAssign(region);  // <-- this is async, threadPool.submit(assign)
     ...
   }
 } else {
   BulkAssigner ba = new GeneralBulkAssigner(...);
   ba.bulkAssign();  // <-- this is sync, calls BulkAssigner.waitUntilDone()
 }
 {code}
 https://builds.apache.org/job/HBase-1.1/452/ TestCreateTableProcedure is 
 flaky because of this async behavior
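 One possible way to make the first path honour that contract (a sketch, not necessarily what the attached patch does):
{code}
// Illustrative sketch: block until every submitted assignment completes so the
// per-region path is as synchronous as the javadoc promises.
List<Future<?>> pending = new ArrayList<Future<?>>();
for (HRegionInfo region : plan.getValue()) {
  pending.add(threadPool.submit(new AssignCallable(this, region)));  // AssignCallable usage assumed
}
for (Future<?> f : pending) {
  f.get();  // waits for completion and surfaces assignment failures
}
{code}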



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530528#comment-14530528
 ] 

Ted Yu commented on HBASE-13625:


+1 on v2.

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -FileSystem fs = FileSystem.get(job.getConfiguration());
 -Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +FileSystem fs = FileSystem.get(conf);
 +Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - The ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.
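 For illustration, a sketch of that direction: resolve the partitions file against a 
 cluster-filesystem directory taken from configuration rather than the local 
 'hadoop.tmp.dir'. The key name "hbase.fs.tmp.dir" and its default below are 
 assumptions for the sketch, not a description of the committed patch:
 {code}
 // Sketch only: resolve the partitions file under an HDFS directory supplied by
 // configuration. The key "hbase.fs.tmp.dir" and its default here are assumptions.
 import java.util.UUID;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class PartitionsPathSketch {
   static Path partitionsPath(Configuration conf) throws Exception {
     String fsTmp = conf.get("hbase.fs.tmp.dir",
         "/user/" + System.getProperty("user.name") + "/hbase-staging");
     FileSystem fs = FileSystem.get(conf);
     // makeQualified keeps the path on the cluster filesystem, so a local
     // Windows path with a drive colon never ends up in a DFS filename.
     return fs.makeQualified(new Path(fsTmp, "partitions_" + UUID.randomUUID()));
   }
 }
 {code}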



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530593#comment-14530593
 ] 

Stephen Yuan Jiang commented on HBASE-13625:


[~adityakishore], if you are referring to the unit test failures, the issue was 
that some specific UTs tried to access the local filesystem (while the code expected 
DFS).  The v2 patch fixed the issue.  If you mean in general that we should 
create the parent directory, the code already does that if the parent does not exist.



 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed the hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -    FileSystem fs = FileSystem.get(job.getConfiguration());
 -    Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +    FileSystem fs = FileSystem.get(conf);
 +    Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - the ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13625:
---
Fix Version/s: 1.1.1
   1.2.0
   2.0.0
 Hadoop Flags: Reviewed

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed the hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -    FileSystem fs = FileSystem.get(job.getConfiguration());
 -    Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +    FileSystem fs = FileSystem.get(conf);
 +    Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - the ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13633) draining_servers.rb broken with NoMethodError: undefined method `getServerInfo

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530629#comment-14530629
 ] 

Hadoop QA commented on HBASE-13633:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730809/HBASE-13633.patch
  against master branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730809

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev-support patch that doesn't require tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestImportExport
  org.apache.hadoop.hbase.util.TestProcessBasedCluster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13961//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13961//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13961//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13961//console

This message is automatically generated.

 draining_servers.rb broken with NoMethodError: undefined method `getServerInfo
 --

 Key: HBASE-13633
 URL: https://issues.apache.org/jira/browse/HBASE-13633
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Fix For: 2.0.0

 Attachments: HBASE-13633.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13634) Unsafe reference equality checks to EMPTY_START_ROW

2015-05-06 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530646#comment-14530646
 ] 

Dave Latham commented on HBASE-13634:
-

Not sure who would be interested, but perhaps [~sershe] would be.

 Unsafe reference equality checks to EMPTY_START_ROW
 ---

 Key: HBASE-13634
 URL: https://issues.apache.org/jira/browse/HBASE-13634
 Project: HBase
  Issue Type: Bug
  Components: Compaction, Scanners
Reporter: Dave Latham

 While looking to see if there was a standard method in the code base for 
 testing for the empty start and end row, I noticed some cases that are using 
 unsafe reference equality checks and thus may have incorrect in boundary 
 cases:
 ScanQueryMatcher.checkPartialDropDeleteRange
 StripeStoreFileManager.findStripeForRow
 It looks like both are intended to support stripe compaction
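 To make the concern concrete, here is a tiny self-contained sketch (not from the code 
 base) of why a reference check against an EMPTY_START_ROW constant is unsafe while a 
 content check is not:
 {code}
 // Sketch only: reference equality vs. content check for an "empty row" marker.
 public class EmptyRowCheckSketch {
   static final byte[] EMPTY_START_ROW = new byte[0];

   public static void main(String[] args) {
     byte[] callerRow = new byte[0];                      // an empty row built by the caller
     System.out.println(callerRow == EMPTY_START_ROW);    // false: different object
     System.out.println(callerRow.length == 0);           // true: safe content check
   }
 }
 {code}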



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13634) Unsafe reference equality checks to EMPTY_START_ROW

2015-05-06 Thread Dave Latham (JIRA)
Dave Latham created HBASE-13634:
---

 Summary: Unsafe reference equality checks to EMPTY_START_ROW
 Key: HBASE-13634
 URL: https://issues.apache.org/jira/browse/HBASE-13634
 Project: HBase
  Issue Type: Bug
  Components: Compaction, Scanners
Reporter: Dave Latham


While looking to see if there was a standard method in the code base for 
testing for the empty start and end row, I noticed some cases that are using 
unsafe reference equality checks and thus may have incorrect in boundary cases:

ScanQueryMatcher.checkPartialDropDeleteRange
StripeStoreFileManager.findStripeForRow

It looks like both are intended to support stripe compaction



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13634) Unsafe reference equality checks to EMPTY_START_ROW

2015-05-06 Thread Dave Latham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Latham updated HBASE-13634:

Description: 
While looking to see if there was a standard method in the code base for 
testing for the empty start and end row, I noticed some cases that are using 
unsafe reference equality checks and thus may have incorrect behavior in 
boundary cases:

ScanQueryMatcher.checkPartialDropDeleteRange
StripeStoreFileManager.findStripeForRow

It looks like both are intended to support stripe compaction

  was:
While looking to see if there was a standard method in the code base for 
testing for the empty start and end row, I noticed some cases that are using 
unsafe reference equality checks and thus may have incorrect in boundary cases:

ScanQueryMatcher.checkPartialDropDeleteRange
StripeStoreFileManager.findStripeForRow

It looks like both are intended to support stripe compaction


 Unsafe reference equality checks to EMPTY_START_ROW
 ---

 Key: HBASE-13634
 URL: https://issues.apache.org/jira/browse/HBASE-13634
 Project: HBase
  Issue Type: Bug
  Components: Compaction, Scanners
Reporter: Dave Latham

 While looking to see if there was a standard method in the code base for 
 testing for the empty start and end row, I noticed some cases that are using 
 unsafe reference equality checks and thus may have incorrect behavior in 
 boundary cases:
 ScanQueryMatcher.checkPartialDropDeleteRange
 StripeStoreFileManager.findStripeForRow
 It looks like both are intended to support stripe compaction



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530628#comment-14530628
 ] 

Ted Yu commented on HBASE-13625:


[~ndimiduk]:
FYI

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed the hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -    FileSystem fs = FileSystem.get(job.getConfiguration());
 -    Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +    FileSystem fs = FileSystem.get(conf);
 +    Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - the ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13632) Backport HBASE-13368 to branch-1 and 0.98

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530643#comment-14530643
 ] 

Hudson commented on HBASE-13632:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #932 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/932/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
38cd559456aa93128743e02a00922f4cf3e95f95)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 Backport HBASE-13368 to branch-1 and 0.98
 -

 Key: HBASE-13632
 URL: https://issues.apache.org/jira/browse/HBASE-13632
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 1.1.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13368_0.98.patch, HBASE-13368_branch-1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13368) Hash.java is declared as public Interface - but it should be Private

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530645#comment-14530645
 ] 

Hudson commented on HBASE-13368:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #932 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/932/])
HBASE-13632 -  Backport HBASE-13368 to branch-1 and 0.98 (Ram) (ramkrishna: rev 
38cd559456aa93128743e02a00922f4cf3e95f95)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 Hash.java is declared as public Interface - but it should be Private
 

 Key: HBASE-13368
 URL: https://issues.apache.org/jira/browse/HBASE-13368
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13368.patch, HBASE-13368_1.patch


 Currently Hash.java is marked as public.  But we are not allowing the user to 
 configure his own Hash.java impl using FQCN.  It is currently working as an 
 enum based type.  
 So this class should be an Private interface and not a direct user facing 
 interface. Thanks to Anoop for confirming on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13628) Use AtomicLong as size in BoundedConcurrentLinkedQueue

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530644#comment-14530644
 ] 

Hudson commented on HBASE-13628:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #932 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/932/])
HBASE-13628 Use AtomicLong as size in BoundedConcurrentLinkedQueue (zhangduo: 
rev d73b88f7f356e636ed257f09d9fd5bd835195bce)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedConcurrentLinkedQueue.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedConcurrentLinkedQueue.java


 Use AtomicLong as size in BoundedConcurrentLinkedQueue
 --

 Key: HBASE-13628
 URL: https://issues.apache.org/jira/browse/HBASE-13628
 Project: HBase
  Issue Type: Bug
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13628.patch


 Remove the high priority findbugs warnings.
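 For context, a minimal generic sketch (not the HBase class itself) of the pattern the 
 title refers to: an AtomicLong size counter next to a ConcurrentLinkedQueue, so 
 concurrent offer/poll can update the size atomically instead of doing the non-atomic 
 increment on a volatile field that findbugs flags:
 {code}
 // Sketch only: bounding a ConcurrentLinkedQueue with an AtomicLong size counter.
 import java.util.concurrent.ConcurrentLinkedQueue;
 import java.util.concurrent.atomic.AtomicLong;

 public class BoundedQueueSketch<T> {
   private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
   private final AtomicLong size = new AtomicLong(0);  // atomic, unlike `volatile long` ++
   private final long maxSize;

   public BoundedQueueSketch(long maxSize) { this.maxSize = maxSize; }

   public boolean offer(T item) {
     if (size.incrementAndGet() > maxSize) {  // reserve a slot first
       size.decrementAndGet();                // over the bound: give the slot back
       return false;
     }
     return queue.offer(item);                // ConcurrentLinkedQueue.offer never fails
   }

   public T poll() {
     T item = queue.poll();
     if (item != null) {
       size.decrementAndGet();
     }
     return item;
   }
 }
 {code}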



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-1989) Admin (et al.) not accurate with Column vs. Column-Family usage

2015-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530657#comment-14530657
 ] 

Anoop Sam John commented on HBASE-1989:
---

Latest patch LGTM

 Admin (et al.) not accurate with Column vs. Column-Family usage
 ---

 Key: HBASE-1989
 URL: https://issues.apache.org/jira/browse/HBASE-1989
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.20.1, 0.90.1
Reporter: Doug Meil
Assignee: Lars Francke
Priority: Minor
 Attachments: HBASE-1989-v1.patch, HBASE-1989.patch, hbase1989.patch


 Consider the classes Admin and HColumnDescriptor.
 HColumnDescriptor is really referring to a column family and not a column 
 (i.e., family:qualifier).
 Likewise, in Admin there is a method called addColumn that takes an 
 HColumnDescriptor instance.
 I labeled this a bug in the sense that it produces conceptual confusion 
 because there is a big difference between a column and column-family in HBase 
 and these terms should be used consistently.  The code works, though.
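 To illustrate the mismatch (method and class names as in the 1.x client API, shown only 
 as an example of the naming, not as a usage recommendation):
 {code}
 // Sketch only: despite the name, this adds a column *family* "cf", not a column "cf:q".
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;

 public class AddColumnNamingSketch {
   static void addFamily(Admin admin) throws Exception {
     HColumnDescriptor family = new HColumnDescriptor("cf");  // describes a column family
     admin.addColumn(TableName.valueOf("mytable"), family);   // "addColumn" takes a family descriptor
   }
 }
 {code}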
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-1989) Admin (et al.) not accurate with Column vs. Column-Family usage

2015-05-06 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-1989:

Status: Patch Available  (was: Open)

 Admin (et al.) not accurate with Column vs. Column-Family usage
 ---

 Key: HBASE-1989
 URL: https://issues.apache.org/jira/browse/HBASE-1989
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.90.1, 0.20.1
Reporter: Doug Meil
Assignee: Lars Francke
Priority: Minor
 Attachments: HBASE-1989-v1.patch, HBASE-1989.patch, hbase1989.patch


 Consider the classes Admin and HColumnDescriptor.
 HColumnDescriptor is really referring to a column family and not a column 
 (i.e., family:qualifier).
 Likewise, in Admin there is a method called addColumn that takes an 
 HColumnDescriptor instance.
 I labeled this a bug in the sense that it produces conceptual confusion 
 because there is a big difference between a column and column-family in HBase 
 and these terms should be used consistently.  The code works, though.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530478#comment-14530478
 ] 

Hadoop QA commented on HBASE-13562:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730788/HBASE-13562-v1.patch
  against master branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730788

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+verifyAllowed(listTablesAction, SUPERUSER, USER_GLOBAL_ALL, 
USER_CREATE, USER_OWNER, TABLE_ADMIN);
+verifyAllowed(getTableDescAction, SUPERUSER, USER_GLOBAL_ALL, USER_CREATE, 
USER_OWNER, TABLE_ADMIN);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13960//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13960//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13960//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13960//console

This message is automatically generated.

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562-v1.patch, HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13633) draining_servers.rb broken with NoMethodError: undefined method `getServerInfo

2015-05-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530536#comment-14530536
 ] 

Ted Yu commented on HBASE-13633:


+1

 draining_servers.rb broken with NoMethodError: undefined method `getServerInfo
 --

 Key: HBASE-13633
 URL: https://issues.apache.org/jira/browse/HBASE-13633
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Fix For: 2.0.0

 Attachments: HBASE-13633.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13633) draining_servers.rb broken with NoMethodError: undefined method `getServerInfo

2015-05-06 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-13633:

Status: Patch Available  (was: Open)

 draining_servers.rb broken with NoMethodError: undefined method `getServerInfo
 --

 Key: HBASE-13633
 URL: https://issues.apache.org/jira/browse/HBASE-13633
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0
Reporter: Samir Ahmic
Assignee: Samir Ahmic
 Fix For: 2.0.0

 Attachments: HBASE-13633.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-1989) Admin (et al.) not accurate with Column vs. Column-Family usage

2015-05-06 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-1989:

Attachment: HBASE-1989-v1.patch

Thanks for taking a look and the comments.

I've attached v1 of the patch that should fix the Checkstyle warning and 
addresses your comments. I've renamed all parameters I could find in those two 
classes to {{columnFamily}} and I've removed those extra methods I added.

 Admin (et al.) not accurate with Column vs. Column-Family usage
 ---

 Key: HBASE-1989
 URL: https://issues.apache.org/jira/browse/HBASE-1989
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.20.1, 0.90.1
Reporter: Doug Meil
Assignee: Lars Francke
Priority: Minor
 Attachments: HBASE-1989-v1.patch, HBASE-1989.patch, hbase1989.patch


 Consider the classes Admin and HColumnDescriptor.
 HColumnDescriptor is really referring to a column family and not a column 
 (i.e., family:qualifier).
 Likewise, in Admin there is a method called addColumn that takes an 
 HColumnDescriptor instance.
 I labeled this a bug in the sense that it produces conceptual confusion 
 because there is a big difference between a column and column-family in HBase 
 and these terms should be used consistently.  The code works, though.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-1989) Admin (et al.) not accurate with Column vs. Column-Family usage

2015-05-06 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-1989:

Status: Open  (was: Patch Available)

 Admin (et al.) not accurate with Column vs. Column-Family usage
 ---

 Key: HBASE-1989
 URL: https://issues.apache.org/jira/browse/HBASE-1989
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.90.1, 0.20.1
Reporter: Doug Meil
Assignee: Lars Francke
Priority: Minor
 Attachments: HBASE-1989-v1.patch, HBASE-1989.patch, hbase1989.patch


 Consider the classes Admin and HColumnDescriptor.
 HColumnDescriptor is really referring to a column family and not a column 
 (i.e., family:qualifier).
 Likewise, in Admin there is a method called addColumn that takes an 
 HColumnDescriptor instance.
 I labeled this a bug in the sense that it produces conceptual confusion 
 because there is a big difference between a column and column-family in HBase 
 and these terms should be used consistently.  The code works, though.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13634) Unsafe reference equality checks to EMPTY_START_ROW

2015-05-06 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-13634:
-
Attachment: HBASE-13634.patch

I've done a quick patch that fixes all unsafe reference checks with Arrays I 
could find. I have to admit that I did not spend much time trying to understand 
the code to see if any of them might have been intentional.

Let's see what Jenkins has to say about this...

 Unsafe reference equality checks to EMPTY_START_ROW
 ---

 Key: HBASE-13634
 URL: https://issues.apache.org/jira/browse/HBASE-13634
 Project: HBase
  Issue Type: Bug
  Components: Compaction, Scanners
Reporter: Dave Latham
 Attachments: HBASE-13634.patch


 While looking to see if there was a standard method in the code base for 
 testing for the empty start and end row, I noticed some cases that are using 
 unsafe reference equality checks and thus may have incorrect behavior in 
 boundary cases:
 ScanQueryMatcher.checkPartialDropDeleteRange
 StripeStoreFileManager.findStripeForRow
 It looks like both are intended to support stripe compaction



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13634) Unsafe reference equality checks to EMPTY_START_ROW

2015-05-06 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HBASE-13634:
-
Status: Patch Available  (was: Open)

 Unsafe reference equality checks to EMPTY_START_ROW
 ---

 Key: HBASE-13634
 URL: https://issues.apache.org/jira/browse/HBASE-13634
 Project: HBase
  Issue Type: Bug
  Components: Compaction, Scanners
Reporter: Dave Latham
 Attachments: HBASE-13634.patch


 While looking to see if there was a standard method in the code base for 
 testing for the empty start and end row, I noticed some cases that are using 
 unsafe reference equality checks and thus may have incorrect behavior in 
 boundary cases:
 ScanQueryMatcher.checkPartialDropDeleteRange
 StripeStoreFileManager.findStripeForRow
 It looks like both are intended to support stripe compaction



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13606) AssignmentManager.assign() is not sync in both path

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14530822#comment-14530822
 ] 

Hadoop QA commented on HBASE-13606:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12730826/HBASE-13606-v3.patch
  against master branch at commit 652929c0ff8c8cec1e86ded834f3e770422b2ace.
  ATTACHMENT ID: 12730826

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13963//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13963//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13963//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13963//console

This message is automatically generated.

 AssignmentManager.assign() is not sync in both path
 ---

 Key: HBASE-13606
 URL: https://issues.apache.org/jira/browse/HBASE-13606
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: HBASE-13606-v0.patch, HBASE-13606-v1-branch-1.patch, 
 HBASE-13606-v1.patch, HBASE-13606-v2-branch-1.patch, HBASE-13606-v2.patch, 
 HBASE-13606-v3.patch, 
 TEST-org.apache.hadoop.hbase.master.procedure.TestCreateTableProcedure.xml


 from the comment and the expected behavior AssignmentManager.assign() should 
 be sync
 {code}
 /** Assigns specified regions round robin, if any.
  * This is a synchronous call and will return once every region has been
 public void assign(List<HRegionInfo> regions)
 {code}
 but the code has two paths, one sync and one async
 {code}
 if (servers == 1 || (regions < bulkAssignThresholdRegions
     && servers < bulkAssignThresholdServers)) {
   for (HRegionInfo region: plan.getValue()) {
     ...
     invokeAssign(region);  // <-- this is async, threadPool.submit(assign)
     ...
   }
 } else {
   BulkAssigner ba = new GeneralBulkAssigner(...);
   ba.bulkAssign();  // <-- this is sync, calls BulkAssigner.waitUntilDone()
 }
 {code}
 https://builds.apache.org/job/HBase-1.1/452/ TestCreateTableProcedure is 
 flaky because of this async behavior



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13625) Use HDFS for HFileOutputFormat2 partitioner's path

2015-05-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13625:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Use HDFS for HFileOutputFormat2 partitioner's path
 --

 Key: HBASE-13625
 URL: https://issues.apache.org/jira/browse/HBASE-13625
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13625-v2.patch, HBASE-13625.patch


 HBASE-13010 changed the hard-coded '/tmp' in HFileOutputFormat2 partitioner's 
 path to 'hadoop.tmp.dir'.  This breaks unit tests on Windows.
 {code}
 static void configurePartitioner(Job job, List<ImmutableBytesWritable> splitPoints)
   ...
   // create the partitions file
 -    FileSystem fs = FileSystem.get(job.getConfiguration());
 -    Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
 +    FileSystem fs = FileSystem.get(conf);
 +    Path partitionsPath = new Path(conf.get("hadoop.tmp.dir"), "partitions_" + UUID.randomUUID());
 {code}
 Here is the exception from one of the UTs when running against Windows (from 
 branch-1.1) - the ':' is an invalid character in a Windows file path:
 {code}
 java.lang.IllegalArgumentException: Pathname 
 /C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  from 
 C:/hbase-server/target/test-data/d25e2228-8959-43ee-b413-4fa69cdb8032/hadoop_tmp/partitions_fb96c0a0-41e6-4964-a391-738cb761ee3e
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
   at 
 org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1074)
   at 
 org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1374)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:297)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:593)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
   at 
 org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
   at 
 org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:539)
   at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:720)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:313)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.testBulkOutputWithoutAnExistingTable(TestImportTsv.java:168)
 {code}
 The proposed fix is to use a config to point to an HDFS directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13562) Expand AC testing coverage to include all combinations of scope and permissions.

2015-05-06 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531510#comment-14531510
 ] 

Srikanth Srungarapu commented on HBASE-13562:
-

Do you mind uploading the patch to reviewboard?

 Expand AC testing coverage to include all combinations of scope and 
 permissions.
 

 Key: HBASE-13562
 URL: https://issues.apache.org/jira/browse/HBASE-13562
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Ashish Singhi
 Attachments: HBASE-13562-v1.patch, HBASE-13562.patch, sample.patch


 As of now, the tests in TestAccessController and TestAccessController2 
 don't cover all the combinations of Scope and Permissions. Ideally, we 
 should have testing coverage for the entire [ACL 
 matrix|https://hbase.apache.org/book/appendix_acl_matrix.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13468) hbase.zookeeper.quorum ipv6 address

2015-05-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531527#comment-14531527
 ] 

Enis Soztutar commented on HBASE-13468:
---

bq. I am trying to find a way to checkout and fix this issue. I didn't find a 
clear way to contribute from here
You can read through the section in the book: 
https://hbase.apache.org/book.html#submitting.patches. In short, creating a 
patch and attaching it here is the easiest way. 

 hbase.zookeeper.quorum ipv6 address
 ---

 Key: HBASE-13468
 URL: https://issues.apache.org/jira/browse/HBASE-13468
 Project: HBase
  Issue Type: Bug
Reporter: Mingtao Zhang

 I put an IPv6 address in hbase.zookeeper.quorum; by the time this string reached 
 the zookeeper code, the address was mangled, i.e. only '[1234' was left. 
 I started using pseudo-distributed mode with embedded zk = true.
 I downloaded 1.0.0, so I am not sure which affected version should be listed here.
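 The '[1234' symptom looks like a bracketed IPv6 quorum entry being split on the first 
 ':'. A small standalone sketch (not the HBase/ZooKeeper parsing code) of the difference 
 between a naive split and a bracket-aware one:
 {code}
 // Sketch only: why splitting "host:port" on the first ':' mangles bracketed IPv6 addresses.
 public class QuorumSplitSketch {
   public static void main(String[] args) {
     String entry = "[1234:db8::1]:2181";

     // Naive split on the first ':' leaves only "[1234", as reported above.
     String naiveHost = entry.substring(0, entry.indexOf(':'));
     System.out.println(naiveHost);                // [1234

     // Bracket-aware split: everything up to ']' is the host, the rest is the port.
     int close = entry.indexOf(']');
     String host = entry.substring(1, close);      // 1234:db8::1
     String port = entry.substring(close + 2);     // 2181
     System.out.println(host + " port " + port);
   }
 }
 {code}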



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13638) Put copy constructor is shallow

2015-05-06 Thread Dave Latham (JIRA)
Dave Latham created HBASE-13638:
---

 Summary: Put copy constructor is shallow
 Key: HBASE-13638
 URL: https://issues.apache.org/jira/browse/HBASE-13638
 Project: HBase
  Issue Type: Bug
Reporter: Dave Latham


The Put copy constructor ends up with Puts sharing Lists of Cells for the same 
family.  Adding a Cell to the copied Put affects the original Put also.
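A self-contained sketch of the aliasing problem described above, using a plain map of 
lists rather than the actual Put internals: a shallow copy shares the per-family lists, 
so mutating the copy mutates the original, while copying each list avoids it.
{code}
// Sketch only: shallow vs. deep copy of a map of lists, the shape of the bug described above.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ShallowCopySketch {
  public static void main(String[] args) {
    Map<String, List<String>> original = new HashMap<>();
    List<String> cells = new ArrayList<>();
    cells.add("cell-1");
    original.put("cf", cells);

    // Shallow copy: the new map shares the same List instances.
    Map<String, List<String>> shallow = new HashMap<>(original);
    shallow.get("cf").add("cell-2");
    System.out.println(original.get("cf"));   // [cell-1, cell-2]  the original was mutated

    // Deep copy: copy each family's list as well.
    Map<String, List<String>> deep = new HashMap<>();
    original.forEach((family, familyCells) -> deep.put(family, new ArrayList<>(familyCells)));
    deep.get("cf").add("cell-3");
    System.out.println(original.get("cf"));   // still [cell-1, cell-2]
  }
}
{code}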



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13576) HBCK enhancement: Failure in checking one region should not fail the entire HBCK operation.

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531624#comment-14531624
 ] 

Hudson commented on HBASE-13576:


SUCCESS: Integrated in HBase-TRUNK #6462 (See 
[https://builds.apache.org/job/HBase-TRUNK/6462/])
HBASE-13576 HBCK enhancement: Failure in checking one region should not fail 
the entire HBCK operation. (Stephen Yuan Jiang) (enis: rev 
11b76732c0ec80a2cccbe7937440bd107e577c8b)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 HBCK enhancement: Failure in checking one region should not fail the entire 
 HBCK operation.
 ---

 Key: HBASE-13576
 URL: https://issues.apache.org/jira/browse/HBASE-13576
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: HBASE-13576.v1-master.patch, 
 HBASE-13576.v2-master.patch, HBASE-13576.v3-master.patch


 HBaseFsck#checkRegionConsistency() checks region consistency and repairs the 
 corruption if requested.  However, this function can run into exceptions.  
 For example, in one aspect of region repair it calls 
 HBaseFsckRepair#waitUntilAssigned(); if a region is in transition for over 
 120 seconds (the default value of the hbase.hbck.assign.timeout configuration), 
 an IOException is thrown.
 The problem is that one exception in checkRegionConsistency() would kill the 
 entire hbck operation, because the exception propagates up.
 The proposal is that if the region is not the META region (or a system table 
 region, if we prefer), we can skip the region when 
 HBaseFsck#checkRegionConsistency() fails.  We could print out the skipped regions 
 in a summary section so that users know to either re-run or investigate the 
 potential issue for those regions. 
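 A minimal sketch of that skip-and-summarize behavior (assumed shape only, not the 
 attached patch): catch the per-region failure, record it for the summary, and keep 
 going unless the failing region belongs to meta:
 {code}
 // Sketch only: keep going when a single region check fails, but never skip meta.
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;

 public class SkipFailedRegionSketch {
   interface RegionCheck { void check(String region) throws IOException; }

   static List<String> checkAll(List<String> regions, RegionCheck checker) throws IOException {
     List<String> skipped = new ArrayList<>();
     for (String region : regions) {
       try {
         checker.check(region);
       } catch (IOException e) {
         if (region.startsWith("hbase:meta")) {
           throw e;                                    // a meta failure should still fail the run
         }
         skipped.add(region + ": " + e.getMessage());  // remember it for the summary
       }
     }
     return skipped;                                   // printed in the summary section afterwards
   }
 }
 {code}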



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13576) HBCK enhancement: Failure in checking one region should not fail the entire HBCK operation.

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14531597#comment-14531597
 ] 

Hudson commented on HBASE-13576:


SUCCESS: Integrated in HBase-1.1 #472 (See 
[https://builds.apache.org/job/HBase-1.1/472/])
HBASE-13576 HBCK enhancement: Failure in checking one region should not fail 
the entire HBCK operation. (Stephen Yuan Jiang) (enis: rev 
31ff3e75860bff35b638a7e5b88ea2d959212063)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


 HBCK enhancement: Failure in checking one region should not fail the entire 
 HBCK operation.
 ---

 Key: HBASE-13576
 URL: https://issues.apache.org/jira/browse/HBASE-13576
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: HBASE-13576.v1-master.patch, 
 HBASE-13576.v2-master.patch, HBASE-13576.v3-master.patch


 HBaseFsck#checkRegionConsistency() checks region consistency and repairs the 
 corruption if requested.  However, this function can run into exceptions.  
 For example, in one aspect of region repair it calls 
 HBaseFsckRepair#waitUntilAssigned(); if a region is in transition for over 
 120 seconds (the default value of the hbase.hbck.assign.timeout configuration), 
 an IOException is thrown.
 The problem is that one exception in checkRegionConsistency() would kill the 
 entire hbck operation, because the exception propagates up.
 The proposal is that if the region is not the META region (or a system table 
 region, if we prefer), we can skip the region when 
 HBaseFsck#checkRegionConsistency() fails.  We could print out the skipped regions 
 in a summary section so that users know to either re-run or investigate the 
 potential issue for those regions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13616) Move ServerShutdownHandler to Pv2

2015-05-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13616:
--
Attachment: 13616.wip.txt

WIP

Redoes ServerShutdownHandler as ServerCrashProcedure.  SCP does SSH and 
MetaSSH. Also does the DLR LogReplayHandler (This patch removes MetaSSH, SSH, 
and LogReplayHandler).

TODO: Add some facility to Procedure so there is only one Procedure per crashed 
server. Also add priority so the server that was carrying meta gets serviced first. 
I think I also want all crashed servers to be processed in lock-step, rather than 
running each to its finish, so that a procedure that is late in its processing does 
not get stuck waiting on a recently crashed server to do its first steps of assigning 
regions.
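
Not part of the WIP patch, just a small generic sketch of the "only one Procedure per 
crashed server" bookkeeping from the TODO, using a concurrent set to deduplicate 
submissions:
{code}
// Sketch only: deduplicate crash-handling work so each crashed server is processed once.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class OneProcedurePerServerSketch {
  private final Set<String> serversInProgress = ConcurrentHashMap.newKeySet();

  /** Returns true if a new crash procedure should be scheduled for this server. */
  boolean trySchedule(String serverName) {
    return serversInProgress.add(serverName);  // add() is atomic; false means already queued
  }

  void onProcedureFinished(String serverName) {
    serversInProgress.remove(serverName);      // allow a later crash of the same server
  }
}
{code}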

 Move ServerShutdownHandler to Pv2
 -

 Key: HBASE-13616
 URL: https://issues.apache.org/jira/browse/HBASE-13616
 Project: HBase
  Issue Type: Sub-task
  Components: proc-v2
Affects Versions: 1.1.0
Reporter: stack
Assignee: stack
 Attachments: 13616.wip.txt


 Move ServerShutdownHandler to run on ProcedureV2. Need this for DLR to work. 
 See HBASE-13567.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

