[jira] [Commented] (HBASE-13199) Some small improvements on canary tool

2015-03-18 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366820#comment-14366820
 ] 

Liu Shaohui commented on HBASE-13199:
-

OK. Thanks

 Some small improvements on canary tool
 --

 Key: HBASE-13199
 URL: https://issues.apache.org/jira/browse/HBASE-13199
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Liu Shaohui
Assignee: Liu Shaohui
 Fix For: 2.0.0

 Attachments: HBASE-13199-v1.diff, HBASE-13199-v2.diff, 
 HBASE-13199-v3.diff, HBASE-13199-v4.diff


 Improvements
 - Make the sniffing of regions and regionservers parallel, using a thread 
 pool, to support large clusters with 1+ regions and 500+ regionservers.
 - Set cacheBlocks to false in get and scan to avoid affecting the block cache.
 - Add FirstKeyOnlyFilter to get and scan to avoid reading and transferring 
 too much data from HBase. There may be many columns under a column family 
 in a flat-wide table.
 - Select the region randomly when sniffing a regionserver.
 - Make the sink class of the canary configurable.
 [~stack]
 Suggestions are welcomed. Thanks~
 Another question: why check each column family with separate requests when 
 sniffing a region? Can we just check a column family of a region?
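A rough sketch of the parallel-sniffing idea above with a plain JDK thread pool (a simplification; sniffRegion here is a hypothetical stand-in for the canary's actual Get/Scan probe and latency recording):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSniffSketch {
  // Hypothetical stand-in for probing one region; the real canary
  // issues a small Get/Scan against the region and records latency.
  static boolean sniffRegion(String region) {
    return !region.isEmpty();
  }

  // Submit one sniff task per region and wait for all of them,
  // instead of probing regions one by one.
  static int sniffAll(List<String> regions, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<Boolean>> futures = new ArrayList<>();
      for (String region : regions) {
        futures.add(pool.submit(() -> sniffRegion(region)));
      }
      int ok = 0;
      for (Future<Boolean> f : futures) {
        if (f.get()) ok++;
      }
      return ok;
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(sniffAll(List.of("r1", "r2", "r3"), 2)); // prints 3
  }
}
```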



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366897#comment-14366897
 ] 

Matteo Bertozzi commented on HBASE-13241:
-

I see the assert on the scan result in only one place
{code}
+ Scan s1 = new Scan();
+ try (ResultScanner scanner1 = table.getScanner(s1);) {
+   Result[] next1 = scanner1.next(5);
+   assertTrue(next1.length == 3);
+ }
{code}

all the other checks seem to just verify whether the AccessDeniedException 
was received or not, so verifyAllowed()/verifyDenied() should be enough. If 
not, why? What is the difference from the other scanAction we already have?
{code}
+ try (ResultScanner scanner1 = table.getScanner(s1);) {
+   fail("Access should be denied as the user " + USER1_TESTGROUP_QUALIFIER
+     + " read privilege has been revoked on column family qualifier "
+     + Bytes.toString(TEST_FAMILY) + ':' + Bytes.toString(Q1));
+ } catch (AccessDeniedException ignore) {
+ }
{code}
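For illustration, the verifyDenied() pattern being suggested boils down to something like this (a simplified stand-in, not HBase's actual SecureTestUtil implementation): run the action and treat only the expected exception type as success:

```java
import java.util.concurrent.Callable;

public class VerifyDeniedSketch {
  // Returns true iff the action throws the expected exception type.
  static boolean verifyDenied(Callable<?> action, Class<? extends Exception> expected) {
    try {
      action.call();
      return false; // the action succeeded, so access was not denied
    } catch (Exception e) {
      return expected.isInstance(e);
    }
  }

  public static void main(String[] args) {
    boolean denied = verifyDenied(() -> { throw new SecurityException("denied"); },
                                  SecurityException.class);
    System.out.println(denied); // prints true
  }
}
```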



 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239





[jira] [Updated] (HBASE-11425) Cell/DBB end-to-end on the read-path

2015-03-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11425:
---
Attachment: HBASE-11425-E2E-NotComplete.patch

Attaching an E2E patch for reference. We are still doing some more cleanups, 
and the patch still contains some code duplication that we are removing.

 Cell/DBB end-to-end on the read-path
 

 Key: HBASE-11425
 URL: https://issues.apache.org/jira/browse/HBASE-11425
 Project: HBase
  Issue Type: Umbrella
  Components: regionserver, Scanners
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11425-E2E-NotComplete.patch, Offheap reads in 
 HBase using BBs_V2.pdf, Offheap reads in HBase using BBs_final.pdf


 Umbrella jira to make sure we can have blocks cached in an offheap-backed 
 cache. In the entire read path, we can refer to this offheap buffer and 
 avoid onheap copying.
 The high level items I can identify as of now are
 1. Avoid the array() call on BB in the read path. (This is there in many 
 classes. We can handle them class by class)
 2. Support Buffer based getter APIs in Cell. In the read path we will create 
 a new Cell backed by a BB. This will be needed in CellComparator, Filter 
 (like SCVF), CPs etc.
 3. Avoid KeyValue.ensureKeyValue() calls in the read path - this makes a 
 byte copy.
 4. Remove all CP hooks (which are already deprecated) which deal with KVs 
 (in the read path).
 Will add subtasks under this.
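Item 1 can be illustrated with plain JDK code: a direct (offheap) ByteBuffer has no backing array, so array() cannot be used and positional reads are needed instead (a minimal sketch, not the actual HBase read path):

```java
import java.nio.ByteBuffer;

public class OffheapReadSketch {
  // Reads a value without calling array(); works for both heap and
  // direct (offheap) buffers.
  static byte readAt(ByteBuffer buf, int index) {
    return buf.get(index); // positional read, no onheap copy
  }

  public static void main(String[] args) {
    ByteBuffer direct = ByteBuffer.allocateDirect(4);
    direct.put(0, (byte) 42);
    System.out.println(direct.hasArray()); // false: array() would throw
    System.out.println(readAt(direct, 0)); // prints 42
  }
}
```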





[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366957#comment-14366957
 ] 

Ashish Singhi commented on HBASE-13241:
---

Thanks [~mbertozzi] for taking a look.
bq. I see only in one place the assert on the scan result
We have it at three different places with different expected values.
1.
{code}
+ Scan s1 = new Scan();
+ try (ResultScanner scanner1 = table.getScanner(s1);) {
+   Result[] next1 = scanner1.next(5);
+   assertTrue(next1.length == 3);
+ }
{code}
2.
{code}
+ Scan s1 = new Scan();
+ try (ResultScanner scanner1 = table.getScanner(s1);) {
+   Result[] next1 = scanner1.next(5);
+   assertTrue(next1.length == 2);
+ }
{code}
3.
{code}
+ Scan s1 = new Scan();
+ try (ResultScanner scanner1 = table.getScanner(s1);) {
+   Result[] next1 = scanner1.next(5);
+   assertTrue(next1.length == 1);
+ }
{code}

bq. all the other checks seem to just verify if the AccessDeniedException was 
received or not, so verifyAllowed()/verifyDenied() should be enough. if not why?
I tried that way when [~srikanth235] suggested it offline, but here at each 
level we have different results.
For example, when we grant a group table level access, a user from it can 
perform a scan at the family level as well, but that is not the case when we 
grant the group access at the qualifier level. So I would have to create many 
actions to have it all in one test, which I did somewhat in my first patch, 
but [~busbey] had another thought and I felt it was reasonable, so I broke 
this test up by level.
Also, verifyAllowed() and verifyDenied() internally use the user.runAs API.

bq. what is the difference with the other scanAction we have already?
If you are pointing at scanAction in TestAccessController#testRead, then here 
we are not asserting the scan result; we are checking whether users with READ 
access are able to scan the table or not.

 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239





[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366796#comment-14366796
 ] 

Hadoop QA commented on HBASE-13241:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705281/HBASE-13241-v5.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705281

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.


{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13292//console

This message is automatically generated.

 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239





[jira] [Updated] (HBASE-11425) Cell/DBB end-to-end on the read-path

2015-03-18 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11425:
---
Attachment: Offheap reads in HBase using BBs_V2.pdf

 Cell/DBB end-to-end on the read-path
 

 Key: HBASE-11425
 URL: https://issues.apache.org/jira/browse/HBASE-11425
 Project: HBase
  Issue Type: Umbrella
  Components: regionserver, Scanners
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: Offheap reads in HBase using BBs_V2.pdf, Offheap reads 
 in HBase using BBs_final.pdf


 Umbrella jira to make sure we can have blocks cached in an offheap-backed 
 cache. In the entire read path, we can refer to this offheap buffer and 
 avoid onheap copying.
 The high level items I can identify as of now are
 1. Avoid the array() call on BB in the read path. (This is there in many 
 classes. We can handle them class by class)
 2. Support Buffer based getter APIs in Cell. In the read path we will create 
 a new Cell backed by a BB. This will be needed in CellComparator, Filter 
 (like SCVF), CPs etc.
 3. Avoid KeyValue.ensureKeyValue() calls in the read path - this makes a 
 byte copy.
 4. Remove all CP hooks (which are already deprecated) which deal with KVs 
 (in the read path).
 Will add subtasks under this.





[jira] [Commented] (HBASE-12636) Avoid too many write operations on zookeeper in replication

2015-03-18 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366894#comment-14366894
 ] 

Liu Shaohui commented on HBASE-12636:
-

[~lhofhansl] [~stack]
Any suggestions about this patch?
With this patch, the write operations to zookeeper from replication in our 
cluster decreased from about 5 thousand per second to several hundred.

 Avoid too many write operations on zookeeper in replication
 ---

 Key: HBASE-12636
 URL: https://issues.apache.org/jira/browse/HBASE-12636
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Assignee: Liu Shaohui
  Labels: replication
 Fix For: 1.1.0

 Attachments: HBASE-12635-v2.diff, HBASE-12636-v1.diff


 In our production cluster, we found there are over 1k write operations per 
 second on zookeeper from hbase replication. The reason is that the 
 replication source writes the log position to zookeeper for every edit 
 shipping. If the WAL currently being replicated is the WAL that the 
 regionserver is writing to, each shipped batch will be very small but the 
 frequency very high, which causes many write operations on zookeeper.
 A simple solution is to write the log position to zookeeper only when the 
 position diff or the number of skipped edits is larger than a threshold, 
 rather than on every edit shipping.
 Suggestions are welcomed, thx~
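The proposed threshold-based throttling can be sketched as follows (a simplification; persistToZk is a hypothetical stand-in for the actual zookeeper write in the replication source):

```java
public class PositionFlushSketch {
  private final long positionThreshold;
  private long lastPersistedPosition = 0;
  int zkWrites = 0; // counts writes, for illustration only

  PositionFlushSketch(long positionThreshold) {
    this.positionThreshold = positionThreshold;
  }

  // Hypothetical stand-in for the actual zookeeper write.
  private void persistToZk(long position) {
    zkWrites++;
    lastPersistedPosition = position;
  }

  // Called after every shipped batch; only writes to zookeeper when
  // the position has advanced past the threshold since the last write.
  void onShipped(long newPosition) {
    if (newPosition - lastPersistedPosition >= positionThreshold) {
      persistToZk(newPosition);
    }
  }

  public static void main(String[] args) {
    PositionFlushSketch s = new PositionFlushSketch(100);
    for (long pos = 1; pos <= 1000; pos++) {
      s.onShipped(pos);
    }
    System.out.println(s.zkWrites); // prints 10 (instead of 1000 writes)
  }
}
```

Note that on a crash, replication would be resumed from the last persisted position, so edits shipped since then may be re-shipped; the threshold trades a bounded amount of duplicate shipping for far fewer zookeeper writes.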





[jira] [Updated] (HBASE-13071) Hbase Streaming Scan Feature

2015-03-18 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13071:
--
Attachment: HBASE-13071_trunk_10.patch

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 99.eshcar.png, HBASE-13071_98_1.patch, 
 HBASE-13071_trunk_1.patch, HBASE-13071_trunk_10.patch, 
 HBASE-13071_trunk_10.patch, HBASE-13071_trunk_2.patch, 
 HBASE-13071_trunk_3.patch, HBASE-13071_trunk_4.patch, 
 HBASE-13071_trunk_5.patch, HBASE-13071_trunk_6.patch, 
 HBASE-13071_trunk_7.patch, HBASE-13071_trunk_8.patch, 
 HBASE-13071_trunk_9.patch, HBaseStreamingScanDesign.pdf, 
 HbaseStreamingScanEvaluation.pdf, 
 HbaseStreamingScanEvaluationwithMultipleClients.pdf, gc.eshcar.png, 
 hits.eshcar.png, network.png


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous manner in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases 
 the overall processing time, and may cause great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application has consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application waits for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.
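The producer-consumer cache described above can be sketched with a plain BlockingQueue (a simplification; in the real patch the producer fetches rows from the regionserver via RPC):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PrefetchScannerSketch {
  static final String POISON = "__done__"; // end-of-scan marker

  // A producer thread keeps the queue filled so the consumer
  // rarely blocks waiting for the next batch.
  static BlockingQueue<String> startPrefetch(int totalRows) {
    BlockingQueue<String> cache = new ArrayBlockingQueue<>(100);
    Thread producer = new Thread(() -> {
      try {
        for (int i = 0; i < totalRows; i++) {
          cache.put("row-" + i); // stand-in for rows fetched via RPC
        }
        cache.put(POISON); // signal end of scan
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    producer.start();
    return cache;
  }

  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<String> cache = startPrefetch(250);
    int count = 0;
    for (String row = cache.take(); !row.equals(POISON); row = cache.take()) {
      count++; // application processes rows while the producer refills
    }
    System.out.println(count); // prints 250
  }
}
```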





[jira] [Updated] (HBASE-13071) Hbase Streaming Scan Feature

2015-03-18 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13071:
--
Attachment: (was: HBASE-13071_trunk_10.patch)

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 99.eshcar.png, HBASE-13071_98_1.patch, 
 HBASE-13071_trunk_1.patch, HBASE-13071_trunk_10.patch, 
 HBASE-13071_trunk_2.patch, HBASE-13071_trunk_3.patch, 
 HBASE-13071_trunk_4.patch, HBASE-13071_trunk_5.patch, 
 HBASE-13071_trunk_6.patch, HBASE-13071_trunk_7.patch, 
 HBASE-13071_trunk_8.patch, HBASE-13071_trunk_9.patch, 
 HBaseStreamingScanDesign.pdf, HbaseStreamingScanEvaluation.pdf, 
 HbaseStreamingScanEvaluationwithMultipleClients.pdf, gc.eshcar.png, 
 hits.eshcar.png, network.png


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous manner in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases 
 the overall processing time, and may cause great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application has consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application waits for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.





[jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367141#comment-14367141
 ] 

Hadoop QA commented on HBASE-13071:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12705332/HBASE-13071_trunk_10.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705332

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.


{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13294//console

This message is automatically generated.

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 99.eshcar.png, HBASE-13071_98_1.patch, 
 HBASE-13071_trunk_1.patch, HBASE-13071_trunk_10.patch, 
 HBASE-13071_trunk_2.patch, HBASE-13071_trunk_3.patch, 
 HBASE-13071_trunk_4.patch, HBASE-13071_trunk_5.patch, 
 HBASE-13071_trunk_6.patch, HBASE-13071_trunk_7.patch, 
 HBASE-13071_trunk_8.patch, HBASE-13071_trunk_9.patch, 
 HBaseStreamingScanDesign.pdf, HbaseStreamingScanEvaluation.pdf, 
 HbaseStreamingScanEvaluationwithMultipleClients.pdf, gc.eshcar.png, 
 hits.eshcar.png, network.png


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous manner in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases 
 the overall processing time, and may cause great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will 

[jira] [Commented] (HBASE-13090) Progress heartbeats for long running scanners

2015-03-18 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367025#comment-14367025
 ] 

Eshcar Hillel commented on HBASE-13090:
---

It could be useful to return a *non* empty result array even when the region 
is not exhausted. For example, if the scanner is async (HBASE-13071), the 
application can start iterating over the results instead of waiting for the 
server to collect the entire batch.

 Progress heartbeats for long running scanners
 -

 Key: HBASE-13090
 URL: https://issues.apache.org/jira/browse/HBASE-13090
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Jonathan Lawlor
 Attachments: HBASE-13090-v1.patch, HBASE-13090-v2.patch, 
 HBASE-13090-v3.patch, HBASE-13090-v3.patch


 It can be necessary to set very long timeouts for clients that issue scans 
 over large regions when all data in the region might be filtered out 
 depending on scan criteria. This is a usability concern because it can be 
 hard to identify what worst case timeout to use until scans are 
 occasionally/intermittently failing in production, depending on variable scan 
 criteria. It would be better if the client-server scan protocol could send 
 back periodic progress heartbeats to clients as long as server scanners are 
 alive and making progress.
 This is related but orthogonal to streaming scan (HBASE-13071). 







[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367274#comment-14367274
 ] 

Matteo Bertozzi commented on HBASE-13241:
-

ok, let's look at it in a different way.
When you look at the other tests written with verifyAllowed()/verifyDenied(), 
it is clear what the behavior is without even looking at the action 
implementation.
When you have everything in a single block like the 
USER1_TESTGROUP_QUALIFIER.runAs(), you have to look at the code and figure 
out what it is doing. And then there is the question: why are we testing for 
denied just for a single user? What about the others?
Yes, it may result in more code, because you have to break things down into 
more actions, but in my opinion it is easier to read and extend. It is also 
easier for someone who wants to add a new test to decide what to do: if every 
test is using verifyAllowed/verifyDenied, I should do that too.

 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239





[jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature

2015-03-18 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367166#comment-14367166
 ] 

Eshcar Hillel commented on HBASE-13071:
---

Hi everyone,

What would be the next thing to do to get this patch in (now that all the 
lights are green ;) )?

Thanks,
Eshcar

 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 99.eshcar.png, HBASE-13071_98_1.patch, 
 HBASE-13071_trunk_1.patch, HBASE-13071_trunk_10.patch, 
 HBASE-13071_trunk_2.patch, HBASE-13071_trunk_3.patch, 
 HBASE-13071_trunk_4.patch, HBASE-13071_trunk_5.patch, 
 HBASE-13071_trunk_6.patch, HBASE-13071_trunk_7.patch, 
 HBASE-13071_trunk_8.patch, HBASE-13071_trunk_9.patch, 
 HBaseStreamingScanDesign.pdf, HbaseStreamingScanEvaluation.pdf, 
 HbaseStreamingScanEvaluationwithMultipleClients.pdf, gc.eshcar.png, 
 hits.eshcar.png, network.png


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous manner in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases 
 the overall processing time, and may cause great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application has consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application waits for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.





[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367291#comment-14367291
 ] 

Srikanth Srungarapu commented on HBASE-13241:
-

Completely agree with Matteo. I had similar concerns when I posted my previous 
comment. Let's do one thing: I'll try to create a sample patch that only 
verifies at the qualifier level (will get feedback from Sean and Matteo too), and 
attach it here. If you like it too, you can build upon it. What say, [~ashish 
singhi]?

 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13090) Progress heartbeats for long running scanners

2015-03-18 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367402#comment-14367402
 ] 

Jonathan Lawlor commented on HBASE-13090:
-

[~eshcar] Actually, that is how it works (sorry, I was not explicitly clear). When 
the time limit is reached the server will return to the client whatever it has 
accumulated thus far in a heartbeat message. What I meant by #2 is that it is 
possible (in the case of aggressive filtering) that when the time limit is 
reached, the server hasn't had a chance to accumulate ANY Results. In such a 
case, the Result array returned to the client would be empty.
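A client that understands heartbeats must therefore treat an empty Result array 
plus a "scanner still alive" indication as "keep waiting", not as end-of-scan. A 
minimal sketch with stand-in types (the Response fields here are hypothetical, 
not the actual protobuf messages):

{code}
import java.util.Iterator;
import java.util.List;

class HeartbeatClient {
    // Stand-in for a scan RPC response: zero Results on a pure heartbeat.
    static final class Response {
        final String[] results;
        final boolean moreToCome;   // server-side scanner still open
        Response(String[] results, boolean moreToCome) {
            this.results = results;
            this.moreToCome = moreToCome;
        }
    }

    /** Drains a scan; empty heartbeat responses do not end the loop. */
    static int drain(Iterator<Response> rpcs) {
        int rows = 0;
        while (rpcs.hasNext()) {
            Response r = rpcs.next();
            rows += r.results.length;   // zero for a pure heartbeat
            if (!r.moreToCome) {
                break;                  // genuine end of scan
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Response> scan = List.of(
            new Response(new String[0], true),         // heartbeat: all filtered
            new Response(new String[] {"row1"}, true),
            new Response(new String[0], false));       // done
        System.out.println(drain(scan.iterator()));    // prints 1
    }
}
{code}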

 Progress heartbeats for long running scanners
 -

 Key: HBASE-13090
 URL: https://issues.apache.org/jira/browse/HBASE-13090
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Jonathan Lawlor
 Attachments: HBASE-13090-v1.patch, HBASE-13090-v2.patch, 
 HBASE-13090-v3.patch, HBASE-13090-v3.patch


 It can be necessary to set very long timeouts for clients that issue scans 
 over large regions when all data in the region might be filtered out 
 depending on scan criteria. This is a usability concern because it can be 
 hard to identify what worst case timeout to use until scans are 
 occasionally/intermittently failing in production, depending on variable scan 
 criteria. It would be better if the client-server scan protocol can send back 
 periodic progress heartbeats to clients as long as server scanners are alive 
 and making progress.
 This is related but orthogonal to streaming scan (HBASE-13071). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13006) Document visibility label support for groups

2015-03-18 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-13006:
-
Attachment: shell-update-only.patch

Attached a patch that updates the shell commands only, with no doc update.
Since we only commit doc updates to master/2.0, this patch is for the other 
versions.

 Document visibility label support for groups
 

 Key: HBASE-13006
 URL: https://issues.apache.org/jira/browse/HBASE-13006
 Project: HBase
  Issue Type: Sub-task
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-13006-v2.patch, HBASE-13006-v3.patch, 
 HBASE-13006.patch, shell-update-only.patch


 This is to document the changes added from HBASE-12745.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13090) Progress heartbeats for long running scanners

2015-03-18 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367423#comment-14367423
 ] 

Jonathan Lawlor commented on HBASE-13090:
-

edit: was not* clear

 Progress heartbeats for long running scanners
 -

 Key: HBASE-13090
 URL: https://issues.apache.org/jira/browse/HBASE-13090
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Jonathan Lawlor
 Attachments: HBASE-13090-v1.patch, HBASE-13090-v2.patch, 
 HBASE-13090-v3.patch, HBASE-13090-v3.patch


 It can be necessary to set very long timeouts for clients that issue scans 
 over large regions when all data in the region might be filtered out 
 depending on scan criteria. This is a usability concern because it can be 
 hard to identify what worst case timeout to use until scans are 
 occasionally/intermittently failing in production, depending on variable scan 
 criteria. It would be better if the client-server scan protocol can send back 
 periodic progress heartbeats to clients as long as server scanners are alive 
 and making progress.
 This is related but orthogonal to streaming scan (HBASE-13071). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367427#comment-14367427
 ] 

Ashish Singhi commented on HBASE-13241:
---

Thanks, Matteo, for the detailed explanation, and Srikanth for the offer. 
If everyone is ok with what Matteo says, I will prepare a patch by tomorrow 
morning IST. 
Thanks again. 

 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11425) Cell/DBB end-to-end on the read-path

2015-03-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367500#comment-14367500
 ] 

ramkrishna.s.vasudevan commented on HBASE-11425:


Not able to add to RB.  The RB tool hangs when we try to add a patch.

 Cell/DBB end-to-end on the read-path
 

 Key: HBASE-11425
 URL: https://issues.apache.org/jira/browse/HBASE-11425
 Project: HBase
  Issue Type: Umbrella
  Components: regionserver, Scanners
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11425-E2E-NotComplete.patch, Offheap reads in 
 HBase using BBs_V2.pdf, Offheap reads in HBase using BBs_final.pdf


 Umbrella jira to make sure we can have blocks cached in offheap backed cache. 
 In the entire read path, we can refer to this offheap buffer and avoid onheap 
 copying.
 The high level items I can identify as of now are
 1. Avoid the array() call on BB in the read path. (This is there in many 
 classes; we can handle them class by class.)
 2. Support Buffer based getter APIs in cell.  In read path we will create a 
 new Cell with backed by BB. Will need in CellComparator, Filter (like SCVF), 
 CPs etc.
 3. Avoid KeyValue.ensureKeyValue() calls in read path - This make byte copy.
 4. Remove all CP hooks (which are already deprecated) which deal with KVs.  
 (In read path)
 Will add subtasks under this.
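 Item 2 above --- buffer-based getters --- can be sketched with plain java.nio 
 types. The class and method names here are illustrative stand-ins, not the 
 eventual Cell API: the point is that a getter can hand out a positioned view 
 into a shared (possibly off-heap) buffer without an array() copy.

 {code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class BufferBackedCell {
    private final ByteBuffer block;  // could be a direct (off-heap) buffer
    private final int rowOffset;
    private final int rowLength;

    BufferBackedCell(ByteBuffer block, int rowOffset, int rowLength) {
        this.block = block;
        this.rowOffset = rowOffset;
        this.rowLength = rowLength;
    }

    /** Buffer-based getter: a positioned view, no byte[] copy. */
    ByteBuffer getRowBuffer() {
        ByteBuffer dup = block.duplicate();   // independent position/limit
        dup.limit(rowOffset + rowLength);
        dup.position(rowOffset);
        return dup.slice();
    }

    /** Copying accessor, used here only to make the view easy to check. */
    String rowAsString() {
        byte[] copy = new byte[rowLength];
        getRowBuffer().get(copy);
        return new String(copy, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer block = ByteBuffer.allocateDirect(16);
        block.put("xxrowkeyxx".getBytes(StandardCharsets.UTF_8));
        BufferBackedCell cell = new BufferBackedCell(block, 2, 6);
        System.out.println(cell.rowAsString()); // prints rowkey
    }
}
 {code}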



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13270) Setter for Result#getStats is #addResults; confusing!

2015-03-18 Thread stack (JIRA)
stack created HBASE-13270:
-

 Summary: Setter for Result#getStats is #addResults; confusing!
 Key: HBASE-13270
 URL: https://issues.apache.org/jira/browse/HBASE-13270
 Project: HBase
  Issue Type: Improvement
Reporter: stack


Below is our [~larsgeorge] on a finding he made reviewing our API:

Result class having getStats() and addResults(Stats) makes little sense...

...the naming is just weird. You have a getStats() getter and an 
addResults(Stats) setter???

...Especially in the Result class and addResult() is plain misleading...

This issue is about deprecating addResults and adding addStats in 
its place.

The getStats/addResult is recent. It came in with:

{code}
commit a411227b0ebf78b4ee8ae7179e162b54734e77de
Author: Jesse Yates jesse.k.ya...@gmail.com
Date:   Tue Oct 28 16:14:16 2014 -0700

HBASE-5162 Basic client pushback mechanism
...
{code}

RegionLoadStats don't belong in Result if you ask me but better in the 
enveloping on invocations... but that is another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13006) Document visibility label support for groups

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367630#comment-14367630
 ] 

Hadoop QA commented on HBASE-13006:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12705379/shell-update-only.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705379

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13295//console

This message is automatically generated.

 Document visibility label support for groups
 

 Key: HBASE-13006
 URL: https://issues.apache.org/jira/browse/HBASE-13006
 Project: HBase
  Issue Type: Sub-task
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-13006-v2.patch, HBASE-13006-v3.patch, 
 HBASE-13006.patch, shell-update-only.patch


 This is to document the changes added from HBASE-12745.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12972) Region, a supportable public/evolving subset of HRegion

2015-03-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367464#comment-14367464
 ] 

stack commented on HBASE-12972:
---

I went through the patch in RB. Do an edit and let's commit. Can do fine tuning in 
followups. Nice work [~apurtell]

 Region, a supportable public/evolving subset of HRegion
 ---

 Key: HBASE-12972
 URL: https://issues.apache.org/jira/browse/HBASE-12972
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12972-0.98.patch, HBASE-12972.patch


 On HBASE-12566, [~lhofhansl] proposed:
 {quote}
 Maybe we can have a {{Region}} interface that is to {{HRegion}} is what 
 {{Store}} is to {{HStore}}. Store marked with {{@InterfaceAudience.Private}} 
 but used in some coprocessor hooks.
 {quote}
 By example, now coprocessors have to reach into HRegion in order to 
 participate in row and region locking protocols, this is one area where the 
 functionality is legitimate for coprocessors but not for users, so an 
 in-between interface make sense.
 In addition we should promote {{Store}}'s interface audience to 
 LimitedPrivate(COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12975) Supportable SplitTransaction and RegionMergeTransaction interfaces

2015-03-18 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367480#comment-14367480
 ] 

Rajeshbabu Chintaguntla commented on HBASE-12975:
-

[~apurtell]
I am ok to proceed with the current patch. It's very clean now. 

{noformat}
1. Instantiate N SplitTransactions
2. Run each SplitTransaction up to PONR. Can be done in parallel. If there's a 
failure, invoke the rollback method on all and try again and/or do some other 
remediation.
3. Run each SplitTransaction past PONR. Can be done in parallel. If there's a 
failure, the server must abort.
{noformat}
This is the way I also suggested earlier to split multiple regions in a 
transaction.
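The three steps quoted above can be sketched as a small coordination loop. The 
Txn interface is a hypothetical stand-in for the SplitTransaction lifecycle, and 
this sketch runs sequentially where the real thing could parallelize each phase:

{code}
import java.util.ArrayList;
import java.util.List;

class MultiSplit {
    // Hypothetical stand-in for the SplitTransaction lifecycle.
    interface Txn {
        boolean stepsBeforePONR();   // safe to roll back if this fails
        void stepsAfterPONR();       // a failure here means server abort
        void rollback();
    }

    /** Returns true if all transactions committed, false after rollback. */
    static boolean runAll(List<Txn> txns) {
        List<Txn> started = new ArrayList<>();
        for (Txn t : txns) {                    // step 2: run up to the PONR
            started.add(t);
            if (!t.stepsBeforePONR()) {
                for (Txn s : started) {         // any failure: roll all back
                    s.rollback();
                }
                return false;
            }
        }
        for (Txn t : txns) {                    // step 3: past the PONR
            t.stepsAfterPONR();
        }
        return true;
    }

    public static void main(String[] args) {
        Txn noop = new Txn() {
            public boolean stepsBeforePONR() { return true; }
            public void stepsAfterPONR() {}
            public void rollback() {}
        };
        System.out.println(runAll(List.of(noop, noop))); // prints true
    }
}
{code}

The key property is that rollback is only ever invoked before any transaction 
has crossed its point of no return.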

 Supportable SplitTransaction and RegionMergeTransaction interfaces
 --

 Key: HBASE-12975
 URL: https://issues.apache.org/jira/browse/HBASE-12975
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Andrew Purtell
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12975.patch, HBASE-12975.patch


 Making SplitTransaction, RegionMergeTransaction limited private is required 
 to support local indexing feature in Phoenix to ensure regions colocation. 
 We can ensure region split, regions merge in the coprocessors in few method 
 calls without touching internals like creating zk's, file layout changes or 
 assignments.
 1) stepsBeforePONR, stepsAfterPONR we can ensure split.
 2) meta entries can pass through coprocessors to atomically update with the 
 normal split/merge.
 3) rollback on failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13241) Add tests for group level grants

2015-03-18 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367631#comment-14367631
 ] 

Srikanth Srungarapu commented on HBASE-13241:
-

bq. If everyone is ok with what Matteo says I will prepare a patch by tomorrow 
morning as per IST. 
Sure, go for it.

 Add tests for group level grants
 

 Key: HBASE-13241
 URL: https://issues.apache.org/jira/browse/HBASE-13241
 Project: HBase
  Issue Type: Improvement
  Components: security, test
Reporter: Sean Busbey
Assignee: Ashish Singhi
Priority: Critical
 Attachments: HBASE-13241-v1.patch, HBASE-13241-v2.patch, 
 HBASE-13241-v3.patch, HBASE-13241-v4.patch, HBASE-13241-v5.patch, 
 HBASE-13241.patch


 We need to have tests for group-level grants for various scopes. ref: 
 HBASE-13239



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367841#comment-14367841
 ] 

Mikhail Antonov commented on HBASE-13273:
-

Two options I can see: make the EMPTY_RESULT object private and provide a method 
getEmptyResult() that returns a new cloned empty Result, or make the Result object 
immutable (which means the copyFrom() method would have to be removed). Neither 
looks backward-compatible, though?

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12


 Again from [~larsgeorge]
 {code}
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 {code}
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.
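 One way to make the shared instance non-modifiable is a read-only flag that 
 turns mutators into failures. A minimal stand-in sketch (SafeResult is a toy 
 type, not the actual Result class):

 {code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SharedEmpty {
    // Stand-in for Result: a read-only flag makes the shared empty
    // instance reject mutators instead of silently changing for everyone.
    static final class SafeResult {
        private final List<String> cells;
        private final boolean readOnly;

        SafeResult(List<String> cells, boolean readOnly) {
            this.cells = new ArrayList<>(cells);
            this.readOnly = readOnly;
        }

        void copyFrom(SafeResult other) {
            if (readOnly) {
                throw new UnsupportedOperationException("shared EMPTY_RESULT is read-only");
            }
            cells.clear();
            cells.addAll(other.cells);
        }

        List<String> cells() {
            return Collections.unmodifiableList(cells);
        }
    }

    static final SafeResult EMPTY_RESULT = new SafeResult(List.of(), true);

    public static void main(String[] args) {
        SafeResult full = new SafeResult(List.of("cell"), false);
        try {
            EMPTY_RESULT.copyFrom(full);         // must not succeed
            System.out.println("modified");
        } catch (UnsupportedOperationException e) {
            System.out.println("still empty: " + EMPTY_RESULT.cells().isEmpty()); // prints still empty: true
        }
    }
}
 {code}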



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367845#comment-14367845
 ] 

Lars Hofhansl commented on HBASE-11195:
---

Pretty sure this is wrong. We return true (please run the major compaction) 
if the current locality index is < the min requested one.
As it stands we'll now - by default - _always_ compact all the old files, which 
we did not do before. This looks like a critical issue to me.

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367854#comment-14367854
 ] 

churro morales commented on HBASE-11195:


[~lhofhansl] Oh my, I don't know how that happened, but it looks to me that the 
0.98 patch is incorrect, while a quick look at the other patches shows they are 
correct. I am so sorry; I can get a proper patch to you asap.  

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367859#comment-14367859
 ] 

churro morales commented on HBASE-11195:


Do you want me to create a new ticket with a patch, or just add it here? It 
should be blockLocalityIndex < comConf.getMinLocalityToForceCompact(), which looks 
to be fine for trunk and 0.94 after looking at the patches.
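The intended predicate is easy to state in isolation (names are shortened for 
the sketch): force a major compaction only when the region's locality index 
falls below the configured minimum, never unconditionally.

{code}
class LocalityCheck {
    /** True only when locality is BELOW the configured minimum. */
    static boolean shouldForceMajorCompaction(float blockLocalityIndex,
                                              float minLocalityToForceCompact) {
        return blockLocalityIndex < minLocalityToForceCompact;
    }

    public static void main(String[] args) {
        System.out.println(shouldForceMajorCompaction(0.4f, 0.7f)); // prints true
        System.out.println(shouldForceMajorCompaction(0.9f, 0.7f)); // prints false
    }
}
{code}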

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367874#comment-14367874
 ] 

Lars Hofhansl commented on HBASE-11195:
---

I'm happy to do that (new ticket), just wanted to confirm that I did not miss 
anything. I'll do the fix, and then we'll just release 0.98.12 quickly (right 
[~apurtell] :) )

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11195:
--
Fix Version/s: (was: 0.94.27)

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367880#comment-14367880
 ] 

Lars Hofhansl commented on HBASE-11195:
---

Actually lemme apply this to 0.94 (I hadn't before).

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13273:

Fix Version/s: 0.98.12
   1.1.0
   1.0.1
   2.0.0

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12


 Again from [~larsgeorge]
 {code}
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 {code}
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-13269) Limit result array preallocation to avoid OOME with large scan caching values

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367673#comment-14367673
 ] 

Andrew Purtell edited comment on HBASE-13269 at 3/18/15 6:59 PM:
-

Since both [~stack] and [~lhofhansl] chimed in with regret about not presizing, 
I will update this to do a max of 100. New patch coming shortly. Please let me 
know if you like that better. 


was (Author: apurtell):
Since both [~stack] and [~lhofhansl] chimed in with regret about not presizing, 
I will update this to do a min of 100. New patch coming shortly. Please let me 
know if you like that better. 

 Limit result array preallocation to avoid OOME with large scan caching values
 -

 Key: HBASE-13269
 URL: https://issues.apache.org/jira/browse/HBASE-13269
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 1.0.1, 0.98.12

 Attachments: HBASE-13269-0.98.patch, HBASE-13269-1.0.patch


 Scan#setCaching(Integer.MAX_VALUE) will likely terminate the regionserver 
 with an OOME due to preallocation of the result array according to this 
 parameter.  We should limit the preallocation to some sane value. Definitely 
 affects 0.98 (fix needed to HRegionServer) and 1.0.x (fix needed to 
 RsRPCServices), not sure about later versions. 
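 The mitigation is to clamp the client-supplied caching value before sizing the 
 array. A sketch with a hypothetical cap constant (the value 100 mirrors the 
 default scan caching, but the exact bound is an assumption here):

 {code}
import java.util.ArrayList;
import java.util.List;

class CappedPrealloc {
    // Hypothetical cap; the point is only that the initial allocation is
    // bounded regardless of what the client asked for.
    static final int MAX_PREALLOC = 100;

    /** Preallocates at most MAX_PREALLOC slots for scan results. */
    static <T> List<T> newResultList(int clientCaching) {
        int initial = Math.min(Math.max(clientCaching, 0), MAX_PREALLOC);
        return new ArrayList<>(initial);
    }

    public static void main(String[] args) {
        // Integer.MAX_VALUE would otherwise ask for a ~2-billion-slot array.
        List<String> results = newResultList(Integer.MAX_VALUE);
        System.out.println(results.size()); // prints 0
    }
}
 {code}

 The list still grows past the cap if the scan really returns more rows; only 
 the up-front allocation is bounded.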



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13271) Table#puts(List<Put>) operation is indeterminate; remove!

2015-03-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367678#comment-14367678
 ] 

stack commented on HBASE-13271:
---

Related, HTableInterface deprecates getWriteBufferSize and setWriteBufferSize.  
These methods are in the sub-interface Table, only here they are not deprecated. 
So, users may be getting the wrong message -- especially if flush comes back into 
Table. Needs clean-up in alignment with how we deal with List<Put>.

 Table#puts(List<Put>) operation is indeterminate; remove!
 -

 Key: HBASE-13271
 URL: https://issues.apache.org/jira/browse/HBASE-13271
 Project: HBase
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.0
Reporter: stack

 Another API issue found by [~larsgeorge]:
 Table.put(List<Put>) is questionable after the API change.
 {code}
 [Mar-17 9:21 AM] Lars George: Table.put(List<Put>) is weird since you cannot 
 flush partial lists
 [Mar-17 9:21 AM] Lars George: Say out of 5 the third is broken, then the 
 put() call returns with a local exception (say empty Put) and then you have 2 
 that are in the buffer
 [Mar-17 9:21 AM] Lars George: but how do you force commit them?
 [Mar-17 9:22 AM] Lars George: In the past you would call flushCache(), but 
 that is gone now
 [Mar-17 9:22 AM] Lars George: and flush() is not available on a Table
 [Mar-17 9:22 AM] Lars George: And you cannot access the underlying 
 BufferedMutator either
 [Mar-17 9:23 AM] Lars George: You can *only* add more Puts if you can, or 
 call close()
 [Mar-17 9:23 AM] Lars George: that is just weird to explain
 {code}
 So, Table needs to get flush back or we deprecate this method or it flushes 
 immediately and does not return until complete in the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-13274) Fix misplaced deprecation in Delete#addXYZ

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov reassigned HBASE-13274:
---

Assignee: Mikhail Antonov

 Fix misplaced deprecation in Delete#addXYZ
 --

 Key: HBASE-13274
 URL: https://issues.apache.org/jira/browse/HBASE-13274
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.0
Reporter: stack
Assignee: Mikhail Antonov

 Found by [~larsgeorge]
 {code}
 All deleteXYZ() were deprecated in Delete in favour of the matching addXYZ() 
 (to mirror Put, Get, etc.) - _but_ for deleteFamilyVersion(). What is worse 
 is, the @deprecated for it was added to the addFamilyVersion() replacement! 
 Oh man.
 * @deprecated Since hbase-1.0.0. Use {@link #addFamilyVersion(byte[], long)}
 
  */
  @Deprecated
  public Delete addFamilyVersion(final byte [] family, 
 final long timestamp) {
 The deprecated message is right, but on the wrong method
 (areyoukiddingme)
 Well, I presume it was done right, and will steer clear of deleteXYZ() in 
 favor of addXYZ()
 {code}
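For illustration, the intended placement would look roughly like this: the @Deprecated annotation (and the javadoc @deprecated note) belong on the old deleteFamilyVersion(), which forwards to its addFamilyVersion() replacement. This is a simplified stand-in class, not the real org.apache.hadoop.hbase.client.Delete:

```java
// Simplified stand-in showing where the deprecation should sit.
public class DeleteSketch {
  /** @deprecated Since hbase-1.0.0. Use {@link #addFamilyVersion(byte[], long)} */
  @Deprecated
  public DeleteSketch deleteFamilyVersion(byte[] family, long timestamp) {
    // Old name simply delegates to the replacement.
    return addFamilyVersion(family, timestamp);
  }

  public DeleteSketch addFamilyVersion(byte[] family, long timestamp) {
    // Record the family/version delete marker (elided in this sketch).
    return this;
  }
}
```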



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13274) Fix misplaced deprecation in Delete#addXYZ

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-13274:

Attachment: HBASE-13274.patch

patch for master

 Fix misplaced deprecation in Delete#addXYZ
 --

 Key: HBASE-13274
 URL: https://issues.apache.org/jira/browse/HBASE-13274
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.0
Reporter: stack
Assignee: Mikhail Antonov
 Attachments: HBASE-13274.patch


 Found by [~larsgeorge]
 {code}
 All deleteXYZ() were deprecated in Delete in favour of the matching addXYZ() 
 (to mirror Put, Get, etc.) - _but_ for deleteFamilyVersion(). What is worse 
 is, the @deprecated for it was added to the addFamilyVersion() replacement! 
 Oh man.
 * @deprecated Since hbase-1.0.0. Use {@link #addFamilyVersion(byte[], long)}
 
  */
  @Deprecated
  public Delete addFamilyVersion(final byte [] family, 
 final long timestamp) {
 The deprecated message is right, but on the wrong method
 (areyoukiddingme)
 Well, I presume it was done right, and will steer clear of deleteXYZ() in 
 favor of addXYZ()
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13269) Limit result array preallocation to avoid OOME with large scan caching values

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367715#comment-14367715
 ] 

Hadoop QA commented on HBASE-13269:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705413/HBASE-13269-1.0.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705413

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13296//console

This message is automatically generated.

 Limit result array preallocation to avoid OOME with large scan caching values
 -

 Key: HBASE-13269
 URL: https://issues.apache.org/jira/browse/HBASE-13269
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 1.0.1, 0.98.12

 Attachments: HBASE-13269-0.98.patch, HBASE-13269-0.98.patch, 
 HBASE-13269-1.0.patch, HBASE-13269-1.0.patch


 Scan#setCaching(Integer.MAX_VALUE) will likely terminate the regionserver 
 with an OOME due to preallocation of the result array according to this 
 parameter.  We should limit the preallocation to some sane value. Definitely 
 affects 0.98 (fix needed to HRegionServer) and 1.0.x (fix needed to 
 RsRPCServices), not sure about later versions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13274) Fix misplaced deprecation in Delete#addXYZ

2015-03-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14367694#comment-14367694
 ] 

stack commented on HBASE-13274:
---

[~mantonov] Yes sir, as per usual. Thanks.

 Fix misplaced deprecation in Delete#addXYZ
 --

 Key: HBASE-13274
 URL: https://issues.apache.org/jira/browse/HBASE-13274
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.0.0
Reporter: stack

 Found by [~larsgeorge]
 {code}
 All deleteXYZ() were deprecated in Delete in favour of the matching addXYZ() 
 (to mirror Put, Get, etc.) - _but_ for deleteFamilyVersion(). What is worse 
 is, the @deprecated for it was added to the addFamilyVersion() replacement! 
 Oh man.
 * @deprecated Since hbase-1.0.0. Use {@link #addFamilyVersion(byte[], long)}
 
  */
  @Deprecated
  public Delete addFamilyVersion(final byte [] family, 
 final long timestamp) {
 The deprecated message is right, but on the wrong method
 (areyoukiddingme)
 Well, I presume it was done right, and will steer clear of deleteXYZ() in 
 favor of addXYZ()
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature

2015-03-18 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14366767#comment-14366767
 ] 

Eshcar Hillel commented on HBASE-13071:
---

Yes, it's all about setting the delays, but I don't want to change them to make 
the results look better. They are there just to make the point.

  From: Edward Bortnikov (JIRA) j...@apache.org
 To: esh...@yahoo-inc.com 
 Sent: Monday, March 16, 2015 7:52 AM
 Subject: [jira] [Commented] (HBASE-13071) Hbase Streaming Scan Feature
   

    [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14362777#comment-14362777
 ] 

Edward Bortnikov commented on HBASE-13071:
--

Eshcar,
Do you have an idea why there are still steps in the async graph? This probably 
means that our delays are not long enough. 
Eddie 


    On Monday, March 16, 2015 1:14 AM, Eshcar Hillel (JIRA) j...@apache.org 
wrote:
  

 
    [ 
https://issues.apache.org/jira/browse/HBASE-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13071:
--
    Attachment: HBASE-13071_trunk_10.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



 Hbase Streaming Scan Feature
 

 Key: HBASE-13071
 URL: https://issues.apache.org/jira/browse/HBASE-13071
 Project: HBase
  Issue Type: New Feature
Reporter: Eshcar Hillel
 Attachments: 99.eshcar.png, HBASE-13071_98_1.patch, 
 HBASE-13071_trunk_1.patch, HBASE-13071_trunk_10.patch, 
 HBASE-13071_trunk_2.patch, HBASE-13071_trunk_3.patch, 
 HBASE-13071_trunk_4.patch, HBASE-13071_trunk_5.patch, 
 HBASE-13071_trunk_6.patch, HBASE-13071_trunk_7.patch, 
 HBASE-13071_trunk_8.patch, HBASE-13071_trunk_9.patch, 
 HBaseStreamingScanDesign.pdf, HbaseStreamingScanEvaluation.pdf, 
 HbaseStreamingScanEvaluationwithMultipleClients.pdf, gc.eshcar.png, 
 hits.eshcar.png, network.png


 A scan operation iterates over all rows of a table or a subrange of the 
 table. The synchronous nature in which the data is served at the client side 
 hinders the speed at which the application traverses the data: it increases the 
 overall processing time, and may cause great variance in the times the 
 application waits for the next piece of data.
 The scanner next() method at the client side invokes an RPC to the 
 regionserver and then stores the results in a cache. The application can 
 specify how many rows will be transmitted per RPC; by default this is set to 
 100 rows. 
 The cache can be considered as a producer-consumer queue, where the hbase 
 client pushes the data to the queue and the application consumes it. 
 Currently this queue is synchronous, i.e., blocking. More specifically, when 
 the application consumed all the data from the cache --- so the cache is 
 empty --- the hbase client retrieves additional data from the server and 
 re-fills the cache with new data. During this time the application is blocked.
 Under the assumption that the application processing time can be balanced by 
 the time it takes to retrieve the data, an asynchronous approach can reduce 
 the time the application is waiting for data.
 We attach a design document.
 We also have a patch that is based on a private branch, and some evaluation 
 results of this code.
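The producer-consumer cache described above can be sketched with a background prefetch thread and a bounded blocking queue. This is a self-contained illustration of the idea, not the attached patch; fetchBatch() stands in for the real scanner RPC:

```java
// Sketch of an asynchronous scan cache: a producer thread prefetches "RPC
// batches" into a bounded queue while the application consumes rows, so the
// consumer rarely blocks on an empty cache. An empty batch marks end-of-scan.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PrefetchingScanner {
  private final BlockingQueue<List<Integer>> cache = new ArrayBlockingQueue<>(2);
  private int next = 0; // cursor for the pretend RPC below

  // Pretend RPC: returns up to 'caching' rows, empty list when exhausted.
  private List<Integer> fetchBatch(int caching) {
    List<Integer> batch = new ArrayList<>();
    while (batch.size() < caching && next < 10) {
      batch.add(next++);
    }
    return batch;
  }

  public List<Integer> scanAll(int caching) {
    Thread producer = new Thread(() -> {
      try {
        List<Integer> b;
        do {
          b = fetchBatch(caching);
          cache.put(b); // blocks when the bounded cache is full
        } while (!b.isEmpty());
      } catch (InterruptedException ignored) {
      }
    });
    producer.start();
    List<Integer> rows = new ArrayList<>();
    try {
      // Consumer drains batches until the empty end-of-scan marker arrives.
      for (List<Integer> b = cache.take(); !b.isEmpty(); b = cache.take()) {
        rows.addAll(b);
      }
      producer.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return rows;
  }
}
```

The bounded queue capacity plays the role of the prefetch depth: a larger bound hides more server latency at the cost of more client memory.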



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13271) Table#puts(List<Put>) operation is indeterminate; remove!

2015-03-18 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368052#comment-14368052
 ] 

Nicolas Liochon commented on HBASE-13271:
-

Oh ok. Thanks for the explanation. Then the call to batch seems to be the 
perfect solution.

 Table#puts(List<Put>) operation is indeterminate; remove!
 -

 Key: HBASE-13271
 URL: https://issues.apache.org/jira/browse/HBASE-13271
 Project: HBase
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.0
Reporter: stack

 Another API issue found by [~larsgeorge]:
 Table.put(List<Put>) is questionable after the API change.
 {code}
 [Mar-17 9:21 AM] Lars George: Table.put(List<Put>) is weird since you cannot 
 flush partial lists
 [Mar-17 9:21 AM] Lars George: Say out of 5 the third is broken, then the 
 put() call returns with a local exception (say empty Put) and then you have 2 
 that are in the buffer
 [Mar-17 9:21 AM] Lars George: but how do you force commit them?
 [Mar-17 9:22 AM] Lars George: In the past you would call flushCache(), but 
 that is gone now
 [Mar-17 9:22 AM] Lars George: and flush() is not available on a Table
 [Mar-17 9:22 AM] Lars George: And you cannot access the underlying 
 BufferedMutator either
 [Mar-17 9:23 AM] Lars George: You can *only* add more Puts if you can, or 
 call close()
 [Mar-17 9:23 AM] Lars George: that is just weird to explain
 {code}
 So, Table needs to get flush back or we deprecate this method or it flushes 
 immediately and does not return until complete in the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-03-18 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-12751:
--
Attachment: HBASE-12751.patch

Workflow failure, sorry -- clicked submit patch before adding attachment.

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Elliott Clark
Assignee: Nate Edel
 Attachments: HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read modify write operation (increment or check and 
 put). However it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores 
 then this is very limiting.
 If there are puts to the same column then mvcc number should ensure a 
 consistent ordering. So locking is not needed.
 However locking for check and put or increment is still needed.
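The proposal above maps naturally onto a read-write lock: plain puts (whose ordering MVCC already handles) share the lock, while read-modify-write operations like increment take it exclusively. A minimal JDK sketch of the idea, not HBase's actual row-lock implementation:

```java
// Sketch of the proposed row-lock refinement: puts share the "read" side of
// a ReentrantReadWriteLock so they can proceed in parallel, while increment
// (a read-modify-write) takes the exclusive "write" side.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RowLockSketch {
  private final ReentrantReadWriteLock rowLock = new ReentrantReadWriteLock();
  private long value = 0; // stands in for a cell of the row

  // Plain put: ordering is left to MVCC, so a shared lock suffices.
  public void put(long v) {
    rowLock.readLock().lock();
    try {
      value = v;
    } finally {
      rowLock.readLock().unlock();
    }
  }

  // Increment must read-modify-write atomically: exclusive lock required.
  public long increment(long delta) {
    rowLock.writeLock().lock();
    try {
      value += delta;
      return value;
    } finally {
      rowLock.writeLock().unlock();
    }
  }

  public long get() {
    return value;
  }
}
```

Note the inversion from the usual usage: here "readers" are the plain writers, and the exclusive lock is reserved for operations that must observe a stable value while mutating it.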



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-03-18 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-12751:
--
Status: Patch Available  (was: Open)

 Allow RowLock to be reader writer
 -

 Key: HBASE-12751
 URL: https://issues.apache.org/jira/browse/HBASE-12751
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Elliott Clark
Assignee: Nate Edel
 Attachments: HBASE-12751.patch


 Right now every write operation grabs a row lock. This is to prevent values 
 from changing during a read modify write operation (increment or check and 
 put). However it limits parallelism in several different scenarios.
 If there are several puts to the same row but different columns or stores 
 then this is very limiting.
 If there are puts to the same column then mvcc number should ensure a 
 consistent ordering. So locking is not needed.
 However locking for check and put or increment is still needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13272) Get.setClosestRowBefore() breaks specific column Get

2015-03-18 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368077#comment-14368077
 ] 

Nick Dimiduk commented on HBASE-13272:
--

Sounds like it's best to deprecate/remove.

 Get.setClosestRowBefore() breaks specific column Get
 

 Key: HBASE-13272
 URL: https://issues.apache.org/jira/browse/HBASE-13272
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Trivial

 Via [~larsgeorge]
 Get.setClosestRowBefore() is breaking a specific Get that specifies a column. 
 If you set the latter to true it will return the _entire_ row!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13276) Fix incorrect condition for minimum block locality in 0.98

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368102#comment-14368102
 ] 

Andrew Purtell commented on HBASE-13276:


+1

 Fix incorrect condition for minimum block locality in 0.98
 --

 Key: HBASE-13276
 URL: https://issues.apache.org/jira/browse/HBASE-13276
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.98.12

 Attachments: HBASE-11195-0.98.v1.patch


 0.98 only. Parent somehow was incorrect. One-liner to fix it.
 But it's critical as we perform potentially _way_ more compactions now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368123#comment-14368123
 ] 

Hudson commented on HBASE-11195:


FAILURE: Integrated in HBase-0.94-JDK7 #234 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/234/])
HBASE-11195 Addendum for TestHeapSize. (larsh: rev 
260f2137bdb8b4ae839f5cc285509f34e31a006b)
* src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195-0.98.v1.patch, HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.
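As a sketch, the proposed option might be set like this in hbase-site.xml (the 0.7 value is only an example of a threshold in the [0, 1] range):

```xml
<property>
  <name>hbase.hstore.min.locality.to.skip.major.compact</name>
  <value>0.7</value>
</property>
```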



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13259) mmap() based BucketCache IOEngine

2015-03-18 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368154#comment-14368154
 ] 

zhangduo commented on HBASE-13259:
--

I mean, could we test it with a size much larger than available memory? e.g., 
100G RAM, 500G bucket cache on SSD?
If we only test it with a size smaller than available memory, then I think we 
need to beat the offheap engine, not the file engine (it is good if you can beat 
both of them :))

 mmap() based BucketCache IOEngine
 -

 Key: HBASE-13259
 URL: https://issues.apache.org/jira/browse/HBASE-13259
 Project: HBase
  Issue Type: New Feature
  Components: BlockCache
Affects Versions: 0.98.10
Reporter: Zee Chen
 Fix For: 2.2.0

 Attachments: HBASE-13259-v2.patch, HBASE-13259.patch, ioread-1.svg, 
 mmap-0.98-v1.patch, mmap-1.svg, mmap-trunk-v1.patch


 Of the existing BucketCache IOEngines, FileIOEngine uses pread() to copy data 
 from kernel space to user space. This is a good choice when the total working 
 set size is much bigger than the available RAM and the latency is dominated 
 by IO access. However, when the entire working set is small enough to fit in 
 the RAM, using mmap() (and subsequent memcpy()) to move data from kernel 
 space to user space is faster. I have run some short keyval gets tests and 
 the results indicate a reduction of 2%-7% of kernel CPU on my system, 
 depending on the load. On the gets, the latency histograms from mmap() are 
 identical to those from pread(), but peak throughput is close to 40% higher.
 This patch modifies ByteByfferArray to allow it to specify a backing file.
 Example for using this feature: set hbase.bucketcache.ioengine to 
 mmap:/dev/shm/bucketcache.0 in hbase-site.xml.
 Attached are perf-measured CPU usage breakdowns as flame graphs.
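The setting mentioned above would look like this in hbase-site.xml (path as given in the description):

```xml
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>mmap:/dev/shm/bucketcache.0</value>
</property>
```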



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13262:
---
Comment: was deleted

(was: (ugh sorry for spam/early-post))

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13262:
---
Comment: was deleted

(was: Ok, been a while since I posted some progress, here's my current 
understanding of things and hopefully an easier to grok statement of the 
problem:

When clients request a batch of rows which is larger than the server is 
configured to return)

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368225#comment-14368225
 ] 

Andrew Purtell commented on HBASE-13262:


No problem, I will delete these now. Post again when ready

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13276) Fix incorrect condition for minimum block locality in 0.98

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368082#comment-14368082
 ] 

Andrew Purtell commented on HBASE-13276:


Like [~churromorales] says on parent, I don't see how this happened either. 
Looks like a very unfortunate and hard to spot typo. 

I can call a RC for 0.98.12 on Monday March 23 so we can have a release at the 
end of the month, which is roughly on target with the 0.98 release cadence. Is 
that soon enough? 

 Fix incorrect condition for minimum block locality in 0.98
 --

 Key: HBASE-13276
 URL: https://issues.apache.org/jira/browse/HBASE-13276
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.98.12

 Attachments: HBASE-11195-0.98.v1.patch


 0.98 only. Parent somehow was incorrect. One-liner to fix it.
 But it's critical as we perform potentially _way_ more compactions now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12972) Region, a supportable public/evolving subset of HRegion

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368128#comment-14368128
 ] 

Hadoop QA commented on HBASE-12972:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705439/HBASE-12972.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705439

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 355 
new or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 7 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1919 checkstyle errors (more than the master's current 1917 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+LOG.trace("High priority because region=" + region.getRegionInfo().getRegionNameAsString());
+.abort("Exception during region " + getRegionInfo().getRegionNameAsString() + " initialization.");
+LOG.info(getName() + " requesting flush for region " + r.getRegionInfo().getRegionNameAsString() +
+Region region = TEST_UTIL.getRSForFirstRegionInTable(tableName).getFromOnlineRegions(regionName);
+testRegionWithFamilies(family1).bulkLoadHFiles(new ArrayList<Pair<byte[], String>>(), false, null);
+  private Region initHRegion(HTableDescriptor htd, byte[] startKey, byte[] stopKey, int replicaId) throws IOException {
+  private void putData(Region region, int startRow, int numRows, byte[] qf, byte[]... families) throws IOException {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestIOFencing
  
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplication(TestRegionReplicaReplicationEndpoint.java:195)
at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationWith3Replicas(TestRegionReplicaReplicationEndpoint.java:255)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13299//artifact/patchprocess/checkstyle-aggregate.html

  

[jira] [Commented] (HBASE-13257) Show coverage report on jenkins

2015-03-18 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368164#comment-14368164
 ] 

zhangduo commented on HBASE-13257:
--

I ran it several times; it seems not much worse than the original TRUNK build, and 
the failed test also fails on TRUNK sometimes.

https://builds.apache.org/job/HBase-TRUNK-jacoco/

So what do you guys suggest now? [~stack], [~busbey]
Thanks.

 Show coverage report on jenkins
 ---

 Key: HBASE-13257
 URL: https://issues.apache.org/jira/browse/HBASE-13257
 Project: HBase
  Issue Type: Task
Reporter: zhangduo
Assignee: zhangduo
Priority: Minor

 Think of showing jacoco coverage report on https://builds.apache.org .
 An advantage of showing it on jenkins is that the jenkins jacoco 
 plugin can handle cross-module coverage.
 We cannot do it locally since https://github.com/jacoco/jacoco/pull/97 is still 
 pending.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13277) add mob_threshold option to load test tool

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368175#comment-14368175
 ] 

Hadoop QA commented on HBASE-13277:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12705444/HBASE-13277.hbase-11339.patch
  against hbase-11339 branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705444

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 4 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)
at 
org.apache.hadoop.hbase.TestAcidGuarantees.testScanAtomicity(TestAcidGuarantees.java:376)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13300//console

This message is automatically generated.

 add mob_threshold option to load test tool
 --

 Key: HBASE-13277
 URL: https://issues.apache.org/jira/browse/HBASE-13277
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: hbase-11339

 Attachments: HBASE-13277.hbase-11339.patch


 This adds '-mob_threshold value' option to the load test tool to simplify 
 mob load testing 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13188) java.lang.ArithmeticException issue in BoundedByteBufferPool.putBuffer

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13188:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 java.lang.ArithmeticException issue in BoundedByteBufferPool.putBuffer
 --

 Key: HBASE-13188
 URL: https://issues.apache.org/jira/browse/HBASE-13188
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 1.1.0, 0.98.13

 Attachments: HBASE-13188.patch


 Running a range scan with PE tool with 25 threads getting this error
 {code}
 java.lang.ArithmeticException: / by zero
 at 
 org.apache.hadoop.hbase.io.BoundedByteBufferPool.putBuffer(BoundedByteBufferPool.java:104)
 at org.apache.hadoop.hbase.ipc.RpcServer$Call.done(RpcServer.java:325)
 at 
 org.apache.hadoop.hbase.ipc.RpcServer$Responder.processResponse(RpcServer.java:1078)
 at 
 org.apache.hadoop.hbase.ipc.RpcServer$Responder.processAllResponses(RpcServer.java:1103)
 at 
 org.apache.hadoop.hbase.ipc.RpcServer$Responder.doAsyncWrite(RpcServer.java:1036)
 at 
 org.apache.hadoop.hbase.ipc.RpcServer$Responder.doRunLoop(RpcServer.java:956)
 at 
 org.apache.hadoop.hbase.ipc.RpcServer$Responder.run(RpcServer.java:891)
 {code}
 I checked in the trunk code also.  I think the comment in the code suggests 
 that the size will not be exact, so there is a chance that it could even be 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13268) Backport the HBASE-7781 security test updates to use the MiniKDC

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13268:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Backport the HBASE-7781 security test updates to use the MiniKDC
 

 Key: HBASE-13268
 URL: https://issues.apache.org/jira/browse/HBASE-13268
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.13


 Consider backport of the security test updates to use the MiniKDC that are 
 subtasks of HBASE-7781. Would be good to improve test coverage of security 
 code in the 0.98 branch, as long as none of the following apply:
 - The changes are a PITA to backport
 - The changes break a compatibility requirement
 - The changes introduce test instability
  
 Investigate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368178#comment-14368178
 ] 

Andrew Purtell commented on HBASE-13273:


I'm rolling the 0.98.12 RC tonight. Did you want to get this in ahead of that? 
Or I can move it out.

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
Assignee: Mikhail Antonov
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12

 Attachments: HBASE-13273.patch, HBASE-13273.patch


 Again from [~larsgeorge]
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.
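One common shape for the fix is a read-only flag set on the shared instance at construction, checked by every mutator. The sketch below is a simplified stand-in, not the actual HBase Result class:

```java
// Minimal sketch of a read-only shared instance: EMPTY_RESULT is built
// with readonly=true and rejects mutating calls such as copyFrom, so the
// scenario in the description throws instead of silently mutating it.
public class SimpleResult {
    public static final SimpleResult EMPTY_RESULT = new SimpleResult(true);

    private final boolean readonly;
    private Object[] cells = new Object[0];

    private SimpleResult(boolean readonly) { this.readonly = readonly; }
    public SimpleResult() { this(false); }

    public void copyFrom(SimpleResult other) {
        if (readonly) {
            throw new UnsupportedOperationException(
                "Attempting to modify a read-only Result");
        }
        this.cells = other.cells.clone();
    }

    public int size() { return cells.length; }
}
```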



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13267) Deprecate or remove isFileDeletable from SnapshotHFileCleaner

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13267:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Deprecate or remove isFileDeletable from SnapshotHFileCleaner
 -

 Key: HBASE-13267
 URL: https://issues.apache.org/jira/browse/HBASE-13267
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13


 The isFileDeletable method in SnapshotHFileCleaner became vestigial after 
 HBASE-12627, let's remove it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13221) HDFS Transparent Encryption breaks WAL writing

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13221:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 HDFS Transparent Encryption breaks WAL writing
 --

 Key: HBASE-13221
 URL: https://issues.apache.org/jira/browse/HBASE-13221
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.98.0, 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13


 We need to detect when HDFS Transparent Encryption (Hadoop 2.6.0+) is enabled 
 and fall back to more synchronization in the WAL to prevent catastrophic 
 failure under load.
 See HADOOP-11708 for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12273) Generate .tabledesc file during upgrading if missing

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12273:
---
Fix Version/s: (was: 0.98.12)
   0.98.13
   Status: Open  (was: Patch Available)

 Generate .tabledesc file during upgrading if missing
 

 Key: HBASE-12273
 URL: https://issues.apache.org/jira/browse/HBASE-12273
 Project: HBase
  Issue Type: Sub-task
  Components: Admin
Affects Versions: 0.98.7, 1.0.0
Reporter: Yi Deng
Assignee: Yi Deng
  Labels: upgrade
 Fix For: 1.1.0, 0.98.13

 Attachments: 
 1.0-0001-HBASE-12273-Add-a-tool-for-fixing-missing-TableDescr.patch, 
 1.0-0001-INTERNAL-Add-a-tool-for-fixing-missing-TableDescript.patch


 Generate .tabledesc file during upgrading if missing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11819:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Unit test for CoprocessorHConnection 
 -

 Key: HBASE-11819
 URL: https://issues.apache.org/jira/browse/HBASE-11819
 Project: HBase
  Issue Type: Test
Reporter: Andrew Purtell
Assignee: Talat UYARER
Priority: Minor
  Labels: newbie++
 Fix For: 2.0.0, 1.1.0, 0.98.13

 Attachments: HBASE-11819v4-master.patch, HBASE-11819v5-master 
 (1).patch, HBASE-11819v5-master.patch, HBASE-11819v5-master.patch, 
 HBASE-11819v5-v0.98.patch, HBASE-11819v5-v1.0.patch


 Add a unit test to hbase-server that exercises CoprocessorHConnection . 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11290) Unlock RegionStates

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11290:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Unlock RegionStates
 ---

 Key: HBASE-11290
 URL: https://issues.apache.org/jira/browse/HBASE-11290
 Project: HBase
  Issue Type: Sub-task
Reporter: Francis Liu
Assignee: Virag Kothari
 Fix For: 2.0.0, 1.1.0, 0.98.13

 Attachments: HBASE-11290-0.98.patch, HBASE-11290-0.98_v2.patch, 
 HBASE-11290.draft.patch


 Even though RegionStates is a highly accessed data structure in HMaster, most 
 of its methods are synchronized, which limits concurrency. Even simply 
 making some of the getters non-synchronized by using concurrent data 
 structures has helped with region assignments. We can go as simple as this 
 approach, or create locks per region, or a bucket lock per region bucket.
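The "non-synchronized getters over concurrent data structures" option can be sketched in isolation; the names below are illustrative, not the actual RegionStates fields:

```java
// Hot read paths go straight to a ConcurrentHashMap, so getters no longer
// contend on the object monitor; writers may still hold a coarse lock to
// keep multi-structure updates atomic.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RegionStatesSketch {
    private final Map<String, String> regionStates = new ConcurrentHashMap<>();

    // Writer: still synchronized so compound updates stay consistent.
    public synchronized void updateRegionState(String region, String state) {
        regionStates.put(region, state);
    }

    // Reader: a single-map lookup needs no synchronization.
    public String getRegionState(String region) {
        return regionStates.get(region);
    }
}
```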



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12148:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Remove TimeRangeTracker as point of contention when many threads writing a 
 Store
 

 Key: HBASE-12148
 URL: https://issues.apache.org/jira/browse/HBASE-12148
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Affects Versions: 2.0.0, 0.99.1
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 1.1.0, 0.98.13

 Attachments: 
 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 
 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, Screen 
 Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368219#comment-14368219
 ] 

Josh Elser commented on HBASE-13262:


Ok, been a while since I posted some progress, here's my current understanding 
of things and hopefully an easier to grok statement of the problem:

When clients request a batch of rows which is larger than the server is 
configured to return

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13262:
---
Comment: was deleted

(was: No problem, I will delete these now. Post again when ready)

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13096) NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL encryption and Phoenix secondary indexes

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13096:
---
Fix Version/s: 0.98.13
 Assignee: Andrew Purtell

On the board for .13

 NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL 
 encryption and Phoenix secondary indexes
 

 Key: HBASE-13096
 URL: https://issues.apache.org/jira/browse/HBASE-13096
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Andrew Purtell
Assignee: Andrew Purtell
  Labels: phoenix
 Fix For: 0.98.13


 On user@phoenix Dhavi Rami reported:
 {quote}
 I tried using phoenix in hBase with Transparent Encryption of Data At Rest 
 enabled ( AES encryption) 
 Works fine for a table with primary key column.
 But it doesn't work if I create a Secondary Index on that table. I tried to dig 
 deep into the problem and found WAL file encryption throws an exception when I 
 have a Global Secondary Index created on my mutable table.
 Following is the error I was getting on one of the region server.
 {noformat}
 2015-02-20 10:44:48,768 ERROR 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog: UNEXPECTED
 java.lang.NullPointerException
 at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:767)
 at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:754)
 at org.apache.hadoop.hbase.KeyValue.getKeyLength(KeyValue.java:1253)
 at 
 org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec$EncryptedKvEncoder.write(SecureWALCellCodec.java:194)
 at 
 org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:117)
 at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$AsyncWriter.run(FSHLog.java:1137)
 at java.lang.Thread.run(Thread.java:745)
 2015-02-20 10:44:48,776 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
 regionserver60020-WAL.AsyncWriter exiting
 {noformat}
 I had to disable WAL encryption, and it started working fine with secondary 
 Index. So Hfile encryption works with secondary index but WAL encryption 
 doesn't work.
 {quote}
 Parking this here for later investigation. For now I'm going to assume this 
 is something in SecureWALCellCodec that needs looking at, but if it turns out 
 to be a Phoenix indexer issue I will move this JIRA there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13200) Improper configuration can lead to endless lease recovery during failover

2015-03-18 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-13200:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch. [~heliangliang]

 Improper configuration can lead to endless lease recovery during failover
 --

 Key: HBASE-13200
 URL: https://issues.apache.org/jira/browse/HBASE-13200
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Reporter: He Liangliang
Assignee: He Liangliang
 Fix For: 2.0.0

 Attachments: HBASE-13200.patch


 When a node (DN+RS) has a machine/OS-level failure, another RS will try to do 
 lease recovery for the log file. It will retry every 
 hbase.lease.recovery.dfs.timeout (defaults to 61s) from the second attempt on. 
 When the hdfs configuration is not properly configured (e.g. socket connection 
 timeout) and without patch HDFS-4721, the lease recovery time can exceed 
 the timeout specified by hbase.lease.recovery.dfs.timeout. This will lead to 
 endless retries and preemptions until the final timeout.
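The arithmetic of the failure mode can be sketched standalone: if a single recovery attempt can itself block longer than the per-attempt timeout, each new attempt preempts the stuck one until the overall deadline. All timings below are illustrative, not actual cluster measurements:

```java
// Simulates how many recovery attempts get preempted when one attempt
// (e.g. stuck on a mis-tuned socket timeout, without HDFS-4721) outlives
// the per-attempt timeout (hbase.lease.recovery.dfs.timeout, default 61s).
public class LeaseRecoverySim {

    public static int preemptedAttempts(long attemptDurationMs,
                                        long perAttemptTimeoutMs,
                                        long overallDeadlineMs) {
        if (attemptDurationMs <= perAttemptTimeoutMs) {
            return 0; // the attempt completes before being preempted
        }
        // A fresh attempt preempts the stuck one every perAttemptTimeoutMs
        // until the overall deadline expires.
        return (int) (overallDeadlineMs / perAttemptTimeoutMs);
    }

    public static void main(String[] args) {
        // One attempt takes 10 min, per-attempt timeout 61s, deadline 15 min.
        System.out.println(preemptedAttempts(600_000, 61_000, 900_000)); // 14
    }
}
```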



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13114) [UNITTEST] TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta

2015-03-18 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368222#comment-14368222
 ] 

Esteban Gutierrez commented on HBASE-13114:
---

Dug into this today. The problem is very similar to HBASE-13182: Admin is not 
really synchronous and can return early when getTableDescriptorByTableName() is 
null. If we add a sleep of a few hundred ms before scanning META after deleting 
the table, the problem is less frequent; however, that only masks the real 
problem. The best option for now seems to be a latch, in the same way 
[~mbertozzi] did in HBASE-13179 and HBASE-13182.
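The latch pattern can be shown without HBase classes: the test blocks on a CountDownLatch that an observer hook counts down when the delete has actually completed, instead of sleeping and hoping. Names here are illustrative stand-ins for a master-observer hook:

```java
// Instead of Thread.sleep(...), the test awaits a latch that the
// (hypothetical) postDeleteTable hook counts down when the delete finishes.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    static final CountDownLatch deleteCompleted = new CountDownLatch(1);

    // Stand-in for an observer callback such as postDeleteTableHandler.
    public static void postDeleteTable() {
        deleteCompleted.countDown();
    }

    // Helper so callers need not handle InterruptedException inline.
    public static boolean awaitQuietly(CountDownLatch latch, long ms) {
        try {
            return latch.await(ms, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        new Thread(LatchDemo::postDeleteTable).start(); // async "delete"
        // Test code blocks here, then scans META knowing the delete is done.
        System.out.println(awaitQuietly(deleteCompleted, 30_000));
    }
}
```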


 [UNITTEST] TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta
 -

 Key: HBASE-13114
 URL: https://issues.apache.org/jira/browse/HBASE-13114
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
 Attachments: 13114.txt


 I've seen this fail a few times. It just happened now on internal rig.  
 Looking into it
 {code}
 REGRESSION:  
 org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta
 Error Message:
 expected:<0> but was:<1>
 Stack Trace:
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at org.junit.Assert.assertEquals(Assert.java:542)
 at 
 org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta(TestEnableTableHandler.java:151)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13269) Limit result array preallocation to avoid OOME with large scan caching values

2015-03-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368272#comment-14368272
 ] 

Hudson commented on HBASE-13269:


FAILURE: Integrated in HBase-1.0 #812 (See 
[https://builds.apache.org/job/HBase-1.0/812/])
HBASE-13269 Limit result array preallocation to avoid OOME with large scan 
caching values (apurtell: rev d443c7096fced912b8d0cc5c63f9f013762e6122)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


 Limit result array preallocation to avoid OOME with large scan caching values
 -

 Key: HBASE-13269
 URL: https://issues.apache.org/jira/browse/HBASE-13269
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 1.0.1, 0.98.12

 Attachments: HBASE-13269-0.98.patch, HBASE-13269-0.98.patch, 
 HBASE-13269-1.0.patch, HBASE-13269-1.0.patch


 Scan#setCaching(Integer.MAX_VALUE) will likely terminate the regionserver 
 with an OOME due to preallocation of the result array according to this 
 parameter.  We should limit the preallocation to some sane value. Definitely 
 affects 0.98 (fix needed to HRegionServer) and 1.0.x (fix needed to 
 RsRPCServices), not sure about later versions. 
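The fix shape is a one-line clamp on the preallocation size. A standalone sketch, where MAX_PREALLOC is a hypothetical constant rather than an actual HBase config value:

```java
// Cap the result-array preallocation: sizing an ArrayList directly from a
// client-supplied caching value (possibly Integer.MAX_VALUE) would attempt
// a huge allocation and likely OOME the regionserver; clamp it instead.
import java.util.ArrayList;
import java.util.List;

public class ScanAlloc {
    static final int MAX_PREALLOC = 1000; // hypothetical sane cap

    public static <T> List<T> preallocate(int caching) {
        int capacity = Math.max(0, Math.min(caching, MAX_PREALLOC));
        return new ArrayList<>(capacity);
    }

    public static void main(String[] args) {
        // Safe even for pathological caching values; the list grows on
        // demand past the initial capacity if needed.
        List<Object> results = preallocate(Integer.MAX_VALUE);
        System.out.println(results.size()); // 0
    }
}
```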



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13279) Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13279:
---
Status: Patch Available  (was: Open)

 Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM
 --

 Key: HBASE-13279
 URL: https://issues.apache.org/jira/browse/HBASE-13279
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0

 Attachments: 
 0001-Add-src-main-asciidoc-asciidoctor.css-to-RAT-exclusi.patch


 After copying back the latest doc updates from trunk to 0.98 branch for a 
 release, the release audit failed due to src/main/asciidoc/asciidoctor.css, 
 which is MIT licensed but only by reference. Exclude it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13279) Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13279:
---
Attachment: 0001-Add-src-main-asciidoc-asciidoctor.css-to-RAT-exclusi.patch

 Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM
 --

 Key: HBASE-13279
 URL: https://issues.apache.org/jira/browse/HBASE-13279
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0

 Attachments: 
 0001-Add-src-main-asciidoc-asciidoctor.css-to-RAT-exclusi.patch


 After copying back the latest doc updates from trunk to 0.98 branch for a 
 release, the release audit failed due to src/main/asciidoc/asciidoctor.css, 
 which is MIT licensed but only by reference. Exclude it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13279) Add src/main/asciidoc/asciidoctor.css to RAT exclusion list in POM

2015-03-18 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-13279:
--

 Summary: Add src/main/asciidoc/asciidoctor.css to RAT exclusion 
list in POM
 Key: HBASE-13279
 URL: https://issues.apache.org/jira/browse/HBASE-13279
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0
 Attachments: 
0001-Add-src-main-asciidoc-asciidoctor.css-to-RAT-exclusi.patch

After copying back the latest doc updates from trunk to 0.98 branch for a 
release, the release audit failed due to src/main/asciidoc/asciidoctor.css, 
which is MIT licensed but only by reference. Exclude it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368375#comment-14368375
 ] 

Jonathan Lawlor commented on HBASE-13262:
-

bq. The client ultimately requests the server return a batch of size 
'hbase.client.scanner.max.result.size' and then believe that the server 
returned less data than that limit.

Exactly correct. The client looks at the Results returned from the server and 
from its point of view it sees that neither the maxResultSize or caching limit 
has been reached. The only explanation it can come up with as to why the server 
would return these Results is that it must have exhausted the region (otherwise 
it has no reason to stop accumulating Results). But the server stopped because 
from its PoV the size limit was reached. There is a miscommunication.

bq. I still don't completely understand what is causing the difference on the 
server-side in the first place (over 0.98)

Ya, it's a little cryptic because the exact same function is used to calculate 
the size server side and client side. I would recommend adding some logs that 
allow you to see the estimatedHeapSize of a cell server side versus client 
side and see where they differ. My guess would be that somehow the Cell on the 
client side returns a slightly lower heap size estimation than the SAME Cell on 
the server (I don't believe it's related to the NextState size bubbling up 
since NextState is only in branch-1+ and the issue is branch-1.0+). Maybe the 
Cells/Results are serialized in such a way that these calculations are slightly 
different? Somehow the server's size calculation is larger than the client's 
size calculation.

However, even when we do understand why the server's size calculation is 
different from the client's it may not help (of course we can only know once 
the issue has been identified). Like you said, the underlying problem is that 
the client shouldn't even be performing a size calculation but rather being 
told by the server why the Results were returned. As long as there is a 
possibility for the server and client to disagree on why the Results were 
returned, it is possible to incorrectly jump between regions. Fixing the size 
calculation may be sufficient for resolving this issue, but going forward I 
think your idea of passing information back to the client in the ScanResult 
will be the best way to go.

bq. Ultimately, the underlying problem is likely best addressed from the stance 
that a scanner shouldn't be performing special logic based on the size of the 
batch of data returned from a server

Agreed

bq. The server already maintains a nice enum of the reason which it returns a 
batch of results to a client via NextState$State

Just a note: NextState was introduced with HBASE-11544 which has only been 
backported to branch-1+ at this point. Since this issue appears in branch-1.0+, 
returning the NextState$State enum would require backporting that feature 
further. 

bq. I'm currently of the opinion that it's ideal to pass this information back 
to the client via the ScanResult 

I agree that somehow we need to communicate the reasoning behind why these 
Results were returned to the client rather than looking at the Result[] and 
making an educated guess

bq. 0.98 clients running against 1.x could see this problem, although I have 
not tested that to confirm it happens.

I suspect you're correct
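The "tell the client why" idea discussed above can be sketched as an explicit reason carried on the scan response, so the client stops inferring region exhaustion from batch size. The enum values are illustrative, loosely modeled on the NextState$State idea from HBASE-11544:

```java
// The client advances to the next region only on an explicit
// "region exhausted" signal; a size- or caching-limited batch means
// "ask the same region again", removing the client-side size guesswork.
public class ScanReasonSketch {

    public enum MoreResults {
        REGION_EXHAUSTED,
        SIZE_LIMIT_REACHED,
        CACHING_LIMIT_REACHED
    }

    public static boolean shouldMoveToNextRegion(MoreResults reason) {
        return reason == MoreResults.REGION_EXHAUSTED;
    }
}
```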


 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368396#comment-14368396
 ] 

Mikhail Antonov commented on HBASE-13273:
-

bq. 0 failures (±0) , 24 skipped (+17)

I guess not related.

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
Assignee: Mikhail Antonov
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-13273.patch, HBASE-13273.patch


 Again from [~larsgeorge]
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13269) Limit result array preallocation to avoid OOME with large scan caching values

2015-03-18 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368100#comment-14368100
 ] 

Lars Hofhansl commented on HBASE-13269:
---

+1

 Limit result array preallocation to avoid OOME with large scan caching values
 -

 Key: HBASE-13269
 URL: https://issues.apache.org/jira/browse/HBASE-13269
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 1.0.1, 0.98.12

 Attachments: HBASE-13269-0.98.patch, HBASE-13269-0.98.patch, 
 HBASE-13269-1.0.patch, HBASE-13269-1.0.patch


 Scan#setCaching(Integer.MAX_VALUE) will likely terminate the regionserver 
 with an OOME due to preallocation of the result array according to this 
 parameter.  We should limit the preallocation to some sane value. Definitely 
 affects 0.98 (fix needed in HRegionServer) and 1.0.x (fix needed in 
 RsRPCServices); not sure about later versions. 
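A minimal sketch of the fix described above; the constant name and bound are illustrative assumptions, not the actual patch:

```java
// Cap the result-array preallocation so a client-supplied caching value such
// as Integer.MAX_VALUE cannot trigger a massive allocation on the server.
class ResultAllocation {
  // Hypothetical bound; the real patch picks its own sane value.
  static final int MAX_PREALLOCATED_RESULTS = 100;

  static int initialCapacity(int requestedCaching) {
    return Math.min(requestedCaching, MAX_PREALLOCATED_RESULTS);
  }
}
```

The array can still grow as results arrive; only the up-front allocation is bounded.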





[jira] [Commented] (HBASE-13271) Table#puts(List<Put>) operation is indeterminate; remove!

2015-03-18 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368158#comment-14368158
 ] 

Mikhail Antonov commented on HBASE-13271:
-

bq. That leaves the possibility of some puts being left over in the 
bufferedMutator's buffer, and the user will have no way of knowing.

Wondering if BufferedMutator should have a method to retrieve the number of 
mutations in the buffer (writeAsyncBuffer.size() in the current impl)?
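The idea can be sketched with a toy buffer; class and method names here are hypothetical, not the BufferedMutator API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for a write buffer that exposes how many mutations are still
// pending, the capability the comment above asks about.
class CountingBuffer<M> {
  private final List<M> buffer = new ArrayList<>();

  void mutate(M mutation) { buffer.add(mutation); }

  // Analogous to writeAsyncBuffer.size() in the current implementation.
  int bufferedCount() { return buffer.size(); }

  // Drain the buffer, returning what would be sent to the server.
  List<M> flush() {
    List<M> out = new ArrayList<>(buffer);
    buffer.clear();
    return out;
  }
}
```

Exposing the pending count would at least let callers detect leftover puts after a partial failure.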

 Table#puts(List<Put>) operation is indeterminate; remove!
 -

 Key: HBASE-13271
 URL: https://issues.apache.org/jira/browse/HBASE-13271
 Project: HBase
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.0
Reporter: stack

 Another API issue found by [~larsgeorge]:
 Table.put(List<Put>) is questionable after the API change.
 {code}
 [Mar-17 9:21 AM] Lars George: Table.put(List<Put>) is weird since you cannot 
 flush partial lists
 [Mar-17 9:21 AM] Lars George: Say out of 5 the third is broken, then the 
 put() call returns with a local exception (say empty Put) and then you have 2 
 that are in the buffer
 [Mar-17 9:21 AM] Lars George: but how to you force commit them?
 [Mar-17 9:22 AM] Lars George: In the past you would call flushCache(), but 
 that is gone now
 [Mar-17 9:22 AM] Lars George: and flush() is not available on a Table
 [Mar-17 9:22 AM] Lars George: And you cannot access the underlying 
 BufferedMutation neither
 [Mar-17 9:23 AM] Lars George: You can *only* add more Puts if you can, or 
 call close()
 [Mar-17 9:23 AM] Lars George: that is just weird to explain
 {code}
 So, Table needs to get flush back or we deprecate this method or it flushes 
 immediately and does not return until complete in the implementation.





[jira] [Commented] (HBASE-13235) Revisit the security auditing semantics.

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368198#comment-14368198
 ] 

Hadoop QA commented on HBASE-13235:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705448/HBASE-13235_v4.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705448

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13301//console

This message is automatically generated.

 Revisit the security auditing semantics.
 

 Key: HBASE-13235
 URL: https://issues.apache.org/jira/browse/HBASE-13235
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
 Attachments: HBASE-13235.patch, HBASE-13235_v2.patch, 
 HBASE-13235_v2.patch, HBASE-13235_v3.patch, HBASE-13235_v4.patch


 More specifically, the following things need a closer look. (Will include 
 more based on feedback and/or suggestions)
 * The table name (say test) being used instead of the fully qualified table 
 name (default:test).
 * Right now, the scope is treated much like the operation's arguments. It would 
 be better to decouple the arguments of the operation from the scope involved in 
 the check. E.g., for createTable, we have the following audit log
 {code}
 Access denied for user esteban; reason: Insufficient permissions; remote 
 address: /10.20.30.1; request: createTable; context: (user=srikanth@XXX, 
 scope=default, action=CREATE)
 {code}
 The 

[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2015-03-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368199#comment-14368199
 ] 

Hudson commented on HBASE-11195:


SUCCESS: Integrated in HBase-0.94 #1466 (See 
[https://builds.apache.org/job/HBase-0.94/1466/])
HBASE-11195 Addendum for TestHeapSize. (larsh: rev 
260f2137bdb8b4ae839f5cc285509f34e31a006b)
* src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0, 0.94.26, 0.98.10
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27

 Attachments: HBASE-11195-0.94.patch, HBASE-11195-0.98.patch, 
 HBASE-11195-0.98.v1.patch, HBASE-11195.patch, HBASE-11195.patch


 This might be a specific use case.  But we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old, they haven't been written to in a while.  We still use these 
 regions to read from so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.
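If the option were adopted, the hbase-site.xml entry might look like this; the value 0.7 is purely illustrative:

```xml
<!-- Hypothetical entry; the property name comes from the proposal above. -->
<property>
  <name>hbase.hstore.min.locality.to.skip.major.compact</name>
  <!-- Skip major compaction of old single-storefile regions whose
       block locality is already at or above this ratio. -->
  <value>0.7</value>
</property>
```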





[jira] [Commented] (HBASE-13275) Setting hbase.security.authorization to false does not disable authorization when AccessController is in the coprocessor class list

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368238#comment-14368238
 ] 

Andrew Purtell commented on HBASE-13275:


We'll need a companion change in the VisibilityController too.

The presence or absence of the coprocessors in the system or table coprocessor 
list has been serving as the authorization toggle.

I suppose an argument against any fix beyond documentation is that there is no 
utility in having the coprocessors installed but inactive. 

 Setting hbase.security.authorization to false does not disable authorization 
 when AccessController is in the coprocessor class list
 ---

 Key: HBASE-13275
 URL: https://issues.apache.org/jira/browse/HBASE-13275
 Project: HBase
  Issue Type: Bug
Reporter: William Watson
Assignee: Andrew Purtell

 According to the docs provided by Cloudera (we're not running Cloudera, BTW), 
 this is the list of configs to enable authorization in HBase:
 {code}
 <property>
   <name>hbase.security.authorization</name>
   <value>true</value>
 </property>
 <property>
   <name>hbase.coprocessor.master.classes</name>
   <value>org.apache.hadoop.hbase.security.access.AccessController</value>
 </property>
 <property>
   <name>hbase.coprocessor.region.classes</name>
   <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
 </property>
 {code}
 We then wanted to disable authorization, but simply setting 
 hbase.security.authorization to false did not disable it.





[jira] [Updated] (HBASE-13200) Improper configuration can lead to endless lease recovery during failover

2015-03-18 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-13200:

Fix Version/s: 2.0.0

 Improper configuration can lead to endless lease recovery during failover
 --

 Key: HBASE-13200
 URL: https://issues.apache.org/jira/browse/HBASE-13200
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Reporter: He Liangliang
Assignee: He Liangliang
 Fix For: 2.0.0

 Attachments: HBASE-13200.patch


 When a node (DN+RS) suffers a machine- or OS-level failure, another RS will try 
 to recover the lease for the log file. It retries every 
 hbase.lease.recovery.dfs.timeout (default 61s) from the second attempt onward. When 
 the HDFS configuration is not properly tuned (e.g. socket connection 
 timeout) and patch HDFS-4721 is absent, the lease recovery time can exceed 
 the timeout specified by hbase.lease.recovery.dfs.timeout. This leads to 
 endless retries and preemptions until the final timeout.
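For reference, the timeout discussed above is set in hbase-site.xml; 61000 ms matches the 61s default cited in the description:

```xml
<property>
  <name>hbase.lease.recovery.dfs.timeout</name>
  <value>61000</value> <!-- milliseconds between lease recovery attempts -->
</property>
```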





[jira] [Updated] (HBASE-13216) Add version info in RPC connection header

2015-03-18 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-13216:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Add version info in RPC connection header
 -

 Key: HBASE-13216
 URL: https://issues.apache.org/jira/browse/HBASE-13216
 Project: HBase
  Issue Type: Improvement
  Components: Client, rpc
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-13216-v1.diff, HBASE-13216-v2.diff, 
 HBASE-13216-v3.diff, HBASE-13216-v4.diff


 In operating a cluster, we usually want to know which clients are 
 using an HBase client version with critical bugs, or one too old for us to 
 support in the future.
 By adding version info to the RPC connection header, we can get this 
 information from the audit log and prompt those clients to upgrade before a deadline.
 Discussions and suggestions are welcome. Thanks.





[jira] [Updated] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-13273:

Attachment: HBASE-13273.patch

patch (testing on object identity in fact, as Result doesn't implement 
equals(), which is what we need in this case)

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12

 Attachments: HBASE-13273.patch


 Again from [~larsgeorge]
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.





[jira] [Assigned] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov reassigned HBASE-13273:
---

Assignee: Mikhail Antonov

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
Assignee: Mikhail Antonov
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12

 Attachments: HBASE-13273.patch, HBASE-13273.patch


 Again from [~larsgeorge]
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.





[jira] [Updated] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-13273:

Status: Patch Available  (was: Open)

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.0
Reporter: stack
Assignee: Mikhail Antonov
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12

 Attachments: HBASE-13273.patch, HBASE-13273.patch


 Again from [~larsgeorge]
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.





[jira] [Updated] (HBASE-13270) Setter for Result#getStats is #addResults; confusing!

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-13270:

Attachment: HBASE-13270.patch

trivial patch

 Setter for Result#getStats is #addResults; confusing!
 -

 Key: HBASE-13270
 URL: https://issues.apache.org/jira/browse/HBASE-13270
 Project: HBase
  Issue Type: Improvement
Reporter: stack
  Labels: beginner
 Attachments: HBASE-13270.patch


 Below is our [~larsgeorge] on a finding he made reviewing our API:
 Result class having getStats() and addResults(Stats) makes little sense...
 ...the naming is just weird. You have a getStats() getter and an 
 addResults(Stats) setter???
 ...Especially in the Result class and addResult() is plain misleading...
 This issue is about deprecating addResults and replacing it with addStats in 
 its place.
 The getStats/addResult is recent. It came in with:
 {code}
 commit a411227b0ebf78b4ee8ae7179e162b54734e77de
 Author: Jesse Yates jesse.k.ya...@gmail.com
 Date:   Tue Oct 28 16:14:16 2014 -0700
 HBASE-5162 Basic client pushback mechanism
 ...
 {code}
 RegionLoadStats don't belong in Result if you ask me but better in the 
 enveloping on invocations... but that is another issue.





[jira] [Updated] (HBASE-13035) [0.98] Backport HBASE-12867 - Shell does not support custom replication endpoint specification

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13035:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 [0.98] Backport HBASE-12867 - Shell does not support custom replication 
 endpoint specification
 --

 Key: HBASE-13035
 URL: https://issues.apache.org/jira/browse/HBASE-13035
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
 Fix For: 1.0.1, 0.98.13








[jira] [Issue Comment Deleted] (HBASE-12816) GC logs are lost upon Region Server restart if GCLogFileRotation is enabled

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12816:
---
Comment: was deleted

(was: Moving to 0.98.11)

 GC logs are lost upon Region Server restart if GCLogFileRotation is enabled
 ---

 Key: HBASE-12816
 URL: https://issues.apache.org/jira/browse/HBASE-12816
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-12816.patch


 When -XX:+UseGCLogFileRotation is used, gc log files end with .gc.0 instead of 
 .gc. hbase_rotate_log() in hbase-daemon.sh does not handle this correctly, 
 and hence when a RS is restarted the old gc logs are lost (overwritten).
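A simplified sketch of the kind of handling hbase_rotate_log needs; this is illustrative, not the actual hbase-daemon.sh code (which rotates by numeric suffix rather than timestamp):

```shell
#!/bin/sh
# Rotate both the plain ".gc" log and the ".gc.0" variant produced when
# -XX:+UseGCLogFileRotation is enabled, so neither is overwritten on restart.
rotate_gc_log() {
  prefix="$1"                    # e.g. logs/myserver.gc
  for f in "$prefix" "$prefix".0; do
    if [ -e "$f" ]; then
      mv "$f" "$f.$(date +%s)"   # timestamp suffix for illustration only
    fi
  done
}
```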





[jira] [Updated] (HBASE-12891) Parallel execution for Hbck checkRegionConsistency

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12891:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Parallel execution for Hbck checkRegionConsistency
 --

 Key: HBASE-12891
 URL: https://issues.apache.org/jira/browse/HBASE-12891
 Project: HBase
  Issue Type: Improvement
  Components: hbck
Affects Versions: 2.0.0, 0.98.10, 1.1.0
Reporter: churro morales
Assignee: churro morales
  Labels: performance, scalability
 Fix For: 2.0.0, 1.1.0, 0.98.13

 Attachments: HBASE-12891-v1.patch, HBASE-12891.98.patch, 
 HBASE-12891.patch, HBASE-12891.patch, hbase-12891-addendum1.patch


 We have a lot of regions on our cluster (~500k) and noticed that hbck took 
 quite some time in checkAndFixConsistency().  [~davelatham] patched our 
 cluster to do this check in parallel to speed things up.  I'll attach the 
 patch.





[jira] [Updated] (HBASE-12816) GC logs are lost upon Region Server restart if GCLogFileRotation is enabled

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12816:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 GC logs are lost upon Region Server restart if GCLogFileRotation is enabled
 ---

 Key: HBASE-12816
 URL: https://issues.apache.org/jira/browse/HBASE-12816
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-12816.patch


 When -XX:+UseGCLogFileRotation is used, gc log files end with .gc.0 instead of 
 .gc. hbase_rotate_log() in hbase-daemon.sh does not handle this correctly, 
 and hence when a RS is restarted the old gc logs are lost (overwritten).





[jira] [Commented] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368251#comment-14368251
 ] 

Josh Elser commented on HBASE-13262:


Ok, been a while since I posted some progress, here's my current understanding 
of things and hopefully an easier to grok statement of the problem:

When clients request a batch of rows larger than the server is 
configured to return (often, when the client
does not explicitly set a limit on the results to be returned from the server), 
the client will incorrectly treat this
as a signal that all data in the current region has been exhausted. This goes back to what 
[~jonathan.lawlor] pointed out about
clients and servers needing to stay in sync WRT the size of a batch of 
{{Result}}s. The client ultimately requests that the
server return a batch of size 'hbase.client.scanner.max.result.size' and then 
believes that the server returned less data
than that limit.

A client-side workaround is to reduce the number of rows 
requested on the {{Scan}} via
{{Scan#setCaching(int)}}. Setting this value sufficiently low (for my 
test code, anything less than 1000 seems
to do the trick) causes the server to flush the results back to the client 
before the server gets close
to the size limit that would cause the client to do the wrong thing.

I still don't completely understand what is causing the difference on the 
server-side in the first place (over 0.98). I
need to dig more there to understand things. I'm not sure if I'm just missing 
somewhere that
{{CellUtil#estimatedHeapSizeOf(Cell)}} isn't being used, or if some size is 
bubbling up through the {{NextState}} via
the {{KeyValueHeap}} (and thus MemStores or StoreFiles), or something entirely 
different.

Ultimately, the underlying problem is likely best addressed from the stance 
that a scanner shouldn't be performing
special logic based on the size of the batch of data returned from a server. In 
other words, the
client should not be making logic decisions based solely on the size or length 
of the {{Result[]}} it receives.

The server already maintains a nice enum of the reason it returns a batch 
of results to a client via
{{NextState$State}}. The server has the answer to our question when it returns a 
batch: is this batch being returned due
to a limitation on its size (either length or bytes)?

I'm currently of the opinion that it's ideal to pass this information back to 
the client via the {{ScanResult}}.
Ignoring wire-version issues for the moment, this means that clients would rely 
on this new enum to determine when
there is more data to read from a Region and when a Region is exhausted 
(instead of the size and length checks of the
{{Result[]}}).

This approach wouldn't break 0.98 clients against 1.x; however, it also 
wouldn't address the underlying problem of the
client guessing at what to do based on the characteristics of the {{Result[]}} 
when it is unaware of the existence of
this new field in the protobuf. Given my understanding of the problem, 0.98 
clients running against 1.x *could* see
this problem, although I have not tested that to confirm it happens.

Obviously, I need to do some more digging as to where the mismatch in size is 
coming from (unless I missed it from
Jonathan earlier on) before I get a patch. Thoughts/comments welcome meanwhile.
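The proposal above can be illustrated with a small sketch; the enum and method names are hypothetical stand-ins, not the actual NextState$State values or the protobuf field:

```java
// The server tags each returned batch with *why* it stopped, so the client no
// longer infers region exhaustion from the size or length of Result[] alone.
class ScanProtocolSketch {
  enum BatchReason { SIZE_LIMIT_REACHED, BATCH_LIMIT_REACHED, REGION_EXHAUSTED }

  // Client-side decision: only an explicit REGION_EXHAUSTED reply ends the
  // region; a size- or batch-limited reply means "keep fetching".
  static boolean moreDataInRegion(BatchReason reason) {
    return reason != BatchReason.REGION_EXHAUSTED;
  }
}
```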

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.





[jira] [Commented] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368268#comment-14368268
 ] 

Andrew Purtell commented on HBASE-13262:


bq. This approach wouldn't break 0.98 clients against 1.x; however, it also 
wouldn't address the underlying problem of the client guessing at what to do 
based on the characteristics of the {{Result[]}} when it is unaware of the 
existence of this new field in the protobuf. Given my understanding of the 
problem, 0.98 clients running against 1.x *could* see this problem, although I 
have not tested that to confirm it happens.

Wire compatibility and a default configuration in 1.0.x that mitigates the 
problem until a rolling upgrade is completed could be good enough. Additional 
comment reserved until you come back with results from more digging. 

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.





[jira] [Commented] (HBASE-13276) Fix incorrect condition for minimum block locality in 0.98

2015-03-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368266#comment-14368266
 ] 

Hudson commented on HBASE-13276:


SUCCESS: Integrated in HBase-0.98 #907 (See 
[https://builds.apache.org/job/HBase-0.98/907/])
HBASE-13276 Fix incorrect condition for minimum block locality in 0.98. (churro 
morales and larsh) (larsh: rev dfb015d68288d090682308ffcc61badd8a821bb7)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/RatioBasedCompactionPolicy.java


 Fix incorrect condition for minimum block locality in 0.98
 --

 Key: HBASE-13276
 URL: https://issues.apache.org/jira/browse/HBASE-13276
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Critical
 Fix For: 0.98.12

 Attachments: HBASE-11195-0.98.v1.patch


 0.98 only. Parent somehow was incorrect. One-liner to fix it.
 But it's critical as we perform potentially _way_ more compactions now.





[jira] [Updated] (HBASE-13114) [UNITTEST] TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta

2015-03-18 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-13114:
--
Attachment: 0001-UNITTEST-TestEnableTableHandler.testDeleteForSureCle.patch

 [UNITTEST] TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta
 -

 Key: HBASE-13114
 URL: https://issues.apache.org/jira/browse/HBASE-13114
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
 Attachments: 
 0001-UNITTEST-TestEnableTableHandler.testDeleteForSureCle.patch, 13114.txt


 I've seen this fail a few times. It just happened now on internal rig.  
 Looking into it
 {code}
 REGRESSION:  
 org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta
 Error Message:
 expected:0 but was:1
 Stack Trace:
 java.lang.AssertionError: expected:0 but was:1
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at org.junit.Assert.assertEquals(Assert.java:542)
 at 
 org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta(TestEnableTableHandler.java:151)
 {code}





[jira] [Updated] (HBASE-13114) [UNITTEST] TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta

2015-03-18 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-13114:
--
Status: Patch Available  (was: Open)

 [UNITTEST] TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta
 -

 Key: HBASE-13114
 URL: https://issues.apache.org/jira/browse/HBASE-13114
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
 Attachments: 
 0001-UNITTEST-TestEnableTableHandler.testDeleteForSureCle.patch, 13114.txt


 I've seen this fail a few times. It just happened now on internal rig.  
 Looking into it
 {code}
 REGRESSION:  
 org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta
 Error Message:
 expected:<0> but was:<1>
 Stack Trace:
 java.lang.AssertionError: expected:<0> but was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at org.junit.Assert.assertEquals(Assert.java:542)
 at 
 org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testDeleteForSureClearsAllTableRowsFromMeta(TestEnableTableHandler.java:151)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368346#comment-14368346
 ] 

Hadoop QA commented on HBASE-13273:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12705471/HBASE-13273.patch
  against master branch at commit f9a17edc252a88c5a1a2c7764e3f9f65623e0ced.
  ATTACHMENT ID: 12705471

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13303//console

This message is automatically generated.

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
Assignee: Mikhail Antonov
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13

 Attachments: HBASE-13273.patch, HBASE-13273.patch


 Again from [~larsgeorge]
 {code}
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 {code}
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.
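 The hazard is easy to reproduce outside HBase. Below is a minimal standalone
 sketch ({{SimpleResult}} and its methods are hypothetical stand-ins, not
 HBase's actual {{Result}} API) showing how a shared static instance can be
 silently corrupted, and how a read-only flag checked in every mutator closes
 the hole:

```java
import java.util.ArrayList;
import java.util.List;

class SimpleResult {
    // Hypothetical stand-in for HBase's Result; not the real API.
    private final boolean readOnly;
    private List<String> cells = new ArrayList<>();

    SimpleResult(boolean readOnly) { this.readOnly = readOnly; }

    // Shared sentinel: must never acquire cells.
    static final SimpleResult EMPTY_RESULT = new SimpleResult(true);

    void copyFrom(SimpleResult other) {
        if (readOnly) {
            throw new UnsupportedOperationException("EMPTY_RESULT is read-only");
        }
        this.cells = new ArrayList<>(other.cells);
    }

    void add(String cell) {
        if (readOnly) {
            throw new UnsupportedOperationException("EMPTY_RESULT is read-only");
        }
        cells.add(cell);
    }

    int size() { return cells.size(); }
}

public class EmptyResultDemo {
    public static void main(String[] args) {
        SimpleResult result1 = new SimpleResult(false);
        result1.add("row1/cf:q1=v1");

        // Without the readOnly guard, this call would leave the shared
        // EMPTY_RESULT holding result1's cell for every later caller.
        try {
            SimpleResult.EMPTY_RESULT.copyFrom(result1);
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println("EMPTY_RESULT size = " + SimpleResult.EMPTY_RESULT.size());
        // prints: rejected: EMPTY_RESULT is read-only
        //         EMPTY_RESULT size = 0
    }
}
```

 The same effect could be had with an unmodifiable backing list; the explicit
 flag just makes the failure loud at the mutation site.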



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13271) Table#puts(List<Put>) operation is indeterminate; remove!

2015-03-18 Thread Solomon Duskis (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368044#comment-14368044
 ] 

Solomon Duskis commented on HBASE-13271:


{quote}
But it seems that Lars' issue is with autoflush=false?
{quote}

I certainly don't want to put words in Lars' mouth, but I think that the case 
at hand is the default autoflush=true.  In the default case, if there's an 
exception in bufferedMutator.put(puts), then the bufferedMutator.flush() method 
is never invoked.  That leaves the possibility of some puts being left over in 
the bufferedMutator's buffer, and the user will have no way of knowing.  After 
that initial exception, there's no good way to clear the buffer.  If one calls 
Table.put(put) after that initial put(puts) failure, there still might be 
remnants of the previous call.  Those might cause additional exceptions 
unrelated to the current put(put) operation.  

I probably should add a test case for this scenario...
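The failure mode can be sketched in miniature. The {{TinyBufferedMutator}} below is a hypothetical stand-in, not HBase's actual {{BufferedMutator}} API; it just shows how an exception partway through put(puts) strands the earlier puts in the buffer, where a later, unrelated write picks them up:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature of a client-side write buffer;
// not HBase's BufferedMutator API.
class TinyBufferedMutator {
    final List<String> buffer = new ArrayList<>();

    // Validates each put before buffering; throws on the first bad one,
    // leaving the already-validated puts stranded in the buffer.
    void put(List<String> puts) {
        for (String p : puts) {
            if (p.isEmpty()) {
                throw new IllegalArgumentException("empty Put");
            }
            buffer.add(p);
        }
    }

    List<String> flush() {
        List<String> sent = new ArrayList<>(buffer);
        buffer.clear();
        return sent;
    }
}

public class LeftoverBufferDemo {
    public static void main(String[] args) {
        TinyBufferedMutator mutator = new TinyBufferedMutator();
        try {
            // The third put is broken; the first two are already buffered
            // when the exception is raised, and flush() never runs.
            mutator.put(List.of("put-1", "put-2", "", "put-4", "put-5"));
        } catch (IllegalArgumentException e) {
            System.out.println("put(puts) failed: " + e.getMessage());
        }
        // A later, unrelated write now carries the stale remnants with it.
        mutator.put(List.of("put-6"));
        System.out.println("flushed: " + mutator.flush());
        // prints: put(puts) failed: empty Put
        //         flushed: [put-1, put-2, put-6]
    }
}
```

With no flush() or clear-buffer call exposed on the table-like object, the caller has no way to either commit or discard put-1 and put-2 deliberately, which is exactly the complaint in the chat transcript quoted below.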

 Table#puts(List<Put>) operation is indeterminate; remove!
 -

 Key: HBASE-13271
 URL: https://issues.apache.org/jira/browse/HBASE-13271
 Project: HBase
  Issue Type: Improvement
  Components: API
Affects Versions: 1.0.0
Reporter: stack

 Another API issue found by [~larsgeorge]:
 Table.put(List<Put>) is questionable after the API change.
 {code}
 [Mar-17 9:21 AM] Lars George: Table.put(List<Put>) is weird since you cannot 
 flush partial lists
 [Mar-17 9:21 AM] Lars George: Say out of 5 the third is broken, then the 
 put() call returns with a local exception (say empty Put) and then you have 2 
 that are in the buffer
 [Mar-17 9:21 AM] Lars George: but how do you force commit them?
 [Mar-17 9:22 AM] Lars George: In the past you would call flushCache(), but 
 that is gone now
 [Mar-17 9:22 AM] Lars George: and flush() is not available on a Table
 [Mar-17 9:22 AM] Lars George: And you cannot access the underlying 
 BufferedMutator either
 [Mar-17 9:23 AM] Lars George: You can *only* add more Puts if you can, or 
 call close()
 [Mar-17 9:23 AM] Lars George: that is just weird to explain
 {code}
 So, either Table needs to get flush back, or we deprecate this method, or the 
 implementation flushes immediately and does not return until complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13262) ResultScanner doesn't return all rows in Scan

2015-03-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368073#comment-14368073
 ] 

Josh Elser commented on HBASE-13262:


The concern is definitely noted, but thank you for being clear.

 ResultScanner doesn't return all rows in Scan
 -

 Key: HBASE-13262
 URL: https://issues.apache.org/jira/browse/HBASE-13262
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0, 1.1.0
 Environment: Single node, pseudo-distributed 1.1.0-SNAPSHOT
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0, 1.1.0

 Attachments: testrun_0.98.txt, testrun_branch1.0.txt


 Tried to write a simple Java client against 1.1.0-SNAPSHOT.
 * Write 1M rows, each row with 1 family, and 10 qualifiers (values [0-9]), 
 for a total of 10M cells written
 * Read back the data from the table, ensure I saw 10M cells
 Running it against {{04ac1891}} (and earlier) yesterday, I would get ~20% of 
 the actual rows. Running against 1.0.0, returns all 10M records as expected.
 [Code I was 
 running|https://github.com/joshelser/hbase-hwhat/blob/master/src/main/java/hbase/HBaseTest.java]
  for the curious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13273) Make Result.EMPTY_RESULT read-only; currently it can be modified

2015-03-18 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-13273:

Attachment: HBASE-13273.patch

added trivial test case

 Make Result.EMPTY_RESULT read-only; currently it can be modified
 

 Key: HBASE-13273
 URL: https://issues.apache.org/jira/browse/HBASE-13273
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 1.0.0
Reporter: stack
  Labels: beginner
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.12

 Attachments: HBASE-13273.patch, HBASE-13273.patch


 Again from [~larsgeorge]
 {code}
 Result result2 = Result.EMPTY_RESULT;
 System.out.println(result2);
 result2.copyFrom(result1);
 System.out.println(result2);
 {code}
 What do you think happens when result1 has cells? Yep, you just modified the 
 shared public EMPTY_RESULT to be not empty anymore.
 Fix. Result should be non-modifiable post-construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12945) Port: New master API to track major compaction completion to 0.98

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12945:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Port: New master API to track major compaction completion to 0.98
 -

 Key: HBASE-12945
 URL: https://issues.apache.org/jira/browse/HBASE-12945
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
 Fix For: 0.98.13






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12938) Upgrade HTrace to a recent supportable incubating version

2015-03-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12938:
---
Fix Version/s: (was: 0.98.12)
   0.98.13

 Upgrade HTrace to a recent supportable incubating version
 -

 Key: HBASE-12938
 URL: https://issues.apache.org/jira/browse/HBASE-12938
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.13


 In 0.98 we have an old htrace (still using the org.cloudera.htrace package), 
 and since the introduction of htrace code, htrace itself first moved to 
 org.htrace and then became an incubating project. I filed this as a bug because 
 the HTrace version we reference in 0.98 is of little to no use going forward. 
 Unfortunately we must make a disruptive change, although it looks to be 
 mostly fixing up imports; we expose no HTrace classes to HBase configuration, 
 and where we extend HTrace classes in our code, those HBase classes are in 
 hbase-server and not tagged for public consumption.   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

