[jira] [Commented] (HBASE-15487) Deletions done via BulkDeleteEndpoint make past data re-appear

2016-03-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203765#comment-15203765
 ] 

ramkrishna.s.vasudevan commented on HBASE-15487:


Correct, as [~anoop.hbase] says.
As [~herberts] says, the issue is the difference between how Delete handles versions and
how Scans handle versions. Compaction would have rewritten only one version of
the data, i.e. 0x1 here, and if the delete happens after that, only 0x1 gets deleted.
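
One way to see the state being described (both 0x0 and 0x1, plus the version
delete marker, still physically present until a flush and compaction) is a raw
scan. A minimal sketch against the standard 1.x client API; the class and
method names here are made up, and it assumes an already-opened Table handle
for 't':
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class RawScanProbe {
  /** Print every cell still physically present, including shadowed versions
   *  and delete markers, for the table handle passed in. */
  static void dumpRaw(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setRaw(true);        // include delete markers and masked versions
    scan.setMaxVersions();    // do not collapse to the newest version only
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        System.out.println(r);
      }
    }
  }
}
{code}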

> Deletions done via BulkDeleteEndpoint make past data re-appear
> --
>
> Key: HBASE-15487
> URL: https://issues.apache.org/jira/browse/HBASE-15487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Mathias Herberts
> Attachments: HBaseTest.java, HBaseTest.java
>
>
> The Warp10 (www.warp10.io) time series database uses HBase as its underlying 
> data store. The deletion of ranges of cells is performed using the 
> BulkDeleteEndpoint.
> In the following scenario the deletion does not appear to be working properly:
> The table 't' is created with a single version using:
> create 't', {NAME => 'v', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS=> '1', MIN_VERSIONS => '0', TTL => 
> '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY 
> =>'false', BLOCKCACHE => 'true'}
> We write a cell at row '0x00', colfam 'v', colq '', value 0x0
> We write the same cell again with value 0x1
> A scan will return a single value 0x1
> We then perform a delete using the BulkDeleteEndpoint and a Scan with a 
> DeleteType of 'VERSION'
> The reported number of deleted versions is 1 (which is coherent given the 
> table was created with MAX_VERSIONS=1)
> The same scan as the one performed before the delete returns a single value 
> 0x0.
> This seems to happen when all operations are performed against the memstore.
> A regular delete will remove the cell and a later scan won't show it.
> I'll attach a test which demonstrates the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15487) Deletions done via BulkDeleteEndpoint make past data re-appear

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203756#comment-15203756
 ] 

Anoop Sam John commented on HBASE-15487:


bq.builder.setDeleteType(DeleteType.VERSION);
Oh, I noticed it now.  Why is the delete type VERSION when you actually wanted to
delete one row:cf:qual value fully?  The VERSION type is meant for deleting one
particular version of a cell.

Yes, in principle it is correct to use VERSION here too, because the table has
max versions of 1, so when the new data was put it should ideally shadow the
old one.  So even if only the latest version is deleted, it should work
correctly.
But this is a known issue in HBase: when max versions is 1, the actual removal of
the old version's data only happens during a compaction.  So if a compaction had
happened before you applied this bulk delete op, you would have been seeing the
correct result you wanted.

For your case I think it can be easily fixed by removing this DeleteType.VERSION
setting.
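
For reference, a minimal sketch of the suggested change, adapted from the
BulkDeleteEndpoint example usage pattern in the hbase-examples module: pick a
delete type other than VERSION (COLUMN here, which removes all versions of the
selected cells) so the older 0x0 cannot resurface. The generated class names
and the BulkDeleteHelper/sendBulkDelete names are assumptions, not the
reporter's actual code:
{code}
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.coprocessor.example.generated.BulkDeleteProtos.BulkDeleteRequest;
import org.apache.hadoop.hbase.coprocessor.example.generated.BulkDeleteProtos.BulkDeleteRequest.DeleteType;
import org.apache.hadoop.hbase.coprocessor.example.generated.BulkDeleteProtos.BulkDeleteResponse;
import org.apache.hadoop.hbase.coprocessor.example.generated.BulkDeleteProtos.BulkDeleteService;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.ServerRpcController;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;

public class BulkDeleteHelper {
  /** Delete everything the scan selects, not just the newest version. */
  static long sendBulkDelete(Table table, final Scan scan, final int rowBatchSize)
      throws Throwable {
    Map<byte[], BulkDeleteResponse> responses = table.coprocessorService(
        BulkDeleteService.class, scan.getStartRow(), scan.getStopRow(),
        new Batch.Call<BulkDeleteService, BulkDeleteResponse>() {
          public BulkDeleteResponse call(BulkDeleteService service) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<BulkDeleteResponse> callback =
                new BlockingRpcCallback<BulkDeleteResponse>();
            BulkDeleteRequest.Builder builder = BulkDeleteRequest.newBuilder();
            builder.setScan(ProtobufUtil.toScan(scan));
            builder.setDeleteType(DeleteType.COLUMN); // not VERSION
            builder.setRowBatchSize(rowBatchSize);
            service.delete(controller, builder.build(), callback);
            return callback.get();
          }
        });
    long rowsDeleted = 0;
    for (BulkDeleteResponse response : responses.values()) {
      rowsDeleted += response.getRowsDeleted();
    }
    return rowsDeleted;
  }
}
{code}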

> Deletions done via BulkDeleteEndpoint make past data re-appear
> --
>
> Key: HBASE-15487
> URL: https://issues.apache.org/jira/browse/HBASE-15487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Mathias Herberts
> Attachments: HBaseTest.java, HBaseTest.java
>
>
> The Warp10 (www.warp10.io) time series database uses HBase as its underlying 
> data store. The deletion of ranges of cells is performed using the 
> BulkDeleteEndpoint.
> In the following scenario the deletion does not appear to be working properly:
> The table 't' is created with a single version using:
> create 't', {NAME => 'v', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS=> '1', MIN_VERSIONS => '0', TTL => 
> '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY 
> =>'false', BLOCKCACHE => 'true'}
> We write a cell at row '0x00', colfam 'v', colq '', value 0x0
> We write the same cell again with value 0x1
> A scan will return a single value 0x1
> We then perform a delete using the BulkDeleteEndpoint and a Scan with a 
> DeleteType of 'VERSION'
> The reported number of deleted versions is 1 (which is coherent given the 
> table was created with MAX_VERSIONS=1)
> The same scan as the one performed before the delete returns a single value 
> 0x0.
> This seems to happen when all operations are performed against the memstore.
> A regular delete will remove the cell and a later scan won't show it.
> I'll attach a test which demonstrates the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15494) Close obviated PRs on github

2016-03-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15494:

Status: Patch Available  (was: Open)

git won't let me use format-patch on an empty commit, so here's what the commit 
message looks like:

{code}
commit f4d7aac0e7e565cccbae0a820f4a065886298efd
Author: Sean Busbey 
Date:   Mon Mar 21 00:42:05 2016 -0500

HBASE-15494 Close obviated PRs on the GitHub mirror.

  - closes #1 HBASE-1015 obviated by HBASE-14850
  - closes #3 obviated by HBASE-15059
  - closes #17 obviated by HBASE-15223

{code}

I figure this is as close to something to be reviewed as I can get?

> Close obviated PRs on github
> 
>
> Key: HBASE-15494
> URL: https://issues.apache.org/jira/browse/HBASE-15494
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> We have several open PRs on the github mirror that have been obviated.
> Close them via an empty commit with an appropriate message.
> * [PR #1|https://github.com/apache/hbase/pull/1] - is for HBASE-1015, 
> obviated by HBASE-14850
> * [PR #3|https://github.com/apache/hbase/pull/3] - update 0.94 branch for 
> Hadoop 2.5.0 is obviated by HBASE-15059
> * [PR #17|https://github.com/apache/hbase/pull/17] - make TableMapReduceUtil 
> methods for scan to/from string conversion public is obviated by HBASE-15223



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15487) Deletions done via BulkDeleteEndpoint make past data re-appear

2016-03-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203752#comment-15203752
 ] 

ramkrishna.s.vasudevan commented on HBASE-15487:


I tried this out on trunk.
bq.A regular delete will remove the cell and a later scan won't show it.
This also shows the cell with value 0x0. The scans after the bulk delete and after
the regular delete behaved the same.
With 1.0.3 I doubt there should be a behaviour change. Maybe I am
missing something; will check more.

> Deletions done via BulkDeleteEndpoint make past data re-appear
> --
>
> Key: HBASE-15487
> URL: https://issues.apache.org/jira/browse/HBASE-15487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Mathias Herberts
> Attachments: HBaseTest.java, HBaseTest.java
>
>
> The Warp10 (www.warp10.io) time series database uses HBase as its underlying 
> data store. The deletion of ranges of cells is performed using the 
> BulkDeleteEndpoint.
> In the following scenario the deletion does not appear to be working properly:
> The table 't' is created with a single version using:
> create 't', {NAME => 'v', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS=> '1', MIN_VERSIONS => '0', TTL => 
> '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY 
> =>'false', BLOCKCACHE => 'true'}
> We write a cell at row '0x00', colfam 'v', colq '', value 0x0
> We write the same cell again with value 0x1
> A scan will return a single value 0x1
> We then perform a delete using the BulkDeleteEndpoint and a Scan with a 
> DeleteType of 'VERSION'
> The reported number of deleted versions is 1 (which is coherent given the 
> table was created with MAX_VERSIONS=1)
> The same scan as the one performed before the delete returns a single value 
> 0x0.
> This seems to happen when all operations are performed against the memstore.
> A regular delete will remove the cell and a later scan won't show it.
> I'll attach a test which demonstrates the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15487) Deletions done via BulkDeleteEndpoint make past data re-appear

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203750#comment-15203750
 ] 

Anoop Sam John commented on HBASE-15487:


{code}
if (minage > 0) {
  long maxts = System.currentTimeMillis() - minage + 1;
  scan.setTimeRange(0, maxts);
}
{code}
So what is the value of 'minage' in your test?  Can you try without
setting any time range on the Scan?

The difference I can see is that in the case of a normal delete, we just say delete a
row:column's data, so it deletes ALL the versions written so far. (Yes, the
max versions for the table is 1; still, there are 2 versions of the cell present in the data.)
In the case of the bulk delete, which cell(s) get deleted depends on
which cell(s) the scan returns.

> Deletions done via BulkDeleteEndpoint make past data re-appear
> --
>
> Key: HBASE-15487
> URL: https://issues.apache.org/jira/browse/HBASE-15487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Mathias Herberts
> Attachments: HBaseTest.java, HBaseTest.java
>
>
> The Warp10 (www.warp10.io) time series database uses HBase as its underlying 
> data store. The deletion of ranges of cells is performed using the 
> BulkDeleteEndpoint.
> In the following scenario the deletion does not appear to be working properly:
> The table 't' is created with a single version using:
> create 't', {NAME => 'v', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS=> '1', MIN_VERSIONS => '0', TTL => 
> '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY 
> =>'false', BLOCKCACHE => 'true'}
> We write a cell at row '0x00', colfam 'v', colq '', value 0x0
> We write the same cell again with value 0x1
> A scan will return a single value 0x1
> We then perform a delete using the BulkDeleteEndpoint and a Scan with a 
> DeleteType of 'VERSION'
> The reported number of deleted versions is 1 (which is coherent given the 
> table was created with MAX_VERSIONS=1)
> The same scan as the one performed before the delete returns a single value 
> 0x0.
> This seems to happen when all operations are performed against the memstore.
> A regular delete will remove the cell and a later scan won't show it.
> I'll attach a test which demonstrates the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15478) add comments to FSHLog explaining why syncRunnerIndex won't overflow

2016-03-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15478:

   Resolution: Fixed
Fix Version/s: 1.1.5
   1.4.0
   1.0.4
   1.2.1
   1.3.0
   Status: Resolved  (was: Patch Available)

> add comments to FSHLog explaining why syncRunnerIndex won't overflow
> 
>
> Key: HBASE-15478
> URL: https://issues.apache.org/jira/browse/HBASE-15478
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.0.4, 1.4.0, 1.1.5
>
> Attachments: HBASE-15478.1.patch
>
>
> A comment near the fix for HBASE-14759 will make the code easier for folks to 
> follow later down the road.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15478) add comments to FSHLog explaining why syncRunnerIndex won't overflow

2016-03-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203749#comment-15203749
 ] 

Sean Busbey commented on HBASE-15478:
-

thanks for the review!

> add comments to FSHLog explaining why syncRunnerIndex won't overflow
> 
>
> Key: HBASE-15478
> URL: https://issues.apache.org/jira/browse/HBASE-15478
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.0.4, 1.4.0, 1.1.5
>
> Attachments: HBASE-15478.1.patch
>
>
> A comment near the fix for HBASE-14759 will make the code easier for folks to 
> follow later down the road.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15484) Correct the semantic of batch and partial

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203743#comment-15203743
 ] 

Anoop Sam John commented on HBASE-15484:


Sure. I will review it tonight IST.  

> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15484-v1.patch
>
>
> Follow-up to HBASE-15325, as discussed, the meaning of setBatch and 
> setAllowPartialResults should not be the same. We should not regard setBatch as 
> setAllowPartialResults.
> And isPartial should be defined accurately.
> (Considering getBatch==MaxInt if we don't setBatch.) If 
> result.rawcells.length is less than the number of cells of the whole row, isPartial==true; otherwise isPartial == false. So if the user doesn't 
> setAllowPartialResults(true), isPartial should always be false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15491) Reuse byte buffers in AsyncRpcClient

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203742#comment-15203742
 ] 

Anoop Sam John commented on HBASE-15491:


I mean, I can see that within IPCUtil itself there is a call to putBuffer on the
BBBPool. Why?  Can we make it so that the put-back call is in one place,
i.e. where you got the buffer from the pool?

The reason I asked about the GC issue is that we were doing a PoC for reusing the pooled
BBs for reading the requests in RpcServer.  Yes, we can see that it reduced the GC
frequency, but the GC pause time is larger (I mean the young-gen GC).  So I
was asking whether you also see any such pattern after this BB reuse change.
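
A minimal sketch of the "put back in one place" pattern being asked for, using
a try/finally around BoundedByteBufferPool; the class and method names are
hypothetical and this is not the patch's actual code:
{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.io.BoundedByteBufferPool;

public class PooledBufferUsage {
  /**
   * Borrow a buffer, use it, and return it to the pool in exactly one place,
   * so callers of IPCUtil-like code never have to special-case the put-back.
   */
  static void withPooledBuffer(BoundedByteBufferPool pool) {
    ByteBuffer buffer = pool.getBuffer();
    try {
      // ... fill and consume the buffer here (e.g. encode a cell block) ...
    } finally {
      pool.putBuffer(buffer); // single put-back point, paired with getBuffer()
    }
  }
}
{code}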

> Reuse byte buffers in AsyncRpcClient
> 
>
> Key: HBASE-15491
> URL: https://issues.apache.org/jira/browse/HBASE-15491
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15491-v1.patch
>
>
> IPCUtil.buildCellBlock is used by both server and client. Server provides 
> BoundedByteBufferPool for buffers reuse, client code does not do that. This 
> results in additional memory pressure on a client side, because buffers are 
> allocated on every call to IPCUtil.buildCellBlock.
> My own local tests (with patch) show approximately 8-10% reduction in object 
> allocation rate on a client side (with HBASE-15479 as well). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15494) Close obviated PRs on github

2016-03-20 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-15494:
---

 Summary: Close obviated PRs on github
 Key: HBASE-15494
 URL: https://issues.apache.org/jira/browse/HBASE-15494
 Project: HBase
  Issue Type: Task
  Components: community
Reporter: Sean Busbey
Assignee: Sean Busbey


We have several open PRs on the github mirror that have been obviated.

Close them via an empty commit with an appropriate message.

* [PR #1|https://github.com/apache/hbase/pull/1] - is for HBASE1015, obviated 
by HBASE-14850
* [PR #3|https://github.com/apache/hbase/pull/3] - update 0.94 branch for 
Hadoop 2.5.0 is obviated by HBASE-15059
* [PR #17|https://github.com/apache/hbase/pull/17] - make TableMapReduceUtil 
methods for scan to/from string conversion public is obviated by HBASE-15223



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15494) Close obviated PRs on github

2016-03-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15494:

Description: 
We have several open PRs on the github mirror that have been obviated.

Close them via an empty commit with an appropriate message.

* [PR #1|https://github.com/apache/hbase/pull/1] - is for HBASE-1015, obviated 
by HBASE-14850
* [PR #3|https://github.com/apache/hbase/pull/3] - update 0.94 branch for 
Hadoop 2.5.0 is obviated by HBASE-15059
* [PR #17|https://github.com/apache/hbase/pull/17] - make TableMapReduceUtil 
methods for scan to/from string conversion public is obviated by HBASE-15223

  was:
We have several open PRs on the github mirror that have been obviated.

Close them via an empty commit with an appropriate message.

* [PR #1|https://github.com/apache/hbase/pull/1] - is for HBASE1015, obviated 
by HBASE-14850
* [PR #3|https://github.com/apache/hbase/pull/3] - update 0.94 branch for 
Hadoop 2.5.0 is obviated by HBASE-15059
* [PR #17|https://github.com/apache/hbase/pull/17] - make TableMapReduceUtil 
methods for scan to/from string conversion public is obviated by HBASE-15223


> Close obviated PRs on github
> 
>
> Key: HBASE-15494
> URL: https://issues.apache.org/jira/browse/HBASE-15494
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> We have several open PRs on the github mirror that have been obviated.
> Close them via an empty commit with an appropriate message.
> * [PR #1|https://github.com/apache/hbase/pull/1] - is for HBASE-1015, 
> obviated by HBASE-14850
> * [PR #3|https://github.com/apache/hbase/pull/3] - update 0.94 branch for 
> Hadoop 2.5.0 is obviated by HBASE-15059
> * [PR #17|https://github.com/apache/hbase/pull/17] - make TableMapReduceUtil 
> methods for scan to/from string conversion public is obviated by HBASE-15223



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15484) Correct the semantic of batch and partial

2016-03-20 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203736#comment-15203736
 ] 

Phil Yang commented on HBASE-15484:
---

Hi [~anoop.hbase],
Please help me review this patch, thanks.

> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15484-v1.patch
>
>
> Follow-up to HBASE-15325, as discussed, the meaning of setBatch and 
> setAllowPartialResults should not be the same. We should not regard setBatch as 
> setAllowPartialResults.
> And isPartial should be defined accurately.
> (Considering getBatch==MaxInt if we don't setBatch.) If 
> result.rawcells.length is less than the number of cells of the whole row, isPartial==true; otherwise isPartial == false. So if the user doesn't 
> setAllowPartialResults(true), isPartial should always be false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15493) Default ArrayList size may not be optimal for Mutation

2016-03-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203727#comment-15203727
 ] 

ramkrishna.s.vasudevan commented on HBASE-15493:


I have been trying to make this more accurate, at least on the server side. In my
case, with >10 columns, the ArrayList expansion was creating a lot of
garbage, but I am not sure 2 is really going to help there. I know you are trying
to take a middle-ground approach here, but still, getting an exact
estimate is much more complex in these cases.
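
For context on the resize garbage, a small illustrative sketch. Assumptions:
OpenJDK's ArrayList grows by roughly 1.5x per reallocation, and the 12-cell
count below is only an example, not a measurement from this issue:
{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;

public class CellListSizing {
  // Capacity growth in OpenJDK's ArrayList is
  // newCapacity = oldCapacity + (oldCapacity >> 1), i.e. roughly 1.5x.
  static List<Cell> presized() {
    // Starting at 2 and adding 12 cells reallocates the backing array
    // 5 times (2 -> 3 -> 4 -> 6 -> 9 -> 13).
    return new ArrayList<Cell>(2);
  }

  static List<Cell> defaultSized() {
    // The default capacity of 10 (allocated lazily on the first add in
    // Java 8) reallocates only once for 12 cells (10 -> 15), but costs
    // up to 80 bytes of references per column family even for 1-cell
    // mutations, which is what this issue is about.
    return new ArrayList<Cell>();
  }
}
{code}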

> Default ArrayList size may not be optimal for Mutation
> --
>
> Key: HBASE-15493
> URL: https://issues.apache.org/jira/browse/HBASE-15493
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, regionserver
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15493-v1.patch
>
>
> {code}
>   List<Cell> getCellList(byte[] family) {
>     List<Cell> list = this.familyMap.get(family);
>     if (list == null) {
>       list = new ArrayList<Cell>();
>     }
>     return list;
>   }
> {code}
> Creates a list of size 10; this is up to 80 bytes per column family in a mutation 
> object. 
> Suggested:
> {code}
>   List<Cell> getCellList(byte[] family) {
>     List<Cell> list = this.familyMap.get(family);
>     if (list == null) {
>       list = new ArrayList<Cell>(CELL_LIST_INITIAL_CAPACITY);
>     }
>     return list;
>   }
> {code}
> CELL_LIST_INITIAL_CAPACITY = 2 in the patch; this is debatable. For mutations 
> where every CF has 1 cell, this gives a decent reduction in the memory allocation 
> rate in both client and server during a write workload: ~2%. Not a big number, 
> but as I said already, memory optimization will consist of many small steps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15493) Default ArrayList size may not be optimal for Mutation

2016-03-20 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15493:
--
Attachment: HBASE-15493-v1.patch

Patch v1 also includes optimizations in the Delete and Append classes.

> Default ArrayList size may not be optimal for Mutation
> --
>
> Key: HBASE-15493
> URL: https://issues.apache.org/jira/browse/HBASE-15493
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, regionserver
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15493-v1.patch
>
>
> {code}
>   List<Cell> getCellList(byte[] family) {
>     List<Cell> list = this.familyMap.get(family);
>     if (list == null) {
>       list = new ArrayList<Cell>();
>     }
>     return list;
>   }
> {code}
> Creates a list of size 10; this is up to 80 bytes per column family in a mutation 
> object. 
> Suggested:
> {code}
>   List<Cell> getCellList(byte[] family) {
>     List<Cell> list = this.familyMap.get(family);
>     if (list == null) {
>       list = new ArrayList<Cell>(CELL_LIST_INITIAL_CAPACITY);
>     }
>     return list;
>   }
> {code}
> CELL_LIST_INITIAL_CAPACITY = 2 in the patch; this is debatable. For mutations 
> where every CF has 1 cell, this gives a decent reduction in the memory allocation 
> rate in both client and server during a write workload: ~2%. Not a big number, 
> but as I said already, memory optimization will consist of many small steps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-1015) HBase Native Client Library

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203724#comment-15203724
 ] 

Hadoop QA commented on HBASE-1015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-1015 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12656601/HBASE-1015-HBase-native-client.patch
 |
| JIRA Issue | HBASE-1015 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1098/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> HBase Native Client Library
> ---
>
> Key: HBASE-1015
> URL: https://issues.apache.org/jira/browse/HBASE-1015
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 0.20.6, 1.0.0
>Reporter: Andrew Purtell
>Assignee: Aditya Kishore
>Priority: Minor
> Attachments: HBASE-1015-HBase-native-client.patch, 
> HBASE-1015-HBase-native-client.patch
>
>
> If via HBASE-794 first class support for talking via Thrift directly to 
> HMaster and HRS is available, then pure C and C++ client libraries are 
> possible. 
> The C client library would wrap a Thrift core. 
> The C++ client library can provide a class hierarchy quite close to 
> o.a.h.h.client and, ideally, identical semantics. It  should be just a 
> wrapper around the C API, for economy.
> Internally to my employer there is a lot of resistance to HBase because many 
> dev teams have a strong C/C++ bias. The real issue however is really client 
> side integration, not a fundamental objection. (What runs server side and how 
> it is managed is a secondary consideration.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15493) Default ArrayList size may not be optimal for Mutation

2016-03-20 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-15493:
-

 Summary: Default ArrayList size may not be optimal for Mutation
 Key: HBASE-15493
 URL: https://issues.apache.org/jira/browse/HBASE-15493
 Project: HBase
  Issue Type: Improvement
  Components: Client, regionserver
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
 Fix For: 2.0.0


{code}
  List<Cell> getCellList(byte[] family) {
    List<Cell> list = this.familyMap.get(family);
    if (list == null) {
      list = new ArrayList<Cell>();
    }
    return list;
  }
{code}

Creates a list of size 10; this is up to 80 bytes per column family in a mutation 
object. 

Suggested:
{code}
  List<Cell> getCellList(byte[] family) {
    List<Cell> list = this.familyMap.get(family);
    if (list == null) {
      list = new ArrayList<Cell>(CELL_LIST_INITIAL_CAPACITY);
    }
    return list;
  }
{code}

CELL_LIST_INITIAL_CAPACITY = 2 in the patch; this is debatable. For mutations 
where every CF has 1 cell, this gives a decent reduction in the memory allocation 
rate in both client and server during a write workload: ~2%. Not a big number, 
but as I said already, memory optimization will consist of many small steps.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-1015) HBase Native Client Library

2016-03-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203717#comment-15203717
 ] 

Sean Busbey commented on HBASE-1015:


last call for concerns with me closing this issue out as obviated by the work 
in HBASE-14850

> HBase Native Client Library
> ---
>
> Key: HBASE-1015
> URL: https://issues.apache.org/jira/browse/HBASE-1015
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 0.20.6, 1.0.0
>Reporter: Andrew Purtell
>Assignee: Aditya Kishore
>Priority: Minor
> Attachments: HBASE-1015-HBase-native-client.patch, 
> HBASE-1015-HBase-native-client.patch
>
>
> If via HBASE-794 first class support for talking via Thrift directly to 
> HMaster and HRS is available, then pure C and C++ client libraries are 
> possible. 
> The C client library would wrap a Thrift core. 
> The C++ client library can provide a class hierarchy quite close to 
> o.a.h.h.client and, ideally, identical semantics. It  should be just a 
> wrapper around the C API, for economy.
> Internally to my employer there is a lot of resistance to HBase because many 
> dev teams have a strong C/C++ bias. The real issue however is really client 
> side integration, not a fundamental objection. (What runs server side and how 
> it is managed is a secondary consideration.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-branch-1.3-v1.patch

branch-1.2/1.3 have not had the tests added back yet, so this adds the whole
TestPartialResultsFromClientSide just to resolve this issue first.
This patch can be applied to 1.2 and 1.3.

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-branch-1-v1.patch, 
> HBASE-15325-branch-1.1-v1.patch, HBASE-15325-branch-1.3-v1.patch, 
> HBASE-15325-v1.txt, HBASE-15325-v10.patch, HBASE-15325-v11.patch, 
> HBASE-15325-v2.txt, HBASE-15325-v3.txt, HBASE-15325-v5.txt, 
> HBASE-15325-v6.1.txt, HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, 
> HBASE-15325-v6.4.txt, HBASE-15325-v6.5.txt, HBASE-15325-v6.txt, 
> HBASE-15325-v7.patch, HBASE-15325-v8.patch, HBASE-15325-v9.patch
>
>
> HBASE-11544 allow scan rpc return partial of a row to reduce memory usage for 
> one rpc request. And client can setAllowPartial or setBatch to get several 
> cells in a row instead of the whole row.
> However, the status of the scanner is saved on server and we need this to get 
> the next part if there is a partial result before. If we move the region to 
> another RS, client will get a NotServingRegionException and open a new 
> scanner to the new RS which will be regarded as a new scan from the end of 
> this row. So the rest cells of the row of last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15491) Reuse byte buffers in AsyncRpcClient

2016-03-20 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203706#comment-15203706
 ] 

Vladimir Rodionov commented on HBASE-15491:
---

{quote}
Why do we have a putBuffer() call in IPCUtil itself? Can we avoid the special
casing? I mean, handle the put-back to the pool in one place.
So this will reduce a lot of garbage and hence the frequency of young GC.
{quote}

Not sure I am following you here, [~anoop.hbase].

{quote}
Have you observed the GC after this? Will it increase the average young-GC pause time?
{quote}

I observed a decrease in the memory allocation rate of 8-10% on the client side (after
HBASE-15479). There is no silver bullet here in memory optimization;
every new improvement will get us a few percent, 5-7% at most.



> Reuse byte buffers in AsyncRpcClient
> 
>
> Key: HBASE-15491
> URL: https://issues.apache.org/jira/browse/HBASE-15491
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15491-v1.patch
>
>
> IPCUtil.buildCellBlock is used by both server and client. Server provides 
> BoundedByteBufferPool for buffers reuse, client code does not do that. This 
> results in additional memory pressure on a client side, because buffers are 
> allocated on every call to IPCUtil.buildCellBlock.
> My own local tests (with patch) show approximately 8-10% reduction in object 
> allocation rate on a client side (with HBASE-15479 as well). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14442) MultiTableInputFormatBase.getSplits dosenot build split for a scan whose startRow=stopRow=(startRow of a region)

2016-03-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203700#comment-15203700
 ] 

Sean Busbey commented on HBASE-14442:
-

Is this still active? If not, I'd like to close it as stale.

> MultiTableInputFormatBase.getSplits dosenot build split for a scan whose 
> startRow=stopRow=(startRow of a region)
> 
>
> Key: HBASE-14442
> URL: https://issues.apache.org/jira/browse/HBASE-14442
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.1.2
>Reporter: Nathan
>Assignee: Nathan
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> I created a Scan whose startRow and stopRow are the same as a region's 
> startRow, then I found no map was built. 
> The following is the source code of this condition:
> {code}
> (startRow.length == 0 || keys.getSecond()[i].length == 0 ||
>   Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) &&
> (stopRow.length == 0 || Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0)
> {code}
> I think an "=" should be added.
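
A hedged restatement of that condition as a standalone check, with the "="
the reporter proposes placed on the stopRow comparison (the one that evaluates
to 0 when startRow == stopRow == region start); this is an illustration, not
the actual patch:
{code}
import org.apache.hadoop.hbase.util.Bytes;

public class SplitInclusionCheck {
  /**
   * Restates the inclusion test from MultiTableInputFormatBase.getSplits;
   * regionStart/regionEnd stand for keys.getFirst()[i]/keys.getSecond()[i].
   */
  static boolean includesRegion(byte[] startRow, byte[] stopRow,
      byte[] regionStart, byte[] regionEnd) {
    return (startRow.length == 0 || regionEnd.length == 0
            || Bytes.compareTo(startRow, regionEnd) < 0)
        && (stopRow.length == 0
            || Bytes.compareTo(stopRow, regionStart) >= 0); // '=' added here
  }
}
{code}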



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-branch-1-v1.patch

reupload branch-1 to trigger QA

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-branch-1-v1.patch, 
> HBASE-15325-branch-1.1-v1.patch, HBASE-15325-v1.txt, HBASE-15325-v10.patch, 
> HBASE-15325-v11.patch, HBASE-15325-v2.txt, HBASE-15325-v3.txt, 
> HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, HBASE-15325-v6.2.txt, 
> HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, HBASE-15325-v6.5.txt, 
> HBASE-15325-v6.txt, HBASE-15325-v7.patch, HBASE-15325-v8.patch, 
> HBASE-15325-v9.patch
>
>
> HBASE-11544 allow scan rpc return partial of a row to reduce memory usage for 
> one rpc request. And client can setAllowPartial or setBatch to get several 
> cells in a row instead of the whole row.
> However, the status of the scanner is saved on server and we need this to get 
> the next part if there is a partial result before. If we move the region to 
> another RS, client will get a NotServingRegionException and open a new 
> scanner to the new RS which will be regarded as a new scan from the end of 
> this row. So the rest cells of the row of last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: (was: HBASE-15325-branch-1-v1.patch)

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-branch-1-v1.patch, 
> HBASE-15325-branch-1.1-v1.patch, HBASE-15325-v1.txt, HBASE-15325-v10.patch, 
> HBASE-15325-v11.patch, HBASE-15325-v2.txt, HBASE-15325-v3.txt, 
> HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, HBASE-15325-v6.2.txt, 
> HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, HBASE-15325-v6.5.txt, 
> HBASE-15325-v6.txt, HBASE-15325-v7.patch, HBASE-15325-v8.patch, 
> HBASE-15325-v9.patch
>
>
> HBASE-11544 allow scan rpc return partial of a row to reduce memory usage for 
> one rpc request. And client can setAllowPartial or setBatch to get several 
> cells in a row instead of the whole row.
> However, the status of the scanner is saved on server and we need this to get 
> the next part if there is a partial result before. If we move the region to 
> another RS, client will get a NotServingRegionException and open a new 
> scanner to the new RS which will be regarded as a new scan from the end of 
> this row. So the rest cells of the row of last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-branch-1.1-v1.patch

patch for branch-1.1

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-branch-1-v1.patch, 
> HBASE-15325-branch-1.1-v1.patch, HBASE-15325-v1.txt, HBASE-15325-v10.patch, 
> HBASE-15325-v11.patch, HBASE-15325-v2.txt, HBASE-15325-v3.txt, 
> HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, HBASE-15325-v6.2.txt, 
> HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, HBASE-15325-v6.5.txt, 
> HBASE-15325-v6.txt, HBASE-15325-v7.patch, HBASE-15325-v8.patch, 
> HBASE-15325-v9.patch
>
>
> HBASE-11544 allow scan rpc return partial of a row to reduce memory usage for 
> one rpc request. And client can setAllowPartial or setBatch to get several 
> cells in a row instead of the whole row.
> However, the status of the scanner is saved on server and we need this to get 
> the next part if there is a partial result before. If we move the region to 
> another RS, client will get a NotServingRegionException and open a new 
> scanner to the new RS which will be regarded as a new scan from the end of 
> this row. So the rest cells of the row of last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: (was: HBASE-15325-branch-1-v1.patch)

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-branch-1-v1.patch, 
> HBASE-15325-v1.txt, HBASE-15325-v10.patch, HBASE-15325-v11.patch, 
> HBASE-15325-v2.txt, HBASE-15325-v3.txt, HBASE-15325-v5.txt, 
> HBASE-15325-v6.1.txt, HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, 
> HBASE-15325-v6.4.txt, HBASE-15325-v6.5.txt, HBASE-15325-v6.txt, 
> HBASE-15325-v7.patch, HBASE-15325-v8.patch, HBASE-15325-v9.patch
>
>
> HBASE-11544 allow scan rpc return partial of a row to reduce memory usage for 
> one rpc request. And client can setAllowPartial or setBatch to get several 
> cells in a row instead of the whole row.
> However, the status of the scanner is saved on server and we need this to get 
> the next part if there is a partial result before. If we move the region to 
> another RS, client will get a NotServingRegionException and open a new 
> scanner to the new RS which will be regarded as a new scan from the end of 
> this row. So the rest cells of the row of last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-branch-1-v1.patch

TestPartialResultsFromClientSide in branch-1 is back; uploading the patch for branch-1.

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-branch-1-v1.patch, 
> HBASE-15325-branch-1-v1.patch, HBASE-15325-v1.txt, HBASE-15325-v10.patch, 
> HBASE-15325-v11.patch, HBASE-15325-v2.txt, HBASE-15325-v3.txt, 
> HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, HBASE-15325-v6.2.txt, 
> HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, HBASE-15325-v6.5.txt, 
> HBASE-15325-v6.txt, HBASE-15325-v7.patch, HBASE-15325-v8.patch, 
> HBASE-15325-v9.patch
>
>
> HBASE-11544 allow scan rpc return partial of a row to reduce memory usage for 
> one rpc request. And client can setAllowPartial or setBatch to get several 
> cells in a row instead of the whole row.
> However, the status of the scanner is saved on server and we need this to get 
> the next part if there is a partial result before. If we move the region to 
> another RS, client will get a NotServingRegionException and open a new 
> scanner to the new RS which will be regarded as a new scan from the end of 
> this row. So the rest cells of the row of last result will be missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15491) Reuse byte buffers in AsyncRpcClient

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203658#comment-15203658
 ] 

Anoop Sam John commented on HBASE-15491:


Why do we have a putBuffer() call in IPCUtil itself?  Can we avoid the special
casing?  I mean, handle the put-back to the pool in one place.
So this will reduce a lot of garbage and hence the frequency of young GC.   Have you
observed the GC after this?  Will it increase the average young-GC pause time?

> Reuse byte buffers in AsyncRpcClient
> 
>
> Key: HBASE-15491
> URL: https://issues.apache.org/jira/browse/HBASE-15491
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15491-v1.patch
>
>
> IPCUtil.buildCellBlock is used by both server and client. Server provides 
> BoundedByteBufferPool for buffers reuse, client code does not do that. This 
> results in additional memory pressure on a client side, because buffers are 
> allocated on every call to IPCUtil.buildCellBlock.
> My own local tests (with patch) show approximately 8-10% reduction in object 
> allocation rate on a client side (with HBASE-15479 as well). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15477) Do not save 'next block header' when we cache hfileblocks

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203655#comment-15203655
 ] 

Hadoop QA commented on HBASE-15477:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 34s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 46s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 43s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 17s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-external-blockcache in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-external-blockcache in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
6s {color} | {color:green} Patch 

[jira] [Commented] (HBASE-15398) Cells loss or disorder when using family essential filter and partial scanning protocol

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203654#comment-15203654
 ] 

Anoop Sam John commented on HBASE-15398:


Checking the code:
{code}
if (hasFilterRow) {
  if (LOG.isTraceEnabled()) {
    LOG.trace("filter#hasFilterRow is true which prevents partial results from being"
        + " formed. Changing scope of limits that may create partials");
  }
  scannerContext.setSizeLimitScope(LimitScope.BETWEEN_ROWS);
  scannerContext.setTimeLimitScope(LimitScope.BETWEEN_ROWS);
}
{code}
So because of this limit scope setting, we say we don't allow a return in between a row.
Yes, that makes sense.

But is that only for cells from the essential families?  I don't mean in the code; is this
what we wanted?

Because I can see that we call the filter's filterRowCells() and filterRow() once
we have read the cells from all essential families, i.e. the storeHeap.
Only after this do we go ahead with the joinedHeap.

When we consider SCVF, yes, it is fine.  For deciding whether a
row is to be included or not, the cells from the essential families are enough.

That means that while we read cells from the essential families we don't allow any
limit to be reached, i.e. size or time or batch, but after that it is allowed, at least
as per the other code parts.  But I cannot see where we reset the time limit scope.

Now we need to decide what is correct.  I think it makes sense to call the
filter methods after the read from the storeHeap.
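
For concreteness, a minimal sketch of a scan that exercises this
storeHeap/joinedHeap split together with partial results.
SingleColumnValueFilter with setFilterIfMissing(true) reports only the filtered
family as essential; the family, qualifier, and value names here are made up:
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class EssentialFamilyScan {
  static Scan buildScan() {
    // Only 'meta' is essential for the filter decision, so 'data' is served
    // from the joinedHeap after filterRow() has accepted the row.
    SingleColumnValueFilter filter = new SingleColumnValueFilter(
        Bytes.toBytes("meta"), Bytes.toBytes("flag"),
        CompareOp.EQUAL, Bytes.toBytes("1"));
    filter.setFilterIfMissing(true);

    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("meta"));
    scan.addFamily(Bytes.toBytes("data"));
    scan.setFilter(filter);
    scan.setAllowPartialResults(true); // the combination this issue is about
    scan.setMaxResultSize(2);          // tiny size limit to force partials
    return scan;
  }
}
{code}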

> Cells loss or disorder when using family essential filter and partial 
> scanning protocol
> ---
>
> Key: HBASE-15398
> URL: https://issues.apache.org/jira/browse/HBASE-15398
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Attachments: 15398-test.txt, HBASE-15398-v2.patch, 
> HBASE-15398-v3.patch, HBASE-15398-v4.patch, HBASE-15398-v5.patch, 
> HBASE-15398.v1.txt
>
>
> In RegionScannerImpl, we have two heaps, storeHeap and joinedHeap. If we have 
> a filter and it doesn't apply to all cf, the stores whose families needn't be 
>  filtered will be in joinedHeap. We scan storeHeap first, then joinedHeap, 
> and merge the results and sort and return to client. We need sort because the 
> order of Cell is rowkey/cf/cq/ts and a smaller cf may be in the joinedHeap.
> However, after HBASE-11544 we may transfer partial results when we get 
> SIZE_LIMIT_REACHED_MID_ROW or other similar states. We may return a larger cf 
> first because it is in storeHeap and then a smaller cf because it is in 
> joinedHeap. Server won't hold all cells in a row and client doesn't have a 
> sorting logic. The order of cf in Result for user is wrong.
> And a more critical bug is, if we get a LIMIT_REACHED_MID_ROW on the last 
> cell of a row in storeHeap, we will break scanning in RegionScannerImpl and 
> in populateResult we will change the state to SIZE_LIMIT_REACHED because next 
> peeked cell is next row. But this is only the last cell of one and we have 
> two... And SIZE_LIMIT_REACHED means this Result is not partial (by 
> ScannerContext.partialResultFormed), client will see it and merge them and 
> return to user with losing data of joinedHeap. On next scan we will read next 
> row of storeHeap and joinedHeap is forgotten and never be read...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15349) Update surefire version to 2.19.1

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203571#comment-15203571
 ] 

Hudson commented on HBASE-15349:


FAILURE: Integrated in HBase-Trunk_matrix #792 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/792/])
HBASE-15349 Update surefire version to 2.19.1; Trying a REVERT to see if 
(stack: rev 64204b96c18137de1e71dab04ad2df7f0f7db1c0)
* pom.xml


> Update surefire version to 2.19.1
> -
>
> Key: HBASE-15349
> URL: https://issues.apache.org/jira/browse/HBASE-15349
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15349.patch
>
>
> So that new properties like surefire.excludesFile and includesFile can be 
> used to easily exclude/include flaky tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15300) Upgrade to zookeeper 3.4.8

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203547#comment-15203547
 ] 

Hadoop QA commented on HBASE-15300:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 36s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 22s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 12m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
27s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 12m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 12s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 11s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 109m 19s 
{color} | {color:green} root in the patch passed. {color} |
| 

[jira] [Updated] (HBASE-15477) Do not save 'next block header' when we cache hfileblocks

2016-03-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15477:
--
Attachment: 15477v4.patch

Fix tests

> Do not save 'next block header' when we cache hfileblocks
> -
>
> Key: HBASE-15477
> URL: https://issues.apache.org/jira/browse/HBASE-15477
> Project: HBase
>  Issue Type: Sub-task
>  Components: BlockCache, Performance
>Reporter: stack
>Assignee: stack
> Attachments: 15366v4.patch, 15477.patch, 15477v2.patch, 
> 15477v3.patch, 15477v3.patch, 15477v4.patch
>
>
> When we read from HDFS, we overread to pick up the next block's header.
> Doing this saves a seek as we move through the hfile; we save having to
> do an explicit seek just to read the block header every time we need to
> read the body.  We used to read in the next header as part of the
> current block's buffer. This buffer was then what got persisted to the
> blockcache, so we were over-persisting, writing out our block plus the
> next block's header (over-persisting 33 bytes). Parsing of HFileBlock is
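For illustration, here is a minimal, self-contained sketch of the idea (this is not the actual HFileBlock code; the class and method names and the plain ByteBuffer are stand-ins, with the 33-byte tail taken from the description above): slice the over-read tail off the buffer before handing the block to the cache.

{code}
import java.nio.ByteBuffer;

public class BlockCacheTrimSketch {
  // Size of the trailing next-block header that gets over-read (33 bytes per the description).
  static final int NEXT_BLOCK_HEADER_SIZE = 33;

  // Return a view of the read buffer that excludes the trailing next-block header,
  // so only the block itself is handed to the block cache.
  static ByteBuffer trimNextBlockHeader(ByteBuffer readBuf, int blockSize) {
    ByteBuffer dup = readBuf.duplicate();
    dup.position(0);
    dup.limit(blockSize);               // drop the extra tail
    return dup.slice();
  }

  public static void main(String[] args) {
    int blockSize = 64;
    ByteBuffer overRead = ByteBuffer.allocate(blockSize + NEXT_BLOCK_HEADER_SIZE);
    ByteBuffer cached = trimNextBlockHeader(overRead, blockSize);
    System.out.println("cached bytes: " + cached.remaining()); // 64, not 97
  }
}
{code}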



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15477) Do not save 'next block header' when we cache hfileblocks

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203502#comment-15203502
 ] 

Hadoop QA commented on HBASE-15477:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 14s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 46s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 12s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 15s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-external-blockcache in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-external-blockcache in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
51s {color} | 

[jira] [Commented] (HBASE-15490) Remove duplicated CompactionThroughputControllerFactory in branch-1

2016-03-20 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203487#comment-15203487
 ] 

Gary Helmling commented on HBASE-15490:
---

Belated +1 from me as well.  Thanks for the fix.

Was this pushed to branch-1.3 as well?  I see the message from Hudson that it 
was integrated in HBase-1.3-IT, but I don't see the corresponding commit in 
branch-1.3.  If not, please push there as well.

cc: [~mantonov]

> Remove duplicated CompactionThroughputControllerFactory in branch-1
> ---
>
> Key: HBASE-15490
> URL: https://issues.apache.org/jira/browse/HBASE-15490
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0
>
> Attachments: HBASE-15490.branch-1.patch, HBASE-15490.patch
>
>
> Currently there're two {{CompactionThroughputControllerFactory}} in our 
> branch-1 code base (one in {{o.a.h.h.regionserver.compactions}} package, the 
> other in {{o.a.h.h.regionserver.throttle}}) and both are in use.
> This is a regression of HBASE-14969 and only exists in branch-1. We should 
> remove the one in {{o.a.h.h.regionserver.compactions}}, and change the 
> default compaction throughput controller back to 
> {{NoLimitThroughputController}} to stay compatible with previous branch-1 
> versions.
> Thanks [~ghelmling] for pointing out the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15492) Memory usage optimizations

2016-03-20 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-15492:
-

 Summary: Memory usage optimizations
 Key: HBASE-15492
 URL: https://issues.apache.org/jira/browse/HBASE-15492
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov
 Fix For: 2.0.0


This is master ticket for all memory optimization tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15477) Do not save 'next block header' when we cache hfileblocks

2016-03-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15477:
--
Attachment: 15477v3.patch

Retry after setting back the version of surefire, thinking it was responsible for all 
the timeouts. See HBASE-15349.

> Do not save 'next block header' when we cache hfileblocks
> -
>
> Key: HBASE-15477
> URL: https://issues.apache.org/jira/browse/HBASE-15477
> Project: HBase
>  Issue Type: Sub-task
>  Components: BlockCache, Performance
>Reporter: stack
>Assignee: stack
> Attachments: 15366v4.patch, 15477.patch, 15477v2.patch, 
> 15477v3.patch, 15477v3.patch
>
>
> When we read from HDFS, we overread to pick up the next block's header.
> Doing this saves a seek as we move through the hfile; we save having to
> do an explicit seek just to read the block header every time we need to
> read the body.  We used to read in the next header as part of the
> current block's buffer. This buffer was then what got persisted to the
> blockcache, so we were over-persisting, writing out our block plus the
> next block's header (over-persisting 33 bytes). Parsing of HFileBlock is
> complicated by this extra tail. Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15349) Update surefire version to 2.19.1

2016-03-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203415#comment-15203415
 ] 

stack commented on HBASE-15349:
---

Pushed to master. Let's see if this is related at all [~ashish singhi]

> Update surefire version to 2.19.1
> -
>
> Key: HBASE-15349
> URL: https://issues.apache.org/jira/browse/HBASE-15349
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15349.patch
>
>
> So that new properties like surefire.excludesFile and includesFile can be 
> used to easily exclude/include flaky tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-15349) Update surefire version to 2.19.1

2016-03-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-15349:
---

Reopening. Let me revert to see if this has anything to do w/ the timeouts we 
are seeing frequently on hadoopqa.

> Update surefire version to 2.19.1
> -
>
> Key: HBASE-15349
> URL: https://issues.apache.org/jira/browse/HBASE-15349
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15349.patch
>
>
> So that new properties like surefire.excludesFile and includesFile can be 
> used to easily exclude/include flaky tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203392#comment-15203392
 ] 

Hudson commented on HBASE-15479:


ABORTED: Integrated in HBase-0.98-matrix #317 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/317/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev f39c530577b9ec81c14cd97654bdf9144dae3de3)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> A quick journey with JMC in profile mode revealed a very interesting and 
> unexpected heap polluter on the client side. A patch will follow shortly. 
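For readers wondering how autoboxing turns into heap garbage in a hot loop, here is a generic, self-contained illustration (not the AsyncProcess patch itself):

{code}
public class AutoboxingGarbageDemo {
  public static void main(String[] args) {
    long sum = 0;
    // Declaring the loop variable as boxed Long forces a box (and unbox) on every
    // iteration outside the small-value cache, producing heap garbage in a hot path.
    for (Long i = 0L; i < 1_000_000L; i++) {
      sum += i;
    }
    System.out.println("boxed loop sum:     " + sum);

    // The primitive version does the same work with zero allocations.
    sum = 0;
    for (long i = 0; i < 1_000_000L; i++) {
      sum += i;
    }
    System.out.println("primitive loop sum: " + sum);
  }
}
{code}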



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203369#comment-15203369
 ] 

Hudson commented on HBASE-15479:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1190 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1190/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev f39c530577b9ec81c14cd97654bdf9144dae3de3)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> A quick journey with JMC in profile mode revealed a very interesting and 
> unexpected heap polluter on the client side. A patch will follow shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15488) Add ACL for setting split merge switch

2016-03-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203358#comment-15203358
 ] 

Anoop Sam John commented on HBASE-15488:


Why not a post hook also, as we have with other APIs?
Does calling bypass on the pre hook have no impact? I think it is better to allow it, to be 
consistent with other cases.
Do we need 2 sets of hooks (for split and merge) rather than calling it 
preSwitchOrMerge and passing the type? Just asking. Or did you want it to be the same 
way as the API? I don't know why we did not make 2 APIs. Any reasons?

> Add ACL for setting split merge switch
> --
>
> Key: HBASE-15488
> URL: https://issues.apache.org/jira/browse/HBASE-15488
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15488-branch-1.v1.patch, HBASE-15488.v1.patch
>
>
> Currently there is no access control for the split/merge switch setter in 
> MasterRpcServices.
> This JIRA adds the necessary coprocessor hook along with enforcing a permission 
> check in AccessController through the new hook.
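As a rough illustration of the intent only (the interface and method names below are placeholders, not the actual MasterObserver/AccessController API added by the patch), a pre-hook permission check could look like this:

{code}
// Hypothetical, simplified sketch; all names here are stand-ins.
interface User {
  String getName();
  boolean isAdmin();
}

class AccessDeniedException extends RuntimeException {
  AccessDeniedException(String msg) { super(msg); }
}

class SplitOrMergeSwitchAclSketch {
  // Pre-hook invoked before the split/merge switch is toggled.
  void preSetSplitOrMergeEnabled(User caller, boolean enabled) {
    if (!caller.isAdmin()) {
      throw new AccessDeniedException(
          "User " + caller.getName() + " lacks ADMIN permission for setSplitOrMergeEnabled");
    }
    // allowed: the RPC service proceeds to flip the switch
  }

  public static void main(String[] args) {
    User admin = new User() {
      public String getName() { return "admin"; }
      public boolean isAdmin() { return true; }
    };
    new SplitOrMergeSwitchAclSketch().preSetSplitOrMergeEnabled(admin, false);
    System.out.println("switch toggle allowed for " + admin.getName());
  }
}
{code}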



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14920) Compacting Memstore

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203346#comment-15203346
 ] 

Hadoop QA commented on HBASE-14920:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 58s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 47s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 6s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 4 new + 8 
unchanged - 4 fixed = 12 total (was 12) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 6s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 4 new + 8 
unchanged - 4 fixed = 12 total (was 12) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s 
{color} | {color:red} hbase-common: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 59s 
{color} | {color:red} hbase-common: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 2s 
{color} | {color:red} hbase-client: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 0s 
{color} | {color:red} hbase-client: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 2s 
{color} | {color:red} hbase-server: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s 
{color} | {color:red} hbase-server: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s 
{color} | {color:red} hbase-shell: patch generated 3 new + 0 unchanged - 3 
fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 2s 

[jira] [Commented] (HBASE-14703) HTable.mutateRow does not collect stats

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201349#comment-15201349
 ] 

Hadoop QA commented on HBASE-14703:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
16s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
26s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 17 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 97m 26s 
{color} | {color:green} hbase-server in the patch passed. {color} |

[jira] [Commented] (HBASE-15360) Fix flaky TestSimpleRpcScheduler

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198131#comment-15198131
 ] 

Hudson commented on HBASE-15360:


FAILURE: Integrated in HBase-1.3 #604 (See 
[https://builds.apache.org/job/HBase-1.3/604/])
HBASE-15360 Fix flaky TestSimpleRpcScheduler (Duo Zhang) (stack: rev 
4b9495e807e6789a689a3c192603e7c3f0a23f1d)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestSimpleRpcScheduler.java


> Fix flaky TestSimpleRpcScheduler
> 
>
> Key: HBASE-15360
> URL: https://issues.apache.org/jira/browse/HBASE-15360
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15360.patch
>
>
> There were several flaky tests added there as part of HBASE-15306 and likely 
> HBASE-15136.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in table descriptor

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200696#comment-15200696
 ] 

Hudson commented on HBASE-15456:


SUCCESS: Integrated in HBase-1.3-IT #560 (See 
[https://builds.apache.org/job/HBase-1.3-IT/560/])
HBASE-15456 CreateTableProcedure/ModifyTableProcedure needs to fail when 
(tedyu: rev e4409c2f8c82f1b061f4dce2eb342c61999bf2e3)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestOpenRegionHandler.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableProcedure.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CreateTableProcedure.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestOpenedRegionHandler.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestCloseRegionHandler.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestModifyTableProcedure.java


> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in table descriptor
> ---
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15456-001_branch-1.patch, 
> HBASE-15456-branch-1.patch, HBASE-15456-branch-1.patch, 
> HBASE-15456-v001.patch, HBASE-15456-v002.patch, HBASE-15456-v002.patch, 
> HBASE-15456-v003.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family; if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
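To make the failure mode concrete, here is a small standalone demonstration of why {{Collections.min}} throws on an empty map and how a guard avoids it (illustrative only; not the actual HBase fix):

{code}
import java.util.Collections;
import java.util.concurrent.ConcurrentHashMap;

public class EmptyMinDemo {
  public static void main(String[] args) {
    ConcurrentHashMap<String, Long> lastStoreFlushTimeMap = new ConcurrentHashMap<>();

    // With no families there are no entries, and Collections.min on an empty
    // collection throws java.util.NoSuchElementException, as in the stack trace above.
    long earliest;
    if (lastStoreFlushTimeMap.isEmpty()) {
      earliest = System.currentTimeMillis();   // fall back to "now" instead of throwing
    } else {
      earliest = Collections.min(lastStoreFlushTimeMap.values());
    }
    System.out.println("earliest flush time: " + earliest);
  }
}
{code}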



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15490) Remove duplicated CompactionThroughputControllerFactory in branch-1

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203321#comment-15203321
 ] 

Hudson commented on HBASE-15490:


SUCCESS: Integrated in HBase-1.3-IT #569 (See 
[https://builds.apache.org/job/HBase-1.3-IT/569/])
HBASE-15490 Remove duplicated CompactionThroughputControllerFactory in (liyu: 
rev 17815ded74d3789b7474969cd105e8bd31e8bbaa)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/CompactionThroughputControllerFactory.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputControllerFactory.java


> Remove duplicated CompactionThroughputControllerFactory in branch-1
> ---
>
> Key: HBASE-15490
> URL: https://issues.apache.org/jira/browse/HBASE-15490
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0
>
> Attachments: HBASE-15490.branch-1.patch, HBASE-15490.patch
>
>
> Currently there're two {{CompactionThroughputControllerFactory}} in our 
> branch-1 code base (one in {{o.a.h.h.regionserver.compactions}} package, the 
> other in {{o.a.h.h.regionserver.throttle}}) and both are in use.
> This is a regression of HBASE-14969 and only exists in branch-1. We should 
> remove the one in {{o.a.h.h.regionserver.compactions}}, and change the 
> default compaction throughput controller back to 
> {{NoLimitThroughputController}} to stay compatible with previous branch-1 
> versions.
> Thanks [~ghelmling] for pointing out the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15481) Add pre/post roll to WALObserver

2016-03-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201729#comment-15201729
 ] 

Sean Busbey commented on HBASE-15481:
-

filed YETUS-336 for the inconsistent javadoc results.

> Add pre/post roll to WALObserver
> 
>
> Key: HBASE-15481
> URL: https://issues.apache.org/jira/browse/HBASE-15481
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 1.3.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15481-v0.patch, HBASE-15481-v1.patch
>
>
> Currently the WALObserver has only a pre/post write hook. It would be useful to 
> have a pre/post roll too. 
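A minimal standalone sketch of what such hooks could look like (the interface and method names here are hypothetical, not the final WALObserver API):

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical stand-in for the observer interface; the real hook names and
// signatures may differ from what is sketched here.
interface WalRollObserver {
  void preWALRoll(Path oldWalPath, Path newWalPath);
  void postWALRoll(Path oldWalPath, Path newWalPath);
}

class LoggingWalRollObserver implements WalRollObserver {
  @Override
  public void preWALRoll(Path oldWalPath, Path newWalPath) {
    System.out.println("about to roll WAL " + oldWalPath + " -> " + newWalPath);
  }

  @Override
  public void postWALRoll(Path oldWalPath, Path newWalPath) {
    System.out.println("rolled WAL " + oldWalPath + " -> " + newWalPath);
  }

  public static void main(String[] args) {
    WalRollObserver obs = new LoggingWalRollObserver();
    Path oldWal = Paths.get("/wals/wal.1");
    Path newWal = Paths.get("/wals/wal.2");
    obs.preWALRoll(oldWal, newWal);
    // ... the WAL provider would swap writers here ...
    obs.postWALRoll(oldWal, newWal);
  }
}
{code}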



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201396#comment-15201396
 ] 

Ted Yu commented on HBASE-15479:


hbase-server tests were not run.

Can you include some trivial change in the hbase-server module so that the hbase-server 
tests are run?

> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> A quick journey with JMC in profile mode revealed a very interesting and 
> unexpected heap polluter on the client side. A patch will follow shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2

2016-03-20 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14123:
--
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14123-v1.patch, HBASE-14123-v10.patch, 
> HBASE-14123-v11.patch, HBASE-14123-v12.patch, HBASE-14123-v2.patch, 
> HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, 
> HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-03-20 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201475#comment-15201475
 ] 

Heng Chen commented on HBASE-15406:
---

If we use an ephemeral node, the lock will disappear after hbck aborts, and the user 
can change the switch state by command before hbck is rerun.
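For reference, a minimal sketch of taking such a lock with an ephemeral znode using the plain ZooKeeper API (the connect string and znode path below are placeholders for illustration only):

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralLockSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });

    // An ephemeral znode is deleted automatically when the client session ends,
    // so an aborted hbck cannot leave the switch "lock" behind.
    zk.create("/hbase/switch/lock", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    // ... hbck work happens while the session (and thus the lock) is alive ...
    zk.close();   // or process death: either way the ephemeral node goes away
  }
}
{code}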

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, test.patch, wip.patch
>
>
> This was what I did on cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15228) Add the methods to RegionObserver to trigger start/complete restoring WALs

2016-03-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15228:

Status: Patch Available  (was: Open)

> Add the methods to RegionObserver to trigger start/complete restoring WALs
> --
>
> Key: HBASE-15228
> URL: https://issues.apache.org/jira/browse/HBASE-15228
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-15228-v1.patch, HBASE-15228.patch
>
>
> In our use case, we write indexes to an index table when restoring WALs. We 
> want start/complete hooks around restoring WALs, for initial and final processing 
> when writing indexes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15490) Remove duplicated CompactionThroughputControllerFactory in branch-1

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203309#comment-15203309
 ] 

Hudson commented on HBASE-15490:


SUCCESS: Integrated in HBase-1.4 #37 (See 
[https://builds.apache.org/job/HBase-1.4/37/])
HBASE-15490 Remove duplicated CompactionThroughputControllerFactory in (liyu: 
rev 17815ded74d3789b7474969cd105e8bd31e8bbaa)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/CompactionThroughputControllerFactory.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionThroughputControllerFactory.java


> Remove duplicated CompactionThroughputControllerFactory in branch-1
> ---
>
> Key: HBASE-15490
> URL: https://issues.apache.org/jira/browse/HBASE-15490
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0
>
> Attachments: HBASE-15490.branch-1.patch, HBASE-15490.patch
>
>
> Currently there're two {{CompactionThroughputControllerFactory}} in our 
> branch-1 code base (one in {{o.a.h.h.regionserver.compactions}} package, the 
> other in {{o.a.h.h.regionserver.throttle}}) and both are in use.
> This is a regression of HBASE-14969 and only exists in branch-1. We should 
> remove the one in {{o.a.h.h.regionserver.compactions}}, and change the 
> default compaction throughput controller back to 
> {{NoLimitThroughputController}} to stay compatible with previous branch-1 
> versions.
> Thanks [~ghelmling] for pointing out the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12940) Expose listPeerConfigs and getPeerConfig to the HBase shell

2016-03-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198391#comment-15198391
 ] 

Ted Yu commented on HBASE-12940:


I was searching for individual test case names to see if they were run.
Have you noticed the following in the test output?
{code}
2016-03-16 15:22:40,098 INFO  
[RpcServer.reader=0,bindAddress=10.22.16.220,port=61976] 
ipc.RpcServer$Connection(1740): Connection from 10.22.16.220 port: 62069 with 
version  info: version: "2.0.0-SNAPSHOT" url: 
"git://TYus-MacBook-Pro.local/Users/tyu/trunk" revision: 
"3bf0945a1149e518a49d14d4cc930383a4f311da" user: "tyu" date: "Wed Mar 16 
15:21:57   PDT 2016" src_checksum: "fed529d9fec612f1276ebb401ea46a06" 
version_major: 2 version_minor: 0
2016-03-16 15:22:40,107 ERROR [main-EventThread] 
regionserver.ReplicationSourceManager(596): Error while adding a new peer
org.apache.hadoop.hbase.replication.ReplicationException: Error adding peer 
with id=1
  at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:425)
  at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.peerAdded(ReplicationPeersZKImpl.java:397)
  at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.peerListChanged(ReplicationSourceManager.java:591)
  at 
org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$PeersWatcher.nodeChildrenChanged(ReplicationTrackerZKImpl.java:187)
  at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:628)
  at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error 
starting the peer state tracker for peerId=1
  at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:493)
  at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:423)
  ... 6 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for /hbase/replication/peers/1/peer-state
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
  at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:575)
  at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:554)
  at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.createNodeIfNotExistsAndWatch(ZKUtil.java:1009)
  at 
org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.ensurePeerEnabled(ReplicationPeerZKImpl.java:238)
  at 
org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.startStateTracker(ReplicationPeerZKImpl.java:96)
{code}

> Expose listPeerConfigs and getPeerConfig to the HBase shell
> ---
>
> Key: HBASE-12940
> URL: https://issues.apache.org/jira/browse/HBASE-12940
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Kevin Risden
>Assignee: Geoffrey Jacoby
> Attachments: HBASE-12940-v1.patch, HBASE-12940.patch
>
>
> In HBASE-12867 it was found that listPeerConfigs and getPeerConfig from 
> ReplicationAdmin are not exposed to the HBase shell. This makes looking at 
> details of custom replication endpoints and testing of add_peer from 
> HBASE-12867 impossible.
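For context, a small sketch of the Java-side calls the shell commands would wrap (assuming the ReplicationAdmin methods named in the description above; peer id "1" is only an example):

{code}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class ListPeerConfigsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (ReplicationAdmin admin = new ReplicationAdmin(conf)) {
      // List every configured peer and its ReplicationPeerConfig.
      Map<String, ReplicationPeerConfig> peers = admin.listPeerConfigs();
      for (Map.Entry<String, ReplicationPeerConfig> e : peers.entrySet()) {
        System.out.println(e.getKey() + " -> " + e.getValue());
      }
      // Look up a single peer by id (peer id "1" is just an example).
      ReplicationPeerConfig one = admin.getPeerConfig("1");
      System.out.println(one);
    }
  }
}
{code}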



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in table descriptor

2016-03-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-15456.

Resolution: Fixed

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in table descriptor
> ---
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15456-001_branch-1.patch, 
> HBASE-15456-branch-1.patch, HBASE-15456-branch-1.patch, 
> HBASE-15456-branch-1_v002.patch, HBASE-15456-v001.patch, 
> HBASE-15456-v002.patch, HBASE-15456-v002.patch, HBASE-15456-v003.patch, 
> HBASE-15456-v004.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family; if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15441) Fix WAL splitting when region has moved multiple times

2016-03-20 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198389#comment-15198389
 ] 

Gary Helmling commented on HBASE-15441:
---

+1 on v5

> Fix WAL splitting when region has moved multiple times
> --
>
> Key: HBASE-15441
> URL: https://issues.apache.org/jira/browse/HBASE-15441
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15441-v1.patch, HBASE-15441-v2.patch, 
> HBASE-15441-v3.patch, HBASE-15441-v4.patch, HBASE-15441-v5.patch, 
> HBASE-15441.patch
>
>
> Currently WAL splitting is broken when a region has been opened multiple 
> times in recent minutes.
> Region open and region close write event markers to the WAL. These markers 
> should have the sequence id in them. However it is currently getting 1. That 
> means that if a region has moved multiple times in the last few minutes, then 
> multiple split log workers will try to create the recovered edits file for 
> sequence id 1. One of the workers will fail and, on failing, will delete 
> the recovered edits, causing all split WAL attempts to fail.
> We need to:
> # make sure that close gets the correct sequence id for open.
> # filter all region events from recovered edits.
> It appears that the close event with a sequence id of one is coming from 
> region warm up.
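A simplified, standalone sketch of the second point, filtering region event markers out of recovered edits (the entry type and field names are stand-ins, not the real WAL classes):

{code}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in types; the real WAL entry/edit classes are more involved.
class WalEntrySketch {
  final long sequenceId;
  final boolean regionEventMarker;   // open/close marker rather than a data edit
  WalEntrySketch(long sequenceId, boolean regionEventMarker) {
    this.sequenceId = sequenceId;
    this.regionEventMarker = regionEventMarker;
  }
}

public class FilterRegionEventsSketch {
  // Keep only real data edits when building recovered-edits files.
  static List<WalEntrySketch> filterRegionEvents(List<WalEntrySketch> entries) {
    List<WalEntrySketch> out = new ArrayList<>();
    for (WalEntrySketch e : entries) {
      if (!e.regionEventMarker) {
        out.add(e);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<WalEntrySketch> entries = new ArrayList<>();
    entries.add(new WalEntrySketch(1, true));    // spurious close marker with sequence id 1
    entries.add(new WalEntrySketch(42, false));  // real edit
    System.out.println(filterRegionEvents(entries).size());  // prints 1
  }
}
{code}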



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15295) MutateTableAccess.multiMutate() does not get high priority causing a deadlock

2016-03-20 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201698#comment-15201698
 ] 

Stephen Yuan Jiang commented on HBASE-15295:


+1. LGTM. Even though the patch is big, the logic is very straightforward and 
clear. The change looks good to me.

> MutateTableAccess.multiMutate() does not get high priority causing a deadlock
> -
>
> Key: HBASE-15295
> URL: https://issues.apache.org/jira/browse/HBASE-15295
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.5
>
> Attachments: hbase-15295_v1.patch, hbase-15295_v1.patch, 
> hbase-15295_v2.patch, hbase-15295_v3.patch, hbase-15295_v4.patch, 
> hbase-15295_v5.patch, hbase-15295_v5.patch
>
>
> We have seen this in a cluster with Phoenix secondary indexes leading to a 
> deadlock. All handlers are busy waiting on the index updates to finish:
> {code}
> "B.defaultRpcServer.handler=50,queue=0,port=16020" #91 daemon prio=5 
> os_prio=0 tid=0x7f29f64ba000 nid=0xab51 waiting on condition 
> [0x7f29a8762000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000124f1d5c8> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:275)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:194)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:179)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:134)
>   at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:457)
>   at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:406)
>   at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutate(Indexer.java:401)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$36.call(RegionCoprocessorHost.java:1006)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutate(RegionCoprocessorHost.java:1002)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3162)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> And the index region is trying to split, and is trying to do a meta update: 
> {code}
> "regionserver//10.132.70.191:16020-splits-1454693389669" #1779 
> prio=5 os_prio=0 

[jira] [Commented] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in table descriptor

2016-03-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200706#comment-15200706
 ] 

Ted Yu commented on HBASE-15456:


[~huaxiang]:
Looks like there were some thrift tests which should have been modified - even 
for master branch.

Can you run tests in modules other than hbase-server so that we get this right?

Thanks

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in table descriptor
> ---
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15456-001_branch-1.patch, 
> HBASE-15456-branch-1.patch, HBASE-15456-branch-1.patch, 
> HBASE-15456-v001.patch, HBASE-15456-v002.patch, HBASE-15456-v002.patch, 
> HBASE-15456-v003.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family; if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15047) Try spin lock for MVCC completion

2016-03-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15047:
--
Status: Open  (was: Patch Available)

> Try spin lock for MVCC completion
> -
>
> Key: HBASE-15047
> URL: https://issues.apache.org/jira/browse/HBASE-15047
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15047.patch
>
>
> Wait/notify is very expensive since it can cost a thread rescheduling. 
> There should only ever be a few threads (< num cores) running, so it should 
> be possible to spin and use compare-and-swap to update the read point.
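A minimal, self-contained sketch of that spin-and-CAS approach using {{AtomicLong}} (illustrative only, not the actual MVCC code):

{code}
import java.util.concurrent.atomic.AtomicLong;

public class SpinReadPointSketch {
  private final AtomicLong readPoint = new AtomicLong(0);

  // Advance the read point to 'target' by spinning with compare-and-swap instead
  // of wait/notify, assuming only a handful of threads (< num cores) contend here.
  void advanceReadPoint(long target) {
    while (true) {
      long current = readPoint.get();
      if (current >= target) {
        return;                                   // someone else already advanced past us
      }
      if (readPoint.compareAndSet(current, target)) {
        return;                                   // we advanced it
      }
      // CAS lost a race; loop and retry (a brief busy spin, no scheduler involvement)
    }
  }

  public static void main(String[] args) {
    SpinReadPointSketch mvcc = new SpinReadPointSketch();
    mvcc.advanceReadPoint(5);
    System.out.println(mvcc.readPoint.get());     // prints 5
  }
}
{code}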



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14983:
--
Attachment: HBASE-14983-v7.patch

Retry

> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14983-v1.patch, HBASE-14983-v2.patch, 
> HBASE-14983-v3.patch, HBASE-14983-v4.patch, HBASE-14983-v5.patch, 
> HBASE-14983-v6.patch, HBASE-14983-v7.patch, HBASE-14983.patch, Screen Shot 
> 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference.
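
To make the motivation concrete, here is a hedged sketch (not the attached 
patch; the BlockType enum and class below are illustrative stand-ins) of 
keeping separate hit/miss counters per block type, so that a poor ROOT_INDEX 
hit ratio stays visible even when the overall ratio looks healthy.

{code}
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

/** Illustrative sketch: per-block-type hit/miss counters and ratios. */
public class BlockTypeMetrics {
  /** Simplified stand-in for the real block types. */
  public enum BlockType { DATA, LEAF_INDEX, ROOT_INDEX, BLOOM_CHUNK }

  private final Map<BlockType, LongAdder> hits = new EnumMap<>(BlockType.class);
  private final Map<BlockType, LongAdder> misses = new EnumMap<>(BlockType.class);

  public BlockTypeMetrics() {
    for (BlockType t : BlockType.values()) {
      hits.put(t, new LongAdder());
      misses.put(t, new LongAdder());
    }
  }

  public void recordHit(BlockType t)  { hits.get(t).increment(); }
  public void recordMiss(BlockType t) { misses.get(t).increment(); }

  /** Hit ratio for one block type; a ROOT_INDEX miss hurts far more than a DATA miss. */
  public double hitRatio(BlockType t) {
    long h = hits.get(t).sum();
    long m = misses.get(t).sum();
    return (h + m) == 0 ? 0.0 : (double) h / (h + m);
  }
}
{code}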



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14985) TimeRange constructors should set allTime when appropriate

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201130#comment-15201130
 ] 

Hadoop QA commented on HBASE-14985:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 1s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 11s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 10s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 30s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 25s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 256m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay |
|   | 

[jira] [Commented] (HBASE-14918) In-Memory MemStore Flush and Compaction

2016-03-20 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203266#comment-15203266
 ] 

Eshcar Hillel commented on HBASE-14918:
---

A new patch is attached to task HBASE-14920 (new compacting memstore 
implementation). The patch is not small ;) please review.

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: CellBlocksSegmentDesign.pdf, MSLABMove.patch
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest applying an in-memory 
> memstore flush. That is, flushing the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up less space in RAM. The less space the buffer consumes, the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest structuring the solution as 4 subtasks (respectively, patches): 
> (1) Infrastructure: refactoring of the MemStore hierarchy, introducing the 
> segment (StoreSegment) as a first-class citizen, and decoupling the memstore 
> scanner from the memstore implementation;
> (2) Adding a StoreServices facility at the region level to allow memstores to 
> update region counters and access the region-level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with a non-optimized 
> immutable segment representation; and 
> (4) Memory optimization, including a compressed format representation and 
> off-heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 
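
As a conceptual sketch only (this is not the HBASE-14920 code, and the class 
below is made up), the flow described above can be pictured as an active map 
that is flushed in memory into a pipeline of immutable segments, which can 
then be compacted by keeping a single version per key:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

/** Conceptual sketch of an in-memory flush/compaction pipeline. */
public class InMemoryPipeline {
  private ConcurrentSkipListMap<String, byte[]> active = new ConcurrentSkipListMap<>();
  private final List<SortedMap<String, byte[]>> pipeline = new ArrayList<>();

  public void put(String key, byte[] value) {
    active.put(key, value);
  }

  /** In-memory flush: move the active segment into the immutable pipeline. */
  public synchronized void inMemoryFlush() {
    pipeline.add(0, active); // newest segment first
    active = new ConcurrentSkipListMap<>();
  }

  /** In-memory compaction: merge segments, eliminating redundant versions. */
  public synchronized void compact() {
    SortedMap<String, byte[]> merged = new TreeMap<>();
    // iterate oldest to newest so newer values overwrite older ones
    for (int i = pipeline.size() - 1; i >= 0; i--) {
      merged.putAll(pipeline.get(i));
    }
    pipeline.clear();
    pipeline.add(merged);
  }
}
{code}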



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14920) Compacting Memstore

2016-03-20 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-14920:
--
Status: Patch Available  (was: Open)

> Compacting Memstore
> ---
>
> Key: HBASE-14920
> URL: https://issues.apache.org/jira/browse/HBASE-14920
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14920-V01.patch
>
>
> Implementation of a new compacting memstore with non-optimized immutable 
> segment representation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14920) Compacting Memstore

2016-03-20 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-14920:
--
Attachment: HBASE-14920-V01.patch

> Compacting Memstore
> ---
>
> Key: HBASE-14920
> URL: https://issues.apache.org/jira/browse/HBASE-14920
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14920-V01.patch
>
>
> Implementation of a new compacting memstore with non-optimized immutable 
> segment representation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15441) Fix WAL splitting when region has moved multiple times

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198800#comment-15198800
 ] 

Hudson commented on HBASE-15441:


FAILURE: Integrated in HBase-Trunk_matrix #782 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/782/])
HBASE-15441 Fix WAL splitting when region has moved multiple times (eclark: rev 
ecec35ae4e6867da3b42d674f6eccbe8a9f7d533)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Fix WAL splitting when region has moved multiple times
> --
>
> Key: HBASE-15441
> URL: https://issues.apache.org/jira/browse/HBASE-15441
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.4.0
>
> Attachments: HBASE-15441-v1.patch, HBASE-15441-v2.patch, 
> HBASE-15441-v3.patch, HBASE-15441-v4.patch, HBASE-15441-v5.patch, 
> HBASE-15441.patch
>
>
> Currently WAL splitting is broken when a region has been opened multiple 
> times in recent minutes.
> Region open and region close write event markers to the WAL. These markers 
> should carry the sequence id, but it is currently getting set to 1. That 
> means that if a region has moved multiple times in the last few minutes, then 
> multiple split-log workers will try to create the recovered-edits file for 
> sequence id 1. One of the workers will fail and, on failing, will delete 
> the recovered edits, causing all WAL-split attempts to fail.
> We need to:
> # make sure that close gets the correct sequence id for open.
> # filter all region events from recovered edits.
> It appears that the close event with a sequence id of one is coming from 
> region warm up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14138) HBase Backup/Restore Phase 3: Security

2016-03-20 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200113#comment-15200113
 ] 

Vladimir Rodionov commented on HBASE-14138:
---

Moved to Phase 3.

> HBase Backup/Restore Phase 3: Security
> --
>
> Key: HBASE-14138
> URL: https://issues.apache.org/jira/browse/HBASE-14138
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>
> Security is not supported. Only an authorized user (GLOBAL ADMIN) should be 
> allowed to perform backup/restore. See HBASE-7367 for a good discussion of 
> the snapshot security model. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15436) BufferedMutatorImpl.flush() appears to get stuck

2016-03-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197343#comment-15197343
 ] 

Naganarasimha G R commented on HBASE-15436:
---

Valid point, let me discuss this more with the ATS team...

> BufferedMutatorImpl.flush() appears to get stuck
> 
>
> Key: HBASE-15436
> URL: https://issues.apache.org/jira/browse/HBASE-15436
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.2
>Reporter: Sangjin Lee
> Attachments: hbaseException.log, threaddump.log
>
>
> We noticed an instance where the thread that was executing a flush 
> ({{BufferedMutatorImpl.flush()}}) got stuck when the (local one-node) cluster 
> shut down and was unable to get out of that stuck state.
> The setup is a single node HBase cluster, and apparently the cluster went 
> away when the client was executing flush. The flush eventually logged a 
> failure after 30+ minutes of retrying. That is understandable.
> What is unexpected is that the thread is stuck in this state (i.e. in the 
> {{flush()}} call). I would have expected the {{flush()}} call to return after 
> the complete failure.
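
For readers unfamiliar with the API in question, a minimal usage sketch of 
BufferedMutator and flush() follows (table, family and qualifier names are 
illustrative; this does not reproduce the reported hang, it only shows the 
call that got stuck):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedMutatorFlushExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator = conn.getBufferedMutator(TableName.valueOf("mytable"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      mutator.mutate(put); // buffered locally until the buffer fills or flush() is called
      mutator.flush();     // expected to return once mutations are sent or retries are exhausted
    }
  }
}
{code}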



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15475) Allow TimestampsFilter to provide a seek hint

2016-03-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199152#comment-15199152
 ] 

Elliott Clark commented on HBASE-15475:
---

Rebased so that the CI can pick up test fixes from HBASE-15390

> Allow TimestampsFilter to provide a seek hint
> -
>
> Key: HBASE-15475
> URL: https://issues.apache.org/jira/browse/HBASE-15475
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Filters, regionserver
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15475-v1.patch, HBASE-15475-v2.patch, 
> HBASE-15475-v3.patch, HBASE-15475-v4.patch, HBASE-15475.patch
>
>
> If a user wants to read specific timestamps, it is currently a very linear 
> scan, so that all deletes can be respected. However, if the user doesn't care 
> about deletes, letting the filter seek can dramatically speed up the request.
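
For context, a basic TimestampsFilter scan looks like the sketch below (table 
name and timestamp values are illustrative). The improvement in this issue is 
to optionally let the filter emit seek hints instead of walking every cell when 
the caller does not care about deletes; that new option lives in the attached 
patches and is not shown here.

{code}
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.TimestampsFilter;

public class TimestampsScanExample {
  // 'conn' is assumed to be an open HBase Connection.
  static void scanSpecificTimestamps(Connection conn) throws IOException {
    Scan scan = new Scan();
    // Only return cells whose timestamps are exactly in this list.
    scan.setFilter(new TimestampsFilter(Arrays.asList(1000L, 2000L, 3000L)));
    try (Table table = conn.getTable(TableName.valueOf("mytable"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // process r
      }
    }
  }
}
{code}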



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203228#comment-15203228
 ] 

Hudson commented on HBASE-15479:


SUCCESS: Integrated in HBase-1.2 #581 (See 
[https://builds.apache.org/job/HBase-1.2/581/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev f8fd7d1f2c65f391e20195c7cb57d30a3f091c94)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> A quick journey with JMC in profile mode revealed a very interesting and 
> unexpected heap polluter on the client side. A patch will follow shortly. 
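
As a generic illustration of this class of problem (not the actual AsyncProcess 
code path), autoboxing on a hot path allocates a fresh wrapper object per 
operation, which a profiler such as JMC surfaces as unexpected garbage:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative only: one common way autoboxing pollutes the heap on a hot path. */
public class AutoboxingGarbage {
  // Every increment below boxes a new Long once the value leaves the small-integer cache.
  static void boxedCounter(Map<String, Long> counters, String key) {
    Long old = counters.get(key);
    counters.put(key, old == null ? 1L : old + 1L); // old + 1L allocates a new Long
  }

  // A mutable counter avoids per-increment allocations once the entry exists.
  static void unboxedCounter(Map<String, AtomicLong> counters, String key) {
    counters.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
  }

  public static void main(String[] args) {
    Map<String, Long> boxed = new HashMap<>();
    Map<String, AtomicLong> unboxed = new HashMap<>();
    for (int i = 0; i < 1_000_000; i++) {
      boxedCounter(boxed, "requests");
      unboxedCounter(unboxed, "requests");
    }
    System.out.println(boxed.get("requests") + " " + unboxed.get("requests"));
  }
}
{code}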



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-03-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201402#comment-15201402
 ] 

Ted Yu commented on HBASE-15406:


bq. it will cause conflicts when two hbck instances run at the same time. 

There is a lock mechanism in place to prevent two hbck instances from running at 
the same time. You don't need to add extra code covering this.

Since the key can be ignored, it is not that useful. You can remove the code 
related to the key.

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, test.patch, wip.patch
>
>
> This was what I did on a cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on the gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15411:
---
Attachment: 15411-v14.txt

Patch v14 simplifies restore related tests by dropping HBackupFileSystem since 
HBackupFileSystem is no longer needed in client API.

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v13.txt, 15411-v14.txt, 15411-v3.txt, 15411-v5.txt, 15411-v6.txt, 
> 15411-v7.txt, 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.
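
A simplified sketch of the enum-driven flow described above (it deliberately 
avoids the real Procedure V2 classes; the state names and methods below are 
made up for illustration) might look like:

{code}
/** Sketch of executeFromState()-style pacing; not the real Procedure V2 API. */
public class FullBackupStateMachine {
  enum BackupState { PRE_SNAPSHOT, SNAPSHOT_TABLES, EXPORT_SNAPSHOTS, FINISH, DONE }

  private BackupState state = BackupState.PRE_SNAPSHOT;

  /** Execute one step based on the current state and return the next state. */
  BackupState executeFromState(BackupState current) {
    switch (current) {
      case PRE_SNAPSHOT:     /* write backup start marker */     return BackupState.SNAPSHOT_TABLES;
      case SNAPSHOT_TABLES:  /* snapshot each table */           return BackupState.EXPORT_SNAPSHOTS;
      case EXPORT_SNAPSHOTS: /* copy snapshot files to target */ return BackupState.FINISH;
      case FINISH:           /* record backup manifest */        return BackupState.DONE;
      default:               return BackupState.DONE;
    }
  }

  void run() {
    while (state != BackupState.DONE) {
      state = executeFromState(state);
      // a real procedure would persist its state here so it can resume after a crash
    }
  }
}
{code}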



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15469) Take snapshot by family

2016-03-20 Thread Jianwei Cui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianwei Cui updated HBASE-15469:

Attachment: HBASE-15469-v1.patch

> Take snapshot by family
> ---
>
> Key: HBASE-15469
> URL: https://issues.apache.org/jira/browse/HBASE-15469
> Project: HBase
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
> Attachments: HBASE-15469-v1.patch
>
>
> In our production environment, there are some 'wide' tables in the offline 
> cluster. A 'wide' table has a number of families, and different applications 
> access different families of the table through MapReduce. When an application 
> starts to provide online service, we need to copy the needed families from 
> the offline cluster to the online cluster. For future writes, inter-cluster 
> replication supports setting families for a table, so we can use it to copy 
> future edits for the needed families. For existing data, we can take a 
> snapshot of the table on the offline cluster, then use {{ExportSnapshot}} to 
> copy the snapshot to the online cluster and clone it. However, we can only 
> take a snapshot of the whole table, in which many families are not needed by 
> the application, and this leads to unnecessary data copying. I think it is 
> useful to support taking snapshots by family, so that we copy only the 
> needed data.
> A possible solution to support this:
> 1. Add a family names field to the protobuf definition of 
> {{SnapshotDescription}}
> 2. Allow setting families when taking a snapshot in the hbase shell, such as:
> {code}
>snapshot 'tableName', 'snapshotName', 'FamilyA', 'FamilyB', {SKIP_FLUSH => 
> true}
> {code}
> 3. Add family names to {{SnapshotDescription}} on the client side
> 4. Read family names from {{SnapshotDescription}} in the Master/RegionServer, 
> and keep only the requested families when taking the snapshot for a region.
> Discussions and suggestions are welcomed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in table descriptor

2016-03-20 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15456:
-
Attachment: HBASE-15456-001_branch-1.patch

Patch for branch-1

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in table descriptor
> ---
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15456-001_branch-1.patch, HBASE-15456-v001.patch, 
> HBASE-15456-v002.patch, HBASE-15456-v002.patch, HBASE-15456-v003.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family, so if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203217#comment-15203217
 ] 

Hudson commented on HBASE-15479:


FAILURE: Integrated in HBase-1.1-JDK7 #1684 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1684/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev d5bf8c86d21fb24c17fbc8ad72f4a5c9ccd034ba)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> A quick journey with JMC in profile mode revealed a very interesting and 
> unexpected heap polluter on the client side. A patch will follow shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15487) Deletions done via BulkDeleteEndpoint make past data re-appear

2016-03-20 Thread Mathias Herberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203202#comment-15203202
 ] 

Mathias Herberts commented on HBASE-15487:
--

By setting setRaw(true) the deletion works as expected when using 
BulkDeleteEndpoint, but this is not the case when using regular deletes.
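
For reference, a raw scan is enabled as in the minimal sketch below; raw scans 
also return delete markers and not-yet-collected deleted cells, so their 
results can differ from a normal scan.

{code}
import org.apache.hadoop.hbase.client.Scan;

public class RawScanExample {
  // Minimal sketch: a raw scan also returns delete markers and deleted cells
  // that have not been collected yet.
  static Scan buildRawScan() {
    Scan scan = new Scan();
    scan.setRaw(true);
    return scan;
  }
}
{code}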

> Deletions done via BulkDeleteEndpoint make past data re-appear
> --
>
> Key: HBASE-15487
> URL: https://issues.apache.org/jira/browse/HBASE-15487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Mathias Herberts
> Attachments: HBaseTest.java, HBaseTest.java
>
>
> The Warp10 (www.warp10.io) time series database uses HBase as its underlying 
> data store. The deletion of ranges of cells is performed using the 
> BulkDeleteEndpoint.
> In the following scenario the deletion does not appear to be working properly:
> The table 't' is created with a single version using:
> create 't', {NAME => 'v', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS=> '1', MIN_VERSIONS => '0', TTL => 
> '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY 
> =>'false', BLOCKCACHE => 'true'}
> We write a cell at row '0x00', colfam 'v', colq '', value 0x0
> We write the same cell again with value 0x1
> A scan will return a single value 0x1
> We then perform a delete using the BulkDeleteEndpoint and a Scan with a 
> DeleteType of 'VERSION'
> The reported number of deleted versions is 1 (which is coherent given the 
> table was created with MAX_VERSIONS=1)
> The same scan as the one performed before the delete returns a single value 
> 0x0.
> This seems to happen when all operations are performed against the memstore.
> A regular delete will remove the cell and a later scan won't show it.
> I'll attach a test which demonstrates the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12940) Expose listPeerConfigs and getPeerConfig to the HBase shell

2016-03-20 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198414#comment-15198414
 ] 

Geoffrey Jacoby commented on HBASE-12940:
-

Yes, I noticed those, and they concerned me at first. On a deeper look though, 
these appear to be coming from HBase trying to actually set up replication with 
remote clusters that aren't there. ReplicationPeersZKImpl assumes that 
the remote ZooKeeper quorum it's given as the cluster key is a real thing that 
it can immediately go talk to. But none of the quorums mentioned in the tests 
exist, so the watcher threads on the minicluster are unhappy. 

But that doesn't stop us from verifying that the replication_admin wrapper and 
ReplicationAdmin class are successfully putting data into _our_ minicluster ZK 
instance, and able to get it back out again. 

There are also some exceptions related to an unneeded table the existing test 
setup creates, and I filed a separate JIRA, HBASE-15472, to take care of that 
one. 


> Expose listPeerConfigs and getPeerConfig to the HBase shell
> ---
>
> Key: HBASE-12940
> URL: https://issues.apache.org/jira/browse/HBASE-12940
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Kevin Risden
>Assignee: Geoffrey Jacoby
> Attachments: HBASE-12940-v1.patch, HBASE-12940.patch
>
>
> In HBASE-12867 found that listPeerConfigs and getPeerConfig from 
> ReplicationAdmin are not exposed to the HBase shell. This makes looking at 
> details for custom replication endpoints and testing of add_peer from 
> HBASE-12867 impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14856) document region servers groups in the book

2016-03-20 Thread Francis Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Liu updated HBASE-14856:

Issue Type: Task  (was: Sub-task)
Parent: (was: HBASE-6721)

> document region servers groups in the book
> --
>
> Key: HBASE-14856
> URL: https://issues.apache.org/jira/browse/HBASE-14856
> Project: HBase
>  Issue Type: Task
>Reporter: Francis Liu
>Assignee: Francis Liu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15398) Cells loss or disorder when using family essential filter and partial scanning protocol

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201774#comment-15201774
 ] 

Hadoop QA commented on HBASE-15398:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 6s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 40s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 54s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 130m 19s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
51s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 305m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
|   | hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | 

[jira] [Commented] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in table descriptor

2016-03-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201412#comment-15201412
 ] 

Ted Yu commented on HBASE-15456:


Integrated to branch-1 after running TestThriftServer locally.

If the build is green, will integrate to other branches.

> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in table descriptor
> ---
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15456-001_branch-1.patch, 
> HBASE-15456-branch-1.patch, HBASE-15456-branch-1.patch, 
> HBASE-15456-branch-1_v002.patch, HBASE-15456-v001.patch, 
> HBASE-15456-v002.patch, HBASE-15456-v002.patch, HBASE-15456-v003.patch, 
> HBASE-15456-v004.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, hbase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception. lastStoreFlushTimeMap is populated per family, so if 
> there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15325) ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests

2016-03-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15325:
--
Attachment: HBASE-15325-v11.patch

retry QA

> ResultScanner allowing partial result will miss the rest of the row if the 
> region is moved between two rpc requests
> ---
>
> Key: HBASE-15325
> URL: https://issues.apache.org/jira/browse/HBASE-15325
> Project: HBase
>  Issue Type: Bug
>  Components: dataloss, Scanners
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4, 1.4.0
>
> Attachments: 15325-test.txt, HBASE-15325-v1.txt, 
> HBASE-15325-v10.patch, HBASE-15325-v11.patch, HBASE-15325-v2.txt, 
> HBASE-15325-v3.txt, HBASE-15325-v5.txt, HBASE-15325-v6.1.txt, 
> HBASE-15325-v6.2.txt, HBASE-15325-v6.3.txt, HBASE-15325-v6.4.txt, 
> HBASE-15325-v6.5.txt, HBASE-15325-v6.txt, HBASE-15325-v7.patch, 
> HBASE-15325-v8.patch, HBASE-15325-v9.patch
>
>
> HBASE-11544 allows a scan RPC to return part of a row, to reduce the memory 
> usage of a single RPC request, and the client can use setAllowPartialResults 
> or setBatch to get several cells of a row instead of the whole row.
> However, the state of the scanner is saved on the server, and we need it to 
> get the next part when the previous result was partial. If the region moves 
> to another RS, the client will get a NotServingRegionException and open a new 
> scanner against the new RS, which will be treated as a new scan starting from 
> the end of this row. So the remaining cells of the row from the last result 
> will be missing.
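
For reference, the client-side settings mentioned above look like the sketch 
below (the values are illustrative); the bug itself is about what happens on 
the server when such a scan is interrupted by a region move:

{code}
import org.apache.hadoop.hbase.client.Scan;

public class PartialScanExample {
  static Scan buildPartialScan() {
    Scan scan = new Scan();
    scan.setAllowPartialResults(true); // Results may contain only part of a row
    scan.setBatch(100);                // at most 100 cells of a row per Result
    return scan;
  }
}
{code}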



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15475) Allow TimestampsFilter to provide a seek hint

2016-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199655#comment-15199655
 ] 

Hadoop QA commented on HBASE-15475:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 56s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
34s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
20s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 59s {color} 
| {color:red} hbase-server-jdk1.8.0 with JDK v1.8.0 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 59s {color} 
| {color:red} hbase-server-jdk1.8.0 with JDK v1.8.0 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 11s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 11s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hbase-protocol: patch generated 0 new + 0 unchanged 
- 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} hbase-protocol: patch generated 0 new + 0 unchanged 
- 3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hbase-client: patch generated 0 new + 0 unchanged - 
3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} hbase-client: patch generated 0 new + 0 unchanged - 
3 fixed = 0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | 

[jira] [Commented] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-20 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200137#comment-15200137
 ] 

Vladimir Rodionov commented on HBASE-15411:
---

Moved to Phase 1.

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v13.txt, 15411-v14.txt, 15411-v15.txt, 15411-v3.txt, 15411-v5.txt, 
> 15411-v6.txt, 15411-v7.txt, 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15064) BufferUnderflowException after last Cell fetched from an HFile Block served from L2 offheap cache

2016-03-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198769#comment-15198769
 ] 

ramkrishna.s.vasudevan commented on HBASE-15064:


bq. because if somebody is trying to limit at a place which is exactly at the 
boundary of the limitIndexBuffer, then we are also including the last item, 
which does not have any data, as you are limiting at 0 (limit == 
limitedIndexBegin, which is at the boundary). But then, once you have read 
everything in the previous buffer, if the client consults the hasRemaining 
function it will again return true (as curItem < no_of_items in the array), but 
when you actually try to read anything we will throw BufferUnderflowException 
because, again, the last element has no data. 
This is again true. If you see MBB as a generic API, what you say is true.

> BufferUnderflowException after last Cell fetched from an HFile Block served 
> from L2 offheap cache
> -
>
> Key: HBASE-15064
> URL: https://issues.apache.org/jira/browse/HBASE-15064
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.0
>Reporter: deepankar
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15064.patch
>
>
> While running the newer patches on our production system, I saw this error 
> come couple of times 
> {noformat}
> ipc.RpcServer: Unexpected throwable object 
> 2016-01-01 16:42:56,090 ERROR 
> [B.defaultRpcServer.handler=20,queue=20,port=60020] ipc.RpcServer: Unexpected 
> throwable object 
> java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:500)
> at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:249)
> at org.apache.hadoop.hbase.nio.MultiByteBuff.get(MultiByteBuff.java:494)
> at 
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:402)
>  
> at 
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:517)
>  
> at 
> org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.next(BufferedDataBlockEncoder.java:815)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:138)
> {noformat}
> Looking at the get code 
> {code}
> if (this.curItem.remaining() == 0) {
>   if (items.length - 1 == this.curItemIndex) {
> // means cur item is the last one and we wont be able to read a long. 
> Throw exception
> throw new BufferUnderflowException();
>   }
>   this.curItemIndex++;
>   this.curItem = this.items[this.curItemIndex];
> }
> return this.curItem.get();
> {code}
> Can the new curItem have zero elements (position == limit)? Does it make 
> sense to change the {{if}} to a {{while}}, i.e. {{while 
> (this.curItem.remaining() == 0)}}? This logic is repeated, so it may make 
> sense to abstract it into a new function if we plan to change the {{if}} to 
> {{while}}.
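
A sketch of the change suggested above, reusing the fields from the quoted 
snippet (this is an illustration of the proposal, not the committed fix):

{code}
// Skip over any exhausted buffers before reading, instead of assuming
// at most one item can be empty.
public byte get() {
  while (this.curItem.remaining() == 0) {
    if (items.length - 1 == this.curItemIndex) {
      // the current item is the last one and it is empty; nothing left to read
      throw new BufferUnderflowException();
    }
    this.curItemIndex++;
    this.curItem = this.items[this.curItemIndex];
  }
  return this.curItem.get();
}
{code}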



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15490) Remove duplicated CompactionThroughputControllerFactory in branch-1

2016-03-20 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15490:
--
   Resolution: Fixed
Fix Version/s: 1.3.0
   Status: Resolved  (was: Patch Available)

Pushed into branch-1. Thanks [~tedyu] for review.

> Remove duplicated CompactionThroughputControllerFactory in branch-1
> ---
>
> Key: HBASE-15490
> URL: https://issues.apache.org/jira/browse/HBASE-15490
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0
>
> Attachments: HBASE-15490.branch-1.patch, HBASE-15490.patch
>
>
> Currently there are two {{CompactionThroughputControllerFactory}} classes in 
> our branch-1 code base (one in the {{o.a.h.h.regionserver.compactions}} 
> package, the other in {{o.a.h.h.regionserver.throttle}}) and both are in use.
> This is a regression from HBASE-14969 and only exists in branch-1. We should 
> remove the one in {{o.a.h.h.regionserver.compactions}}, and change the 
> default compaction throughput controller back to 
> {{NoLimitThroughputController}} to stay compatible with previous branch-1 
> versions.
> Thanks [~ghelmling] for pointing out the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15490) Remove duplicated CompactionThroughputControllerFactory in branch-1

2016-03-20 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15490:
--
Summary: Remove duplicated CompactionThroughputControllerFactory in 
branch-1  (was: Two CompactionThroughputControllerFactory co-exist in branch-1)

> Remove duplicated CompactionThroughputControllerFactory in branch-1
> ---
>
> Key: HBASE-15490
> URL: https://issues.apache.org/jira/browse/HBASE-15490
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-15490.branch-1.patch, HBASE-15490.patch
>
>
> Currently there are two {{CompactionThroughputControllerFactory}} classes in 
> our branch-1 code base (one in the {{o.a.h.h.regionserver.compactions}} 
> package, the other in {{o.a.h.h.regionserver.throttle}}) and both are in use.
> This is a regression from HBASE-14969 and only exists in branch-1. We should 
> remove the one in {{o.a.h.h.regionserver.compactions}}, and change the 
> default compaction throughput controller back to 
> {{NoLimitThroughputController}} to stay compatible with previous branch-1 
> versions.
> Thanks [~ghelmling] for pointing out the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15483) After disabling Authorization, user should not be allowed to modify ACL record

2016-03-20 Thread meiwen li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

meiwen li updated HBASE-15483:
--
Issue Type: Improvement  (was: Bug)

> After disabling Authorization, user should not be allowed to modify ACL 
> record 
> ---
>
> Key: HBASE-15483
> URL: https://issues.apache.org/jira/browse/HBASE-15483
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: meiwen li
>
> After setting hbase.security.authorization to false, HBase does NOT do any 
> authority check for any operation by any user. Thus, any user, including a 
> read-only user, has the authority to grant  . The 
> change to the ACL record persists and will take effect the next time 
> authorization is enabled. 
> The consequence is:
> A read-only user can change an admin user into a "readonly" user after a 
> round of "disable authorization" and "enable authorization".
> Also,
> A read-only user can change a "readonly" user into an Admin after such a 
> round of disable/enable.
> It is expected that 
> after authorization is disabled, the authorization-related file, the ACL 
> record, should not be open to users and should not be changed. Otherwise, 
> after the next authorization enablement, the changed ACL takes effect and 
> users get unexpected authority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15483) After disabling Authorization, user should not be allowed to modify ACL record

2016-03-20 Thread meiwen li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203150#comment-15203150
 ] 

meiwen li commented on HBASE-15483:
---

Thank you. I read the release notes and understand the current implementation. 
However, I feel this is a little weird and am afraid this might not be what 
users expect.  

It looks like you have a plan to improve this?

> After disabling Authorization, user should not be allowed to modify ACL 
> record 
> ---
>
> Key: HBASE-15483
> URL: https://issues.apache.org/jira/browse/HBASE-15483
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: meiwen li
>
> After setting hbase.security.authorization to false, HBase does NOT do any 
> authority check for any operation by any user. Thus, any user, including a 
> read-only user, has the authority to grant  . The 
> change to the ACL record persists and will take effect the next time 
> authorization is enabled. 
> The consequence is:
> A read-only user can change an admin user into a "readonly" user after a 
> round of "disable authorization" and "enable authorization".
> Also,
> A read-only user can change a "readonly" user into an Admin after such a 
> round of disable/enable.
> It is expected that 
> after authorization is disabled, the authorization-related file, the ACL 
> record, should not be open to users and should not be changed. Otherwise, 
> after the next authorization enablement, the changed ACL takes effect and 
> users get unexpected authority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203144#comment-15203144
 ] 

Hudson commented on HBASE-15479:


SUCCESS: Integrated in HBase-1.2-IT #466 (See 
[https://builds.apache.org/job/HBase-1.2-IT/466/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev f8fd7d1f2c65f391e20195c7cb57d30a3f091c94)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> A quick journey with JMC in profile mode revealed a very interesting and 
> unexpected heap polluter on the client side. A patch will follow shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203142#comment-15203142
 ] 

Hudson commented on HBASE-15479:


FAILURE: Integrated in HBase-1.1-JDK8 #1770 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1770/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev d5bf8c86d21fb24c17fbc8ad72f4a5c9ccd034ba)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> Quick journey with JMC in a profile mode revealed very interesting and 
> unexpected heap polluter on a client side. Patch will shortly follow. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-20 Thread Phil Yang (JIRA)
Phil Yang created HBASE-15485:
-

 Summary: Filter.reset() should not be called between batches
 Key: HBASE-15485
 URL: https://issues.apache.org/jira/browse/HBASE-15485
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.3, 1.2.0
Reporter: Phil Yang
Assignee: Phil Yang


As discussed in HBASE-15325, we now call resetFilters if a partial result has not 
been formed, but we should not reset filters when the batch limit is reached.
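
For context, a minimal sketch (table name and limits are hypothetical) of a scan 
where per-row filter state must survive batch boundaries: ColumnCountGetFilter counts 
the columns it has kept for the current row, and a batch limit of 2 means a wide row 
comes back as several partial Results. If reset() were called each time the batch 
limit was hit, the count would restart mid-row and more than the intended 3 columns 
per row could slip through.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;

public class BatchFilterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Scan scan = new Scan();
    scan.setBatch(2);                              // at most 2 cells per Result
    scan.setFilter(new ColumnCountGetFilter(3));   // keep at most 3 columns per row
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result slice : scanner) {
        // Each Result here may be a batch-sized slice of one row; the filter's
        // per-row column count must carry over from one slice to the next.
        System.out.println(slice);
      }
    }
  }
}
{code}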



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15481) Add pre/post roll to WALObserver

2016-03-20 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-15481:
---

 Summary: Add pre/post roll to WALObserver
 Key: HBASE-15481
 URL: https://issues.apache.org/jira/browse/HBASE-15481
 Project: HBase
  Issue Type: New Feature
Affects Versions: 1.3.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0, 1.3.0
 Attachments: HBASE-15481-v0.patch

Currently the WALObserver has only pre/post write hooks. It would be useful to 
have pre/post roll hooks too. 
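
A sketch of what a roll-aware observer could look like. The preWALRoll/postWALRoll 
names and parameters below mirror the intent of the attached patch but are 
illustrative, not a committed API; they would only be invoked on versions that 
actually ship the new hooks.

{code}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.coprocessor.BaseWALObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.WALCoprocessorEnvironment;

public class RollLoggingWALObserver extends BaseWALObserver {
  private static final Log LOG = LogFactory.getLog(RollLoggingWALObserver.class);

  // Would be called before the old WAL file is closed and a new one is opened.
  public void preWALRoll(ObserverContext<? extends WALCoprocessorEnvironment> ctx,
      Path oldPath, Path newPath) throws IOException {
    LOG.info("WAL roll starting: " + oldPath + " -> " + newPath);
  }

  // Would be called after the roll has completed and writes go to the new file.
  public void postWALRoll(ObserverContext<? extends WALCoprocessorEnvironment> ctx,
      Path oldPath, Path newPath) throws IOException {
    LOG.info("WAL roll finished: " + oldPath + " -> " + newPath);
  }
}
{code}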



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15456) CreateTableProcedure/ModifyTableProcedure needs to fail when there is no family in table descriptor

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202233#comment-15202233
 ] 

Hudson commented on HBASE-15456:


FAILURE: Integrated in HBase-Trunk_matrix #788 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/788/])
HBASE-15456 CreateTableProcedure/ModifyTableProcedure needs to fail when 
(tedyu: rev 3a6d683d63089c1986e68e531939df7328e58300)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableProcedure.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CreateTableProcedure.java
* 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestModifyTableProcedure.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionMover.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> CreateTableProcedure/ModifyTableProcedure needs to fail when there is no 
> family in table descriptor
> ---
>
> Key: HBASE-15456
> URL: https://issues.apache.org/jira/browse/HBASE-15456
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15456-001_branch-1.patch, 
> HBASE-15456-branch-1.patch, HBASE-15456-branch-1.patch, 
> HBASE-15456-branch-1_v002.patch, HBASE-15456-v001.patch, 
> HBASE-15456-v002.patch, HBASE-15456-v002.patch, HBASE-15456-v003.patch, 
> HBASE-15456-v004.patch
>
>
> If there is only one family in the table, DeleteColumnFamilyProcedure will 
> fail. 
> Currently, when hbase.table.sanity.checks is set to false, the HBase master logs 
> a warning and CreateTableProcedure/ModifyTableProcedure will succeed. 
> This behavior is not consistent with DeleteColumnFamilyProcedure's. 
> Another point: before HBASE-13145, PeriodicMemstoreFlusher would run into the 
> following exception, because lastStoreFlushTimeMap is populated per family and, 
> if there is no family in the table, there is no entry in lastStoreFlushTimeMap.
> {code}
> 16/02/01 11:14:26 ERROR regionserver.HRegionServer$PeriodicMemstoreFlusher: 
> Caught exception 
> java.util.NoSuchElementException 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at java.util.Collections.min(Collections.java:628) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getEarliestFlushTimeForAllStores(HRegion.java:1572)
>  
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.shouldFlush(HRegion.java:1904) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$PeriodicMemstoreFlusher.chore(HRegionServer.java:1509)
>  
> at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
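
For reference, a minimal, self-contained illustration (plain JDK code, not HBase) of 
why an empty lastStoreFlushTimeMap breaks the chore: Collections.min over an empty 
collection throws NoSuchElementException, which is exactly the failure in the stack 
trace above.

{code}
import java.util.Collections;
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;

public class EmptyMinDemo {
  public static void main(String[] args) {
    // Stands in for lastStoreFlushTimeMap when a table has no column family:
    // nothing is ever inserted, so min() over the values has nothing to return.
    ConcurrentHashMap<String, Long> lastStoreFlushTimeMap = new ConcurrentHashMap<>();
    try {
      long earliest = Collections.min(lastStoreFlushTimeMap.values());
      System.out.println("earliest flush time: " + earliest);
    } catch (NoSuchElementException e) {
      // This is the failure PeriodicMemstoreFlusher hit before HBASE-13145.
      System.out.println("Collections.min over an empty collection throws: " + e);
    }
  }
}
{code}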



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203131#comment-15203131
 ] 

Hudson commented on HBASE-15479:


SUCCESS: Integrated in HBase-1.3 #612 (See 
[https://builds.apache.org/job/HBase-1.3/612/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev bfd1776c640838c8f3b45cbb8e1259c49e0418d5)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> Quick journey with JMC in a profile mode revealed very interesting and 
> unexpected heap polluter on a client side. Patch will shortly follow. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15488) Add ACL for setting split merge switch

2016-03-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15488:
---
Attachment: HBASE-15488.v1.patch

> Add ACL for setting split merge switch
> --
>
> Key: HBASE-15488
> URL: https://issues.apache.org/jira/browse/HBASE-15488
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15488.v1.patch
>
>
> Currently there is no access control for the split merge switch setter in 
> MasterRpcServices.
> This JIRA adds necessary coprocessor hook along with enforcing permission 
> check in AccessController through the new hook.
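
For illustration, a minimal, self-contained sketch of the pattern described (plain 
Java, not the actual MasterObserver/AccessController code; names are hypothetical): 
the setter is wrapped by a "pre" hook that rejects callers lacking ADMIN before the 
switch is flipped.

{code}
import java.io.IOException;
import java.util.EnumSet;
import java.util.Set;

public class SplitMergeSwitchSketch {
  enum Action { READ, WRITE, ADMIN }

  private final Set<Action> callerActions;
  private boolean splitOrMergeEnabled = true;

  SplitMergeSwitchSketch(Set<Action> callerActions) {
    this.callerActions = callerActions;
  }

  // Stand-in for the coprocessor "pre" hook enforcing the permission check.
  private void preSetSplitOrMergeEnabled() throws IOException {
    if (!callerActions.contains(Action.ADMIN)) {
      throw new IOException("Access denied: ADMIN required to toggle split/merge");
    }
  }

  public void setSplitOrMergeEnabled(boolean enabled) throws IOException {
    preSetSplitOrMergeEnabled();        // hook fires before the state change
    this.splitOrMergeEnabled = enabled;
  }

  public static void main(String[] args) {
    SplitMergeSwitchSketch asReadOnly =
        new SplitMergeSwitchSketch(EnumSet.of(Action.READ));
    try {
      asReadOnly.setSplitOrMergeEnabled(false);
    } catch (IOException e) {
      System.out.println(e.getMessage()); // the non-admin caller is rejected
    }
  }
}
{code}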



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15064) BufferUnderflowException after last Cell fetched from an HFile Block served from L2 offheap cache

2016-03-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198849#comment-15198849
 ] 

ramkrishna.s.vasudevan commented on HBASE-15064:


bq. Let us revisit all logic.. Add one more check for corner case again and again
Okay to revisit all the logic. But this fix is only for limit() and hasRemaining() 
I think.

> BufferUnderflowException after last Cell fetched from an HFile Block served 
> from L2 offheap cache
> -
>
> Key: HBASE-15064
> URL: https://issues.apache.org/jira/browse/HBASE-15064
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.0
>Reporter: deepankar
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15064.patch, MBB_hasRemaining.patch
>
>
> While running the newer patches on our production system, I saw this error 
> come up a couple of times: 
> {noformat}
> ipc.RpcServer: Unexpected throwable object 
> 2016-01-01 16:42:56,090 ERROR 
> [B.defaultRpcServer.handler=20,queue=20,port=60020] ipc.RpcServer: Unexpected 
> throwable object 
> java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:500)
> at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:249)
> at org.apache.hadoop.hbase.nio.MultiByteBuff.get(MultiByteBuff.java:494)
> at 
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:402)
>  
> at 
> org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:517)
>  
> at 
> org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.next(BufferedDataBlockEncoder.java:815)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:138)
> {noformat}
> Looking at the get code 
> {code}
> if (this.curItem.remaining() == 0) {
>   if (items.length - 1 == this.curItemIndex) {
> // means cur item is the last one and we wont be able to read a long. 
> Throw exception
> throw new BufferUnderflowException();
>   }
>   this.curItemIndex++;
>   this.curItem = this.items[this.curItemIndex];
> }
> return this.curItem.get();
> {code}
> Can the new currentItem have zero elements (position == limit)? Does it make 
> sense to change the {{if}} to {{while}}, i.e. {{while (this.curItem.remaining() 
> == 0)}}? This logic is repeated, so it may make sense to abstract it into a new 
> function if we plan to change the {{if}} to {{while}}.
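
A self-contained sketch of the suggestion above (illustrative, not the committed 
patch): with a while loop, an empty intermediate buffer is skipped, whereas the if 
would step onto it once and then read past its limit.

{code}
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class MultiBufferGetSketch {
  private final ByteBuffer[] items;
  private int curItemIndex = 0;
  private ByteBuffer curItem;

  MultiBufferGetSketch(ByteBuffer... items) {
    this.items = items;
    this.curItem = items[0];
  }

  byte get() {
    // 'while' instead of 'if': skip every fully-consumed buffer, not just one.
    while (curItem.remaining() == 0) {
      if (items.length - 1 == curItemIndex) {
        // current item is the last one and nothing is left to read
        throw new BufferUnderflowException();
      }
      curItem = items[++curItemIndex];
    }
    return curItem.get();
  }

  public static void main(String[] args) {
    // The middle buffer is empty (position == limit); the loop steps past it.
    ByteBuffer a = ByteBuffer.wrap(new byte[] { 1 });
    ByteBuffer empty = ByteBuffer.allocate(0);
    ByteBuffer b = ByteBuffer.wrap(new byte[] { 2 });
    MultiBufferGetSketch mbb = new MultiBufferGetSketch(a, empty, b);
    System.out.println(mbb.get()); // 1
    System.out.println(mbb.get()); // 2 -- with an 'if' this would underflow
  }
}
{code}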



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15479) No more garbage or beware of autoboxing

2016-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15203126#comment-15203126
 ] 

Hudson commented on HBASE-15479:


FAILURE: Integrated in HBase-1.4 #36 (See 
[https://builds.apache.org/job/HBase-1.4/36/])
HBASE-15479 No more garbage or beware of autoboxing (Vladimir Rodionov) (tedyu: 
rev 050c13e83f8483c5a44d543a842eb51e35644a9f)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> No more garbage or beware of autoboxing
> ---
>
> Key: HBASE-15479
> URL: https://issues.apache.org/jira/browse/HBASE-15479
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.19, 1.4.0, 1.1.5
>
> Attachments: HBASE-15479-v1.patch, HBASE-15479-v2.patch
>
>
> Quick journey with JMC in a profile mode revealed very interesting and 
> unexpected heap polluter on a client side. Patch will shortly follow. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15471) Add num calls in priority and general queue to RS UI

2016-03-20 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15471:
-

 Summary: Add num calls in priority and general queue to RS UI
 Key: HBASE-15471
 URL: https://issues.apache.org/jira/browse/HBASE-15471
 Project: HBase
  Issue Type: Improvement
  Components: UI
Affects Versions: 1.2.0
Reporter: Elliott Clark
 Fix For: 1.3.0


1.2 added the queue size. We should add the number of calls in the queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15389) Write out multiple files when compaction

2016-03-20 Thread Clara Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong reassigned HBASE-15389:
---

Assignee: Clara Xiong  (was: Duo Zhang)

> Write out multiple files when compaction
> 
>
> Key: HBASE-15389
> URL: https://issues.apache.org/jira/browse/HBASE-15389
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.19
>Reporter: Duo Zhang
>Assignee: Clara Xiong
> Fix For: 2.0.0, 1.3.0, 0.98.19
>
> Attachments: HBASE-15389-0.98.patch, HBASE-15389-uc.patch, 
> HBASE-15389-v1.patch, HBASE-15389-v10.patch, HBASE-15389-v11.patch, 
> HBASE-15389-v12.patch, HBASE-15389-v2.patch, HBASE-15389-v3.patch, 
> HBASE-15389-v4.patch, HBASE-15389-v5.patch, HBASE-15389-v6.patch, 
> HBASE-15389-v7.patch, HBASE-15389-v8.patch, HBASE-15389-v9.patch, 
> HBASE-15389.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15360) Fix flaky TestSimpleRpcScheduler

2016-03-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197312#comment-15197312
 ] 

stack commented on HBASE-15360:
---

Let's commit. Our tests are in a sorry state while this is failing and 
preventing the run of later tests. Why a non-interruptible wait? Can we just do a 
plain wait? Otherwise, +1. I'll commit it myself with an interruptible wait if you 
don't get to it [~Apache9]. Thanks.
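
For readers following along, a generic illustration (plain JDK code, not the 
HBASE-15360 patch) of the distinction being raised: an interruptible wait returns to 
the caller as soon as the thread is interrupted, while a non-interruptible wait keeps 
waiting until the condition or deadline is reached and only then restores the 
interrupt flag.

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WaitStyles {
  // Plain, interruptible wait: propagates InterruptedException to the caller.
  static boolean awaitInterruptibly(CountDownLatch latch, long millis)
      throws InterruptedException {
    return latch.await(millis, TimeUnit.MILLISECONDS);
  }

  // Non-interruptible wait: swallows interrupts until done, then restores the flag.
  static boolean awaitUninterruptibly(CountDownLatch latch, long millis) {
    boolean interrupted = false;
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(millis);
    try {
      while (true) {
        long remainingNanos = deadline - System.nanoTime();
        if (remainingNanos <= 0) {
          return latch.getCount() == 0;
        }
        try {
          return latch.await(remainingNanos, TimeUnit.NANOSECONDS);
        } catch (InterruptedException e) {
          interrupted = true; // keep waiting, but remember the interrupt
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}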

> Fix flaky TestSimpleRpcScheduler
> 
>
> Key: HBASE-15360
> URL: https://issues.apache.org/jira/browse/HBASE-15360
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15360.patch
>
>
> There were several flaky tests added there as part of HBASE-15306 and likely 
> HBASE-15136.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)