[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048728#comment-16048728
 ] 

Duo Zhang commented on HBASE-18213:
---

Yeah, the documentation is for branch-2, but we only generate documentation 
from master, so the patch will be committed to master. What's the correct 
approach for setting fix versions in this scenario?

Thanks.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18150) hbase.version file is created under both hbase.rootdir and hbase.wal.dir

2017-06-13 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048719#comment-16048719
 ] 

Xiang Li edited comment on HBASE-18150 at 6/14/17 5:25 AM:
---

Hi [~zyork], thanks very much for taking the time to review this JIRA. I should 
have read all the comments on HBASE-17437. I will talk to Jerry to better 
understand why he proposed making the duplication. Thanks again!


was (Author: water):
[~zyork]

> hbase.version file is created under both hbase.rootdir and hbase.wal.dir
> 
>
> Key: HBASE-18150
> URL: https://issues.apache.org/jira/browse/HBASE-18150
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.1
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Critical
> Fix For: 1.4.0
>
> Attachments: HBASE-18150.branch-1.000.patch
>
>
> Branch-1 has HBASE-17437. When hbase.wal.dir is specified, the hbase.version 
> file is created under both hbase.rootdir and hbase.wal.dir. The master branch 
> does not have the same issue.





[jira] [Commented] (HBASE-18150) hbase.version file is created under both hbase.rootdir and hbase.wal.dir

2017-06-13 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048719#comment-16048719
 ] 

Xiang Li commented on HBASE-18150:
--

[~zyork]

> hbase.version file is created under both hbase.rootdir and hbase.wal.dir
> 
>
> Key: HBASE-18150
> URL: https://issues.apache.org/jira/browse/HBASE-18150
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.1
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Critical
> Fix For: 1.4.0
>
> Attachments: HBASE-18150.branch-1.000.patch
>
>
> Branch-1 has HBASE-17437. When hbase.wal.dir is specified, the hbase.version 
> file is created under both hbase.rootdir and hbase.wal.dir. The master branch 
> does not have the same issue.





[jira] [Commented] (HBASE-18216) [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

2017-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048712#comment-16048712
 ] 

stack commented on HBASE-18216:
---

Subsequently pushed an addendum to the patch. This change:

diff --git a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
index c0540d9..fdbe03a 100644
--- a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
+++ b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
@@ -483,7 +483,7 @@ public class ProcedureWALFormatReader {
  */
 private static boolean isIncreasing(ProcedureProtos.Procedure current,
     ProcedureProtos.Procedure candidate) {
-  boolean increasing = current.getStackIdCount() < candidate.getStackIdCount() &&
+  boolean increasing = current.getStackIdCount() <= candidate.getStackIdCount() &&
     current.getLastUpdate() <= candidate.getLastUpdate();
   if (!increasing) {
     LOG.warn("NOT INCREASING! current=" + current + ", candidate=" + candidate);
@@ -868,4 +868,4 @@ public class ProcedureWALFormatReader {
   return (int)(Procedure.getProcIdHashCode(procId) % procedureMap.length);
 }
   }
-}
\ No newline at end of file
+}

Came out of testing on the cluster.
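The effect of the one-character change above can be seen in a standalone sketch. The class below is a plain stand-in for ProcedureProtos.Procedure (only the two fields the check reads), not the actual HBase type:

```java
// Stand-in for the protobuf Procedure message; illustrative only.
class ProcEntry {
    final int stackIdCount;
    final long lastUpdate;
    ProcEntry(int stackIdCount, long lastUpdate) {
        this.stackIdCount = stackIdCount;
        this.lastUpdate = lastUpdate;
    }
}

class IncreasingCheck {
    // Before the addendum the first comparison used '<', so a candidate with
    // the same stack-id count as the current entry was rejected even when its
    // lastUpdate was newer; '<=' admits that case.
    static boolean isIncreasing(ProcEntry current, ProcEntry candidate) {
        return current.stackIdCount <= candidate.stackIdCount
            && current.lastUpdate <= candidate.lastUpdate;
    }
}
```

With the old `<`, the first assertion-style case below (equal stack-id counts, newer update) would have been reported as "NOT INCREASING".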

> [AMv2] Workaround for HBASE-18152, corrupt procedure WAL
> 
>
> Key: HBASE-18216
> URL: https://issues.apache.org/jira/browse/HBASE-18216
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
>
> Let me commit workaround for the issue up in HBASE-18152, corruption in the 
> master wal procedure files. Testing on cluster shows it helps.





[jira] [Resolved] (HBASE-18216) [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

2017-06-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-18216.
---
Resolution: Fixed

Pushed to master and branch-2.

> [AMv2] Workaround for HBASE-18152, corrupt procedure WAL
> 
>
> Key: HBASE-18216
> URL: https://issues.apache.org/jira/browse/HBASE-18216
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
>
> Let me commit workaround for the issue up in HBASE-18152, corruption in the 
> master wal procedure files. Testing on cluster shows it helps.





[jira] [Commented] (HBASE-18216) [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

2017-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048703#comment-16048703
 ] 

stack commented on HBASE-18216:
---

Pushed WORKAROUND from HBASE-18152 here, the patch named 
HBASE-18152.master.001.patch. It helps till we figure root cause.

> [AMv2] Workaround for HBASE-18152, corrupt procedure WAL
> 
>
> Key: HBASE-18216
> URL: https://issues.apache.org/jira/browse/HBASE-18216
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
>
> Let me commit workaround for the issue up in HBASE-18152, corruption in the 
> master wal procedure files. Testing on cluster shows it helps.





[jira] [Commented] (HBASE-18152) [AMv2] Corrupt Procedure WAL file; procedure data stored out of order

2017-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048704#comment-16048704
 ] 

stack commented on HBASE-18152:
---

Pushed workaround over in HBASE-18216. Need to figure root cause still.

> [AMv2] Corrupt Procedure WAL file; procedure data stored out of order
> -
>
> Key: HBASE-18152
> URL: https://issues.apache.org/jira/browse/HBASE-18152
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-18152.master.001.patch, 
> pv2-0036.log, pv2-0047.log, 
> reading_bad_wal.patch
>
>
> I've seen corruption from time to time while testing. It's rare enough. Often 
> we can get over it but sometimes we can't. It took me a while to capture an 
> instance of corruption. Turns out we write to the WAL out of order, which 
> undoes a basic tenet: that WAL content is ordered in line with execution.
> Below I'll post a corrupt WAL.
> Looking at the write side, there is a lot going on. I'm not clear on how we 
> could write out of order. Will try and get more insight. Meantime, parking 
> this issue here to fill data into.





[jira] [Updated] (HBASE-18215) some advises about refactoring of rsgroup

2017-06-13 Thread chenxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-18215:
---
Attachment: HBASE-18215-1.2.4-v1.patch

Here is the patch with our implementation.

> some advises about refactoring of rsgroup
> -
>
> Key: HBASE-18215
> URL: https://issues.apache.org/jira/browse/HBASE-18215
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
> Attachments: HBASE-18215-1.2.4-v1.patch
>
>
> Recently we integrated rsgroup into our cluster; afterwards, we found some 
> refactoring points. Maybe these points are not right, but I think they are 
> worth sharing with you guys.
> # When hbase.balancer.tablesOnMaster is configured, RSGroupBasedLoadBalancer 
> should consider masterServer assignment first in balanceCluster, 
> roundRobinAssignment, retainAssignment and randomAssignment, doing the same 
> thing as BaseLoadBalancer.
> # Why not use a local file as the persistence layer instead of the rsgroup 
> table? In our implementation, we first modify the local rsgroup file, then 
> load the group info into memory, and after that execute the balancer command; 
> everything is OK.
> When loading, do some sanity checks:
> (1) one server cannot be owned by multiple groups
> (2) one table cannot be owned by multiple groups
> (3) if a group has tables, it must also have servers
> (4) the default group must have servers in it
> If the sanity checks do not pass, give up the following process. Working this 
> way can greatly reduce the complexity of the rsgroup implementation: there is 
> no need to wait for the rsgroup table to be online, and methods like 
> moveServers, moveTables, addRSGroup, removeRSGroup and moveServersAndTables 
> can be removed from RSGroupAdminService; only a refresh method is needed 
> (modify the persistence layer first, then refresh the memory).
> # We should add some group information to the master web UI. To do this, 
> RSGroupBasedLoadBalancer should move to the hbase-server module, because 
> MasterStatusTmpl.jamon depends on it.
> # There may be an issue with RSGroupBasedLoadBalancer.roundRobinAssignment: 
> if two groups both include BOGUS_SERVER_NAME, assignments.putAll will 
> overwrite the previous data.
> # There may be an issue with RSGroupBasedLoadBalancer.randomAssignment: when 
> the return value is BOGUS_SERVER_NAME, the AM cannot handle this case. We 
> should return null instead of BOGUS_SERVER_NAME.
> # When RSGroupBasedLoadBalancer.balanceCluster executes, groups are balanced 
> one by one; if there are too many groups, we could do this in parallel.





[jira] [Created] (HBASE-18216) [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

2017-06-13 Thread stack (JIRA)
stack created HBASE-18216:
-

 Summary: [AMv2] Workaround for HBASE-18152, corrupt procedure WAL
 Key: HBASE-18216
 URL: https://issues.apache.org/jira/browse/HBASE-18216
 Project: HBase
  Issue Type: Bug
  Components: proc-v2
Affects Versions: 2.0.0
Reporter: stack
Assignee: stack
 Fix For: 2.0.0


Let me commit workaround for the issue up in HBASE-18152, corruption in the 
master wal procedure files. Testing on cluster shows it helps.





[jira] [Updated] (HBASE-18152) [AMv2] Corrupt Procedure WAL file; procedure data stored out of order

2017-06-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18152:
--
Attachment: (was: HBASE-17537.master.002.patch)

> [AMv2] Corrupt Procedure WAL file; procedure data stored out of order
> -
>
> Key: HBASE-18152
> URL: https://issues.apache.org/jira/browse/HBASE-18152
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-18152.master.001.patch, 
> pv2-0036.log, pv2-0047.log, 
> reading_bad_wal.patch
>
>
> I've seen corruption from time to time while testing. It's rare enough. Often 
> we can get over it but sometimes we can't. It took me a while to capture an 
> instance of corruption. Turns out we write to the WAL out of order, which 
> undoes a basic tenet: that WAL content is ordered in line with execution.
> Below I'll post a corrupt WAL.
> Looking at the write side, there is a lot going on. I'm not clear on how we 
> could write out of order. Will try and get more insight. Meantime, parking 
> this issue here to fill data into.





[jira] [Created] (HBASE-18215) some advises about refactoring of rsgroup

2017-06-13 Thread chenxu (JIRA)
chenxu created HBASE-18215:
--

 Summary: some advises about refactoring of rsgroup
 Key: HBASE-18215
 URL: https://issues.apache.org/jira/browse/HBASE-18215
 Project: HBase
  Issue Type: Improvement
  Components: Balancer
Reporter: chenxu


Recently we integrated rsgroup into our cluster; afterwards, we found some 
refactoring points. Maybe these points are not right, but I think they are 
worth sharing with you guys.
# When hbase.balancer.tablesOnMaster is configured, RSGroupBasedLoadBalancer 
should consider masterServer assignment first in balanceCluster, 
roundRobinAssignment, retainAssignment and randomAssignment, doing the same 
thing as BaseLoadBalancer.
# Why not use a local file as the persistence layer instead of the rsgroup 
table? In our implementation, we first modify the local rsgroup file, then 
load the group info into memory, and after that execute the balancer command; 
everything is OK.
When loading, do some sanity checks:
(1) one server cannot be owned by multiple groups
(2) one table cannot be owned by multiple groups
(3) if a group has tables, it must also have servers
(4) the default group must have servers in it
If the sanity checks do not pass, give up the following process. Working this 
way can greatly reduce the complexity of the rsgroup implementation: there is 
no need to wait for the rsgroup table to be online, and methods like 
moveServers, moveTables, addRSGroup, removeRSGroup and moveServersAndTables 
can be removed from RSGroupAdminService; only a refresh method is needed 
(modify the persistence layer first, then refresh the memory).
# We should add some group information to the master web UI. To do this, 
RSGroupBasedLoadBalancer should move to the hbase-server module, because 
MasterStatusTmpl.jamon depends on it.
# There may be an issue with RSGroupBasedLoadBalancer.roundRobinAssignment: 
if two groups both include BOGUS_SERVER_NAME, assignments.putAll will 
overwrite the previous data.
# There may be an issue with RSGroupBasedLoadBalancer.randomAssignment: when 
the return value is BOGUS_SERVER_NAME, the AM cannot handle this case. We 
should return null instead of BOGUS_SERVER_NAME.
# When RSGroupBasedLoadBalancer.balanceCluster executes, groups are balanced 
one by one; if there are too many groups, we could do this in parallel.
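A minimal sketch of the four sanity checks described in point 2, using plain Java maps in place of the real rsgroup model. All class and method names here are illustrative, not actual HBase APIs:

```java
import java.util.*;

// Illustrative sanity checker for an rsgroup file load; names are made up.
class RSGroupSanityCheck {
    // groupServers: group name -> servers; groupTables: group name -> tables.
    // Returns the list of violations; a non-empty result means the load
    // should be abandoned, per the proposal above.
    static List<String> validate(Map<String, Set<String>> groupServers,
                                 Map<String, Set<String>> groupTables) {
        List<String> errors = new ArrayList<>();
        // (1) one server cannot be owned by multiple groups
        Set<String> seenServers = new HashSet<>();
        for (Set<String> servers : groupServers.values()) {
            for (String server : servers) {
                if (!seenServers.add(server)) {
                    errors.add("server in multiple groups: " + server);
                }
            }
        }
        // (2) one table cannot be owned by multiple groups
        Set<String> seenTables = new HashSet<>();
        for (Set<String> tables : groupTables.values()) {
            for (String table : tables) {
                if (!seenTables.add(table)) {
                    errors.add("table in multiple groups: " + table);
                }
            }
        }
        // (3) a group that has tables must also have servers
        for (Map.Entry<String, Set<String>> e : groupTables.entrySet()) {
            if (!e.getValue().isEmpty()
                && groupServers.getOrDefault(e.getKey(), Collections.emptySet()).isEmpty()) {
                errors.add("group has tables but no servers: " + e.getKey());
            }
        }
        // (4) the default group must have servers
        if (groupServers.getOrDefault("default", Collections.emptySet()).isEmpty()) {
            errors.add("default group has no servers");
        }
        return errors;
    }
}
```

The point is that all four checks can run against the freshly parsed file before anything touches memory, so a bad edit never reaches the balancer.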





[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048692#comment-16048692
 ] 

stack commented on HBASE-18023:
---

I added you as a contributor [~dharju] 

I like [~clayb]'s suggestion. I think adding some threshold -- 1k items in a 
mutation probably merits a mention in the log (with perhaps a pointer to a doc 
or issue on why many small batches will go down better than a few massive 
ones; if the log becomes annoying, the operator can up the threshold).
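The suggested check could look roughly like the sketch below. The class name, the default value, and the message wording are all made up for illustration; the real patch would read the threshold from configuration:

```java
// Illustrative threshold check for large multi-* requests; not actual HBase code.
class MultiRequestSizeCheck {
    // Hypothetical default; an operator could raise it via configuration
    // if the warnings become noisy.
    static final int DEFAULT_WARN_THRESHOLD = 1000;

    // Returns the warning message to log, or null if the batch is small enough.
    // Logging client and size before executing lets operators track down
    // painful users even if the request later triggers a garbage storm.
    static String warnIfLarge(String client, int batchSize, int threshold) {
        if (batchSize <= threshold) {
            return null;
        }
        return "Large batch from " + client + ": " + batchSize
            + " items; many small batches usually behave better than a few massive ones";
    }
}
```

The design choice worth noting is that the warning fires before the request is processed, since the motivating failures kill the regionserver mid-request.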

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Minor
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.





[jira] [Created] (HBASE-18214) Replace the folly::AtomicHashMap usage in the RPC layer

2017-06-13 Thread Devaraj Das (JIRA)
Devaraj Das created HBASE-18214:
---

 Summary: Replace the folly::AtomicHashMap usage in the RPC layer
 Key: HBASE-18214
 URL: https://issues.apache.org/jira/browse/HBASE-18214
 Project: HBase
  Issue Type: Sub-task
Reporter: Devaraj Das
Assignee: Devaraj Das


In my tests, I saw that the folly::AtomicHashMap usage is not appropriate for 
one rather common use case: it becomes sort of unusable (inserts hang) after a 
bunch of inserts and erases. This hashmap is used to keep track of call IDs 
after a connection is set up in the RPC layer (insert a call-id/msg pair when 
an RPC is sent, and erase the pair when the corresponding response is 
received). Here is a simple program that demonstrates the issue:
{code}
folly::AtomicHashMap<int, int> f(100);
int i = 0;
while (i < 1) {
  try {
    f.insert(i, 100);
    LOG(INFO) << "Inserted " << i << "  " << f.size();
    f.erase(i);
    LOG(INFO) << "Deleted " << i << "  " << f.size();
    i++;
  } catch (const std::exception &e) {
    LOG(INFO) << "Exception " << e.what();
    break;
  }
}
{code}
After poking around a little bit, it is indeed called out as a limitation here: 
https://github.com/facebook/folly/blob/master/folly/docs/AtomicHashMap.md (grep 
for 'erase'). The proposal is to replace it with something that fits the above 
use case (thinking of using std::unordered_map).





[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048686#comment-16048686
 ] 

stack commented on HBASE-18213:
---

Async client not going to make 2.0.0? (You have fix version of 3.0.0).

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
> Fix For: 3.0.0
>
>






[jira] [Commented] (HBASE-18160) Fix incorrect logic in FilterList.filterKeyValue

2017-06-13 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048670#comment-16048670
 ] 

Zheng Hu commented on HBASE-18160:
--

[~anoop.hbase], [~tedyu],  Any concerns for the patch ? 

> Fix incorrect logic in FilterList.filterKeyValue
> -
>
> Key: HBASE-18160
> URL: https://issues.apache.org/jira/browse/HBASE-18160
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18160.branch-1.1.v1.patch, 
> HBASE-18160.branch-1.v1.patch, HBASE-18160.v1.patch, HBASE-18160.v2.patch, 
> HBASE-18160.v2.patch
>
>
> As HBASE-17678 said, there are two problems in the FilterList.filterKeyValue 
> implementation:
> 1. FilterList did not consider the INCLUDE_AND_SEEK_NEXT_ROW case (it seems 
> INCLUDE_AND_SEEK_NEXT_ROW is a newly added case, and the dev forgot to 
> consider FilterList), so if a user uses INCLUDE_AND_SEEK_NEXT_ROW in his own 
> Filter and wraps it in a FilterList, it'll throw an 
> IllegalStateException("Received code is not valid.").
> 2. For a FilterList with MUST_PASS_ONE, if filter-A in the filter list 
> returns INCLUDE and filter-B in the filter list returns INCLUDE_AND_NEXT_COL, 
> the FilterList will finally return INCLUDE_AND_NEXT_COL. According to the 
> minimal step rule, that's incorrect. (A filter list with MUST_PASS_ONE 
> chooses the minimal step among the filters in the list; let's call it the 
> Minimal Step Rule.)
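The minimal step rule described above can be illustrated with a standalone sketch. The enum below is a local stand-in ordered from smallest step to largest, not the actual Filter.ReturnCode, and the combine method is not the real FilterList implementation:

```java
// Stand-in for the subset of return codes discussed above, ordered by how far
// each one advances the scan (smallest step first); illustrative only.
enum Step { INCLUDE, INCLUDE_AND_NEXT_COL, INCLUDE_AND_SEEK_NEXT_ROW }

class MinimalStep {
    // For MUST_PASS_ONE, the combined code should be the smallest step among
    // the children: INCLUDE from one filter and INCLUDE_AND_NEXT_COL from
    // another must combine to INCLUDE, otherwise the filter that still wants
    // the current column is skipped past.
    static Step combine(Step a, Step b) {
        return a.ordinal() <= b.ordinal() ? a : b;
    }
}
```

Returning the larger step, as the buggy code did, is exactly the point-2 failure: the OR semantics of MUST_PASS_ONE are violated because the scan jumps further than the most permissive filter asked for.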





[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048669#comment-16048669
 ] 

Zheng Hu commented on HBASE-17678:
--

I think we should commit the patch into branch-1.1, branch-1.2 and branch-1.3.
[~busbey], [~mantonov], [~ndimiduk], what do you think?

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. 

[jira] [Created] (HBASE-18213) Add documentation about the new async client

2017-06-13 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-18213:
-

 Summary: Add documentation about the new async client
 Key: HBASE-18213
 URL: https://issues.apache.org/jira/browse/HBASE-18213
 Project: HBase
  Issue Type: Sub-task
  Components: asyncclient, Client, documentation
Affects Versions: 3.0.0
Reporter: Duo Zhang
 Fix For: 3.0.0








[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048665#comment-16048665
 ] 

Duo Zhang commented on HBASE-17678:
---

Is it OK to resolve this issue?

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (not-duplicated)  within a page, but not across 

[jira] [Updated] (HBASE-17008) Examples to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17008:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-2+. Thanks [~stack] for reviewing.

> Examples to make AsyncClient go down easy
> -
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048631#comment-16048631
 ] 

Zheng Hu commented on HBASE-17678:
--

It's my fault. Thanks [~Apache9] & [~tedyu] for the commit. 

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (not-duplicated)  

[jira] [Updated] (HBASE-17008) Examples to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17008:
--
Hadoop Flags: Reviewed
Release Note: Add two examples for the async client. AsyncClientExample is a 
simple example showing how to use AsyncTable. HttpProxyExample is an example 
for advanced users showing how to use RawAsyncTable to write a fully 
asynchronous HTTP proxy server. There is no extra thread pool; all operations 
are executed inside netty's event loop.

> Examples to make AsyncClient go down easy
> -
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Fix Version/s: (was: 3.0.0)

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 1.4.0
>
> Attachments: HBASE-18180-branch-1.patch
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}
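The leak described above can be avoided by closing the mutator in a try/finally block, so the connection is released even when the mutator's close throws. Below is a minimal, self-contained sketch of that pattern (it is not the attached HBase patch; the names mirror the snippets above, but plain `java.io.Closeable` stand-ins are used instead of HBase types):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseOrder {
    // Close the mutator first; the finally block guarantees the connection
    // is closed afterwards even if mutator.close() throws.
    static void close(Closeable mutator, Closeable connection) throws IOException {
        try {
            mutator.close();
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }

    public static void main(String[] args) {
        final boolean[] connectionClosed = {false};
        // Simulate a mutator whose close() fails mid-flush.
        Closeable failingMutator = () -> { throw new IOException("flush failed"); };
        Closeable connection = () -> connectionClosed[0] = true;
        try {
            close(failingMutator, connection);
        } catch (IOException expected) {
            // The mutator's exception still propagates to the caller.
        }
        System.out.println("connection closed: " + connectionClosed[0]); // prints "connection closed: true"
    }
}
```

A try-with-resources statement over both resources would achieve the same guarantee and additionally record the connection-close failure as a suppressed exception.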





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Attachment: HBASE-18180-branch-1.patch

OK, attached the branch-1 patch. 

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
> Attachments: HBASE-18180-branch-1.patch
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Updated] (HBASE-17008) Examples to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17008:
--
Summary: Examples to make AsyncClient go down easy  (was: Examples, Doc, 
and Helper Classes to make AsyncClient go down easy)

> Examples to make AsyncClient go down easy
> -
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Commented] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048580#comment-16048580
 ] 

Duo Zhang commented on HBASE-17008:
---

OK. Let me change the title here and commit the patch. Will open a new issue to 
track the documentation.

Thanks.

> Examples, Doc, and Helper Classes to make AsyncClient go down easy
> --
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Attachment: (was: HBASE-18180.patch)

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Attachment: (was: HBASE-18180-branch-1.patch)

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-13 Thread David Harju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048426#comment-16048426
 ] 

David Harju commented on HBASE-18023:
-

[~apurtell] [~lhofhansl] looks like I need to become a contributor in order to 
post my patch?  Do I get that status upgrade through you guys?

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Minor
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.
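The proposed check could be as simple as comparing the batch size against a configured threshold before execution and logging the client details when it is exceeded. A hypothetical sketch (names such as `threshold` and `logLargeBatch` are illustrative only, not HBase's actual API):

```java
import java.util.List;
import java.util.logging.Logger;

public class MultiRequestAudit {
    private static final Logger LOG = Logger.getLogger("MultiRequestAudit");
    private final int threshold;

    public MultiRequestAudit(int threshold) {
        this.threshold = threshold;
    }

    /**
     * Logs client, user, and row count for a multi-row batch that exceeds
     * the configured threshold. Returns true when the batch was logged.
     */
    public boolean logLargeBatch(String client, String user, List<?> rows) {
        if (rows.size() > threshold) {
            LOG.warning(String.format(
                "Large batch: client=%s user=%s rows=%d (threshold=%d)",
                client, user, rows.size(), threshold));
            return true;
        }
        return false;
    }
}
```

Logging before the batch runs is the key design point: if the request then triggers a fatal garbage-collection storm, the offending client is already recorded.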





[jira] [Commented] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-06-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048410#comment-16048410
 ] 

stack commented on HBASE-17008:
---

What a sweet example (rawasynclient cast as an http server of a table... nice!)

+1 Nice. Put in release note what you added because I'm afraid folks will miss 
these goodies.

Agree on new issue for doc.

> Examples, Doc, and Helper Classes to make AsyncClient go down easy
> --
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Updated] (HBASE-16660) ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction

2017-06-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16660:
--
Priority: Critical  (was: Major)

> ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction
> -
>
> Key: HBASE-16660
> URL: https://issues.apache.org/jira/browse/HBASE-16660
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.98.20
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.23
>
> Attachments: HBASE-16660-0.98.patch, HBASE-16660.master.001.patch, 
> HBASE-16660.master.001.patch
>
>
> We get an ArrayIndexOutOfBoundsException during the major compaction check as 
> follows
> {noformat}
> 2016-09-19 05:04:18,287 ERROR [20.compactionChecker] 
> regionserver.HRegionServer$CompactionChecker - Caught exception
> java.lang.ArrayIndexOutOfBoundsException: -2
> at 
> org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.shouldPerformMajorCompaction(DateTieredCompactionPolicy.java:159)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.isMajorCompaction(HStore.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker.chore(HRegionServer.java:1532)
> at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This happens due to the following lines in 
> org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.selectMajorCompaction
> {noformat}
> int lowerWindowIndex = Collections.binarySearch(boundaries,
> minTimestamp == null ? Long.MAX_VALUE : file.getMinimumTimestamp());
>   int upperWindowIndex = Collections.binarySearch(boundaries,
> file.getMaximumTimestamp() == null ? Long.MAX_VALUE : 
> file.getMaximumTimestamp());
> {noformat}
> These return negative values if the element is not found, and in the case 
> where the values are equal we get the exception.
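The negative index comes from the `Collections.binarySearch` contract: when the key is absent, it returns `-(insertionPoint) - 1`, which must be normalized before being used as an index. The sketch below illustrates the failure mode and one defensive normalization; it is not the committed HBase patch:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class WindowIndex {
    // Map a timestamp to the index of the window whose lower boundary
    // precedes it. binarySearch returns -(insertionPoint) - 1 for absent
    // keys, so a raw negative result would cause ArrayIndexOutOfBounds
    // if used directly as an index.
    static int windowIndex(List<Long> boundaries, long timestamp) {
        int idx = Collections.binarySearch(boundaries, timestamp);
        // Absent key: recover the insertion point, then step back to the
        // window that contains the timestamp.
        return idx >= 0 ? idx : -idx - 2;
    }

    public static void main(String[] args) {
        List<Long> boundaries = Arrays.asList(0L, 100L, 200L);
        System.out.println(windowIndex(boundaries, 100L)); // exact boundary -> 1
        System.out.println(windowIndex(boundaries, 150L)); // inside window starting at 100 -> 1
        System.out.println(Collections.binarySearch(boundaries, 150L)); // raw result: -3
    }
}
```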





[jira] [Updated] (HBASE-16660) ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction

2017-06-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16660:
--
Release Note: "Please do not use DateTieredCompaction with Major Compaction 
unless you have a version with this. Otherwise your cluster will not compact 
any store files and you can end up running out of file descriptors." @churro 
morales

> ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction
> -
>
> Key: HBASE-16660
> URL: https://issues.apache.org/jira/browse/HBASE-16660
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.98.20
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.23
>
> Attachments: HBASE-16660-0.98.patch, HBASE-16660.master.001.patch, 
> HBASE-16660.master.001.patch
>
>
> We get an ArrayIndexOutOfBoundsException during the major compaction check as 
> follows
> {noformat}
> 2016-09-19 05:04:18,287 ERROR [20.compactionChecker] 
> regionserver.HRegionServer$CompactionChecker - Caught exception
> java.lang.ArrayIndexOutOfBoundsException: -2
> at 
> org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.shouldPerformMajorCompaction(DateTieredCompactionPolicy.java:159)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.isMajorCompaction(HStore.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker.chore(HRegionServer.java:1532)
> at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This happens due to the following lines in 
> org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.selectMajorCompaction
> {noformat}
> int lowerWindowIndex = Collections.binarySearch(boundaries,
> minTimestamp == null ? Long.MAX_VALUE : file.getMinimumTimestamp());
>   int upperWindowIndex = Collections.binarySearch(boundaries,
> file.getMaximumTimestamp() == null ? Long.MAX_VALUE : 
> file.getMaximumTimestamp());
> {noformat}
> These return negative values if the element is not found, and in the case 
> where the values are equal we get the exception.





[jira] [Commented] (HBASE-16660) ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction

2017-06-13 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048243#comment-16048243
 ] 

churro morales commented on HBASE-16660:


Please do not use DateTieredCompaction with Major Compaction unless you have a 
version with this fix. Otherwise your cluster will not compact any store files 
and you can end up running out of file descriptors. 

> ArrayIndexOutOfBounds during the majorCompactionCheck in DateTieredCompaction
> -
>
> Key: HBASE-16660
> URL: https://issues.apache.org/jira/browse/HBASE-16660
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.98.20
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.23
>
> Attachments: HBASE-16660-0.98.patch, HBASE-16660.master.001.patch, 
> HBASE-16660.master.001.patch
>
>
> We get an ArrayIndexOutOfBoundsException during the major compaction check as 
> follows
> {noformat}
> 2016-09-19 05:04:18,287 ERROR [20.compactionChecker] 
> regionserver.HRegionServer$CompactionChecker - Caught exception
> java.lang.ArrayIndexOutOfBoundsException: -2
> at 
> org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.shouldPerformMajorCompaction(DateTieredCompactionPolicy.java:159)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.isMajorCompaction(HStore.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker.chore(HRegionServer.java:1532)
> at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This happens due to the following lines in 
> org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy.selectMajorCompaction
> {noformat}
> int lowerWindowIndex = Collections.binarySearch(boundaries,
> minTimestamp == null ? Long.MAX_VALUE : file.getMinimumTimestamp());
>   int upperWindowIndex = Collections.binarySearch(boundaries,
> file.getMaximumTimestamp() == null ? Long.MAX_VALUE : 
> file.getMaximumTimestamp());
> {noformat}
> These return negative values if the element is not found, and in the case 
> where the values are equal we get the exception.





[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048240#comment-16048240
 ] 

Sean Busbey commented on HBASE-18200:
-

(belated +1)

> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-18200.patch
>
>
> Now it will use the hadoop versions for branch-1.
> I do not know how to set the fix versions as the code will be committed to 
> master but the branch in trouble is branch-2...





[jira] [Commented] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048234#comment-16048234
 ] 

Sean Busbey commented on HBASE-18179:
-

+1

> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-18179.patch, test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Commented] (HBASE-18137) Replication gets stuck for empty WALs

2017-06-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048222#comment-16048222
 ] 

Sean Busbey commented on HBASE-18137:
-

could you update the release note to include what the risk is for turning on 
the autorecovery? Otherwise it's not obvious why we have the default.

> Replication gets stuck for empty WALs
> -
>
> Key: HBASE-18137
> URL: https://issues.apache.org/jira/browse/HBASE-18137
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.1
>Reporter: Ashu Pachauri
>Assignee: Vincent Poon
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18137.branch-1.3.v1.patch, 
> HBASE-18137.branch-1.3.v2.patch, HBASE-18137.branch-1.3.v3.patch, 
> HBASE-18137.branch-1.v1.patch, HBASE-18137.branch-1.v2.patch, 
> HBASE-18137.master.v1.patch
>
>
> Replication assumes that only the last WAL of a recovered queue can be empty. 
> But intermittent DFS issues may cause empty WALs to be created (without the 
> PWAL magic), and a WAL roll to happen without a regionserver crash. This 
> will cause recovered queues to have empty WALs in the middle, which causes 
> replication to get stuck:
> {code}
> TRACE regionserver.ReplicationSource: Opening log 
> WARN regionserver.ReplicationSource: - Got: 
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:197)
>   at java.io.DataInputStream.readFully(DataInputStream.java:169)
>   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1915)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1880)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1829)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:70)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.reset(SequenceFileLogReader.java:168)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.initReader(SequenceFileLogReader.java:177)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:66)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:312)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {code}
> The WAL in question was completely empty but there were other WALs in the 
> recovered queue which were newer and non-empty.





[jira] [Created] (HBASE-18212) In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream

2017-06-13 Thread Umesh Agashe (JIRA)
Umesh Agashe created HBASE-18212:


 Summary: In Standalone mode with local filesystem HBase logs 
Warning message:Failed to invoke 'unbuffer' method in class class 
org.apache.hadoop.fs.FSDataInputStream
 Key: HBASE-18212
 URL: https://issues.apache.org/jira/browse/HBASE-18212
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Umesh Agashe


New users may get nervous after seeing the following warning-level log messages 
(considering new users will most likely run HBase in Standalone mode first):
{code}
WARN  [MemStoreFlusher.1] io.FSDataInputStreamWrapper: Failed to invoke 
'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So 
there may be a TCP socket connection left open in CLOSE_WAIT state.
{code}






[jira] [Commented] (HBASE-18150) hbase.version file is created under both hbase.rootdir and hbase.wal.dir

2017-06-13 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048177#comment-16048177
 ] 

Zach York commented on HBASE-18150:
---

[~water] thanks for this. Sorry I missed the review of the patch. What is the 
problem with having these files under both directories? In fact, Jerry He 
wanted me to put these files under both, see: 
https://issues.apache.org/jira/browse/HBASE-17437?focusedCommentId=15814027=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15814027.
 However, I do think we should keep branch-1 and master in sync on this patch 
so this works.

> hbase.version file is created under both hbase.rootdir and hbase.wal.dir
> 
>
> Key: HBASE-18150
> URL: https://issues.apache.org/jira/browse/HBASE-18150
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.1
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Critical
> Fix For: 1.4.0
>
> Attachments: HBASE-18150.branch-1.000.patch
>
>
> Branch-1 has HBASE-17437. When hbase.wal.dir is specified, the hbase.version 
> file is created under both hbase.rootdir and hbase.wal.dir. The master branch 
> does not have the same issue.
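For context, the scenario above only arises when a separate WAL directory is configured. A minimal hbase-site.xml sketch of that setup (the hdfs://namenode:8020 paths are illustrative, not from this issue):

```xml
<configuration>
  <!-- Root directory for HBase data files -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode:8020/hbase</value>
  </property>
  <!-- Separate write-ahead-log directory (introduced by HBASE-17437) -->
  <property>
    <name>hbase.wal.dir</name>
    <value>hdfs://namenode:8020/hbase-wal</value>
  </property>
</configuration>
```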





[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047976#comment-16047976
 ] 

Hudson commented on HBASE-17678:


FAILURE: Integrated in Jenkins build HBase-1.4 #775 (See 
[https://builds.apache.org/job/HBase-1.4/775/])
HBASE-17678 FilterList with MUST_PASS_ONE may lead to redundant cells 
(zhangduo: rev 256fc63007aecb63028b71ad1383d896f11db701)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java


> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation were updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier HBase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> 

[jira] [Commented] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047926#comment-16047926
 ] 

Hadoop QA commented on HBASE-18179:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
5s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872859/HBASE-18179.patch |
| JIRA Issue | HBASE-18179 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 8f57b5efb61a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f5768b4 |
| shellcheck | v0.4.6 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7181/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-18179.patch, test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Commented] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047925#comment-16047925
 ] 

Ted Yu commented on HBASE-18180:


You can attach the branch-1 patch alone, where TestLockProcedure is not flaky.

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
> Attachments: HBASE-18180-branch-1.patch, HBASE-18180.patch
>
>
> In TableOutputFormat, the connection will not be released if 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}
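A common way to avoid the leak described above is to close the connection in a finally block, so it runs even when mutator.close() throws. A self-contained sketch with hypothetical stand-in classes (Resource and LeakFixDemo are illustrative names, not HBase code):

```java
import java.io.Closeable;
import java.io.IOException;

public class LeakFixDemo {
    // Stand-in for BufferedMutator/Connection: can be told to fail on close().
    static class Resource implements Closeable {
        final String name;
        final boolean failOnClose;
        boolean closed;
        Resource(String name, boolean failOnClose) {
            this.name = name;
            this.failOnClose = failOnClose;
        }
        @Override
        public void close() throws IOException {
            if (failOnClose) {
                throw new IOException(name + " failed to close");
            }
            closed = true;
        }
    }

    public static void main(String[] args) {
        Resource mutator = new Resource("mutator", true);      // simulates mutator.close() throwing
        Resource connection = new Resource("connection", false);
        try {
            try {
                mutator.close();
            } finally {
                connection.close();                            // still runs, so no leak
            }
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
        System.out.println("connection closed: " + connection.closed);
    }
}
```

On Java 7+, try-with-resources gives the same ordering (resources are closed in reverse declaration order) and additionally records a failing close() as a suppressed exception rather than losing it.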





[jira] [Commented] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047919#comment-16047919
 ] 

Ted Yu commented on HBASE-18180:


TestLockProcedure is a small test.
Many tests were skipped.

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
> Attachments: HBASE-18180-branch-1.patch, HBASE-18180.patch
>
>
> In TableOutputFormat, the connection will not be released if 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Commented] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047916#comment-16047916
 ] 

Hadoop QA commented on HBASE-18180:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
59m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 4s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.locking.TestLockProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872839/HBASE-18180.patch |
| JIRA Issue | HBASE-18180 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 898f1df93c24 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f5768b4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7180/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7180/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7180/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7180/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Possible connection leak while 

[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047914#comment-16047914
 ] 

Ted Yu commented on HBASE-17678:


The patch was missing the issue number:
{code}
Subject: [PATCH] FilterList with MUST_PASS_ONE may lead to redundant cells
 returned
{code}
I forgot to add it when committing.

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation were updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier HBase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 

[jira] [Commented] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047882#comment-16047882
 ] 

Hadoop QA commented on HBASE-18179:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/7181/console in case of 
problems.


> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-18179.patch, test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Updated] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18179:
--
Attachment: HBASE-18179.patch

There is no new 2.x release yet. Change 3.0.0-alpha2 to 3.0.0-alpha3.

> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-18179.patch, test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047802#comment-16047802
 ] 

Duo Zhang commented on HBASE-17678:
---

Seems [~tedyu] has already pushed the patch to branch-1 but missed the 
issue number, [~openinx].

I've reverted the incorrect commit and committed again with the correct commit 
message.

Thanks.

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation were updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier HBase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that 

[jira] [Commented] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047790#comment-16047790
 ] 

Duo Zhang commented on HBASE-18179:
---

Seems it works. Let me prepare a patch for this issue.

> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Commented] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047764#comment-16047764
 ] 

Hadoop QA commented on HBASE-18179:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 52s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 125m 25s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 173m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.coprocessor.TestCoprocessorMetrics |
|   | hadoop.hbase.master.procedure.TestMasterProcedureWalLease |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872821/test-branch-2.patch |
| JIRA Issue | HBASE-18179 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 7c3785f8fb6e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f5768b4 |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7179/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7179/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7179/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7179/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047761#comment-16047761
 ] 

Pankaj Kumar edited comment on HBASE-18180 at 6/13/17 11:37 AM:


Reattaching same patch files for QA run. 


was (Author: pankaj2461):
Reattaching same patche files for QA run. 

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
> Attachments: HBASE-18180-branch-1.patch, HBASE-18180.patch
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}
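A common remedy for the leak described above is to move the second close into a finally block, so that an exception from the first close cannot skip it. The sketch below demonstrates the pattern with stand-in types (FailingMutator and TrackedConnection are hypothetical; the real classes are HBase's BufferedMutator and Connection), not the committed patch:

```java
import java.io.Closeable;
import java.io.IOException;

// Stand-in resources: the real classes in the report above are BufferedMutator
// and Connection from hbase-client; these dummies are hypothetical.
class FailingMutator implements Closeable {
    @Override
    public void close() throws IOException {
        throw new IOException("flush failed");
    }
}

class TrackedConnection implements Closeable {
    boolean closed = false;
    @Override
    public void close() {
        closed = true;
    }
}

public class CloseBoth {
    // Close the mutator first, but guarantee the connection is released
    // even when mutator.close() throws.
    static void closeBoth(Closeable mutator, Closeable connection) throws IOException {
        try {
            mutator.close();
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        TrackedConnection conn = new TrackedConnection();
        try {
            closeBoth(new FailingMutator(), conn);
        } catch (IOException expected) {
            // the mutator failure still propagates to the caller
        }
        System.out.println(conn.closed); // true: connection closed despite the exception
    }
}
```

The same effect can be had with try-with-resources, which closes both resources and attaches the mutator's exception as a suppressed exception if both closes fail.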





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Attachment: HBASE-18180.patch
HBASE-18180-branch-1.patch

Reattaching same patch files for QA run. 

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
> Attachments: HBASE-18180-branch-1.patch, HBASE-18180.patch
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Attachment: (was: HBASE-18180.patch)

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Updated] (HBASE-18180) Possible connection leak while closing BufferedMutator in TableOutputFormat

2017-06-13 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-18180:
-
Attachment: (was: HBASE-18180-branch-1.patch)

> Possible connection leak while closing BufferedMutator in TableOutputFormat
> ---
>
> Key: HBASE-18180
> URL: https://issues.apache.org/jira/browse/HBASE-18180
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 3.0.0, 1.4.0
>
>
> In TableOutputFormat, the connection will not be released when 
> "mutator.close()" throws an exception.
> org.apache.hadoop.hbase.mapreduce.TableOutputFormat
> {code}
> public void close(TaskAttemptContext context)
> throws IOException {
>   mutator.close();
>   connection.close();
> }
> {code}
> org.apache.hadoop.hbase.mapred.TableOutputFormat
> {code}
> public void close(Reporter reporter) throws IOException {
>   this.m_mutator.close();
>   if (connection != null) {
> connection.close();
> connection = null;
>   }
> }
> {code}





[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047684#comment-16047684
 ] 

Hudson commented on HBASE-18200:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3187 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3187/])
HBASE-18200 Set hadoop check versions for branch-2 and branch-2.x in pre 
(zhangduo: rev f5768b4306afa676342663181f3b4e0c3f6a260d)
* (edit) dev-support/hbase-personality.sh


> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-18200.patch
>
>
> Now it will use the hadoop versions for branch-1.
> I do not know how to set the fix versions as the code will be committed to 
> master but the branch in trouble is branch-2...





[jira] [Commented] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047670#comment-16047670
 ] 

Hadoop QA commented on HBASE-17008:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
76m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 9s 
{color} | {color:red} hbase-examples generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hbase-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
11s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 100m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-examples |
|  |  Null passed for non-null parameter of 
java.util.concurrent.CompletableFuture.completedFuture(Object) in 
org.apache.hadoop.hbase.client.example.AsyncClientExample.closeConn()  At 
AsyncClientExample.java:of 
java.util.concurrent.CompletableFuture.completedFuture(Object) in 
org.apache.hadoop.hbase.client.example.AsyncClientExample.closeConn()  At 
AsyncClientExample.java:[line 103] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872819/HBASE-17008-v1.patch |
| JIRA Issue | HBASE-17008 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux b8fe013737ee 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 384e308 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7178/artifact/patchprocess/new-findbugs-hbase-examples.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7178/testReport/ |
| modules | C: hbase-examples U: hbase-examples |
| Console output | 

[jira] [Updated] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18179:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)

> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Updated] (HBASE-18179) Add new hadoop releases to the pre commit hadoop check

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18179:
--
Attachment: test-branch-2.patch

Trigger a branch-2 build to see if we run the right hadoop version check.

> Add new hadoop releases to the pre commit hadoop check
> --
>
> Key: HBASE-18179
> URL: https://issues.apache.org/jira/browse/HBASE-18179
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
> Attachments: test-branch-2.patch
>
>
> 3.0.0-alpha3 is out, we should replace the old alpha2 release with alpha3. 
> And we should add new 2.x releases also.





[jira] [Updated] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18200:
--
Release Note: Allow setting different hadoop check versions for branch-2 
and branch-2.x when running pre commit check.  (was: Allow set different hadoop 
check versions for branch-2 and branch-2.x when running pre commit check.)

> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-18200.patch
>
>
> Now it will use the hadoop versions for branch-1.
> I do not know how to set the fix versions as the code will be committed to 
> master but the branch in trouble is branch-2...





[jira] [Updated] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18200:
--
  Resolution: Fixed
Release Note: Allow set different hadoop check versions for branch-2 and 
branch-2.x when running pre commit check.
  Status: Resolved  (was: Patch Available)

Pushed to master.

> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-18200.patch
>
>
> Now it will use the hadoop versions for branch-1.
> I do not know how to set the fix versions as the code will be committed to 
> master but the branch in trouble is branch-2...





[jira] [Commented] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047562#comment-16047562
 ] 

Duo Zhang commented on HBASE-17008:
---

[~stack] [~carp84] I think the example is enough.

For the documentation, I think we can add a new section right after '66.2. 
WriteBuffer and Batch Methods'. And I prefer to open a new issue to address it.

Thanks.

> Examples, Doc, and Helper Classes to make AsyncClient go down easy
> --
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.
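The connection-caching idea mentioned above can be sketched without any HBase dependency: hold the asynchronously-created connection in an AtomicReference of a CompletableFuture, so that the first caller triggers creation and every later caller reuses the shared future. The names below (CachedAsyncConn, DummyConnection, getConn) are illustrative stand-ins, not the actual AsyncClientExample code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class CachedAsyncConn {
    // Stand-in for AsyncConnection; the real type lives in hbase-client.
    static class DummyConnection {}

    private static final AtomicReference<CompletableFuture<DummyConnection>> CACHE =
        new AtomicReference<>();

    // Hand out one shared future: the first caller triggers creation,
    // every later caller reuses the cached (possibly still in-flight) future.
    static CompletableFuture<DummyConnection> getConn() {
        CompletableFuture<DummyConnection> f = CACHE.get();
        if (f != null) {
            return f;
        }
        CompletableFuture<DummyConnection> created = new CompletableFuture<>();
        if (CACHE.compareAndSet(null, created)) {
            // Won the race: perform the (here simulated) asynchronous creation.
            created.complete(new DummyConnection());
            return created;
        }
        // Lost the race: another thread already installed a future.
        return CACHE.get();
    }

    public static void main(String[] args) {
        DummyConnection a = getConn().join();
        DummyConnection b = getConn().join();
        System.out.println(a == b); // true: both callers share one connection
    }
}
```

Caching the future (rather than the connection itself) means concurrent first callers never create two connections: losers of the compareAndSet race simply wait on the winner's future.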





[jira] [Updated] (HBASE-17994) Add async client test to Performance Evaluation tool

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17994:
--
Parent Issue: HBASE-17856  (was: HBASE-16833)

> Add async client test to Performance Evaluation tool
> 
>
> Key: HBASE-17994
> URL: https://issues.apache.org/jira/browse/HBASE-17994
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>






[jira] [Updated] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17008:
--
Affects Version/s: 2.0.0-alpha-1
   3.0.0
Fix Version/s: (was: 2.0.0)
   2.0.0-alpha-2
   3.0.0

> Examples, Doc, and Helper Classes to make AsyncClient go down easy
> --
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Updated] (HBASE-17008) Examples, Doc, and Helper Classes to make AsyncClient go down easy

2017-06-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17008:
--
Attachment: HBASE-17008-v1.patch

Add an HttpProxyExample to show how to use RawAsyncTable to write fully 
asynchronous code.

> Examples, Doc, and Helper Classes to make AsyncClient go down easy
> --
>
> Key: HBASE-17008
> URL: https://issues.apache.org/jira/browse/HBASE-17008
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17008.patch, HBASE-17008-v1.patch
>
>
> The parent issue is about delivering a new, async client. The new client 
> operates at a pretty low level. There will be questions on how best to use it.
> Some have come up already over in HBASE-16991. In particular, [~Apache9] and 
> [~carp84] talk about the tier they have to put on top of hbase because its 
> API is not user-friendly.
> This issue is about adding in the examples, docs., and helper classes we need 
> to make the new async client more palatable to mortals. See HBASE-16991 for 
> instance for an example of how to cache an AsyncConnection that an 
> application might make use of.





[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047514#comment-16047514
 ] 

Zheng Hu commented on HBASE-17678:
--

So, any concerns about the patches for branch-1*?

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (not-duplicated)  within a page, but 

[jira] [Created] (HBASE-18211) Encryption of existing data in Stripe Compaction

2017-06-13 Thread Karthick (JIRA)
Karthick created HBASE-18211:


 Summary: Encryption of existing data in Stripe Compaction
 Key: HBASE-18211
 URL: https://issues.apache.org/jira/browse/HBASE-18211
 Project: HBase
  Issue Type: Bug
  Components: Compaction, encryption
Reporter: Karthick
Priority: Critical


We have a table holding time-series data with Stripe Compaction enabled. After 
encryption was enabled for this table, newer entries are encrypted on insert. 
However, to encrypt the existing data in the table, a major compaction has to 
run. Since stripe compaction doesn't allow a major compaction to run, we are 
unable to encrypt the previous data. 

see this 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java





[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047497#comment-16047497
 ] 

Ted Yu commented on HBASE-17678:


Failed tests were not related to Filter.

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (not-duplicated)  within a page, but not 

[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047495#comment-16047495
 ] 

Hadoop QA commented on HBASE-17678:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
50s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
14s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
19s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} hbase-common-jdk1.8.0_131 with JDK v1.8.0_131 generated 
0 new + 18 unchanged - 18 fixed = 18 total (was 36) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0_131. 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} hbase-server-jdk1.8.0_131 with JDK v1.8.0_131 generated 
0 new + 5 unchanged - 5 fixed = 5 total (was 10) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-common-jdk1.7.0_131 with JDK v1.7.0_131 generated 
0 new + 18 unchanged - 18 fixed = 18 total (was 36) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_131. 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} hbase-server-jdk1.7.0_131 with JDK v1.7.0_131 generated 
0 new + 5 unchanged - 5 fixed = 5 total (was 10) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 33s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} the patch passed {color} |
| 

[jira] [Commented] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047493#comment-16047493
 ] 

Duo Zhang commented on HBASE-18200:
---

Will commit shortly if no objections. Will continue the testing in HBASE-18179.

Thanks.

> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-18200.patch
>
>
> Now it will use the hadoop versions for branch-1.
> I do not know how to set the fix versions as the code will be committed to 
> master but the branch in trouble is branch-2...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18128) compaction marker could be skipped

2017-06-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047484#comment-16047484
 ] 

Hadoop QA commented on HBASE-18128:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 125m 58s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 174m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.coprocessor.TestCoprocessorMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872798/HBASE-18128-master-v3.patch
 |
| JIRA Issue | HBASE-18128 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 5e2ce2dbfd94 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 384e308 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7176/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7176/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7176/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7176/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> compaction marker could be skipped 
> ---
>
> Key: HBASE-18128
>