[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-11-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256612#comment-16256612
 ] 

ramkrishna.s.vasudevan commented on HBASE-19112:


bq. The cells we get from Result are of type Cell and we cannot change the
return type there, as Result is publicly exposed.
In CPs I think we can actually manage it the way the postGet hook does: it
passes the list of cells to the post hook, and only after that is the Result
created. We can do the same for the postScan, postIncr and postAppend hooks,
and later create the Result out of it.
It needs a CP hook change, and I think that can be done now in 2.0?
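A minimal Java sketch of the idea, purely for illustration: the hook interface
and method names below are hypothetical (they are not the actual coprocessor
API); only Result.create(...) and the parameter types are existing HBase classes.
{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Hypothetical observer hook: the region server hands the raw list of cells to
// the CP and builds the client-facing Result only afterwards, so the hook never
// needs to unwrap (or re-create) a Result itself.
public interface CellListRegionObserver {
  List<Cell> postIncrement(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Increment increment, List<Cell> cells) throws IOException;
}

// Sketch of the server-side call site:
//   List<Cell> cells = doIncrement(increment);       // internal result cells
//   cells = host.postIncrement(increment, cells);    // CP hooks run on cells
//   Result result = Result.create(cells);            // Result built last
{code}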

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19291) Use common header and footer for JSP pages

2017-11-16 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19291:
-
Description: 
Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
(copy-paste of code)

(Been sitting in my local repo for long, best to get following pesky 
user-facing things fixed before the next major release)
Misc edits:
- Due to redundancy, new additions make it to some places but not others. For
example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
places.
- Fix processMaster.jsp wrongly pointing to rs-status instead of master-status
(probably due to copy-paste from processRS.jsp)
- Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
- Added missing  tag in snapshot.jsp
- Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted in
commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
- Fixed wrongly matched heading tags
- Deleted some unused variables


Tested:
Ran standalone cluster and opened each page to make sure it looked right.

Sidenote:
Looks like HBASE-3835 started the work of converting from jsp to jamon, but the 
work didn't finish. Now we have a mix of jsp and jamon. Needs reconciling, but 
later.

  was:
Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
(copy-paste of code)
(Been sitting in my local repo for long, best to get those pesky user-facing 
things fixed before the next major release)

Misc edits:
- Due to redundancy, new additions make it to some places but not others. For
example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
places.
- Fix processMaster.jsp wrongly pointing to rs-status instead of master-status
(probably due to copy-paste from processRS.jsp)
- Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
- Added missing  tag in snapshot.jsp
- Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted in
commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
- Fixed wrongly matched heading tags
- Deleted some unused variables


Tested:
Ran standalone cluster and opened each page to make sure it looked right.

Sidenote:
Looks like HBASE-3835 started the work of converting from jsp to jamon, but the 
work didn't finish. Now we have a mix of jsp and jamon. Needs reconciling, but 
later.


> Use common header and footer for JSP pages
> --
>
> Key: HBASE-19291
> URL: https://issues.apache.org/jira/browse/HBASE-19291
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19291.master.001.patch
>
>
> Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
> (copy-paste of code)
> (Been sitting in my local repo for long, best to get following pesky 
> user-facing things fixed before the next major release)
> Misc edits:
> - Due to redundancy, new additions make it to some places but not others. For
> example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
> places.
> - Fix processMaster.jsp wrongly pointing to rs-status instead of
> master-status (probably due to copy-paste from processRS.jsp)
> - Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
> - Added missing  tag in snapshot.jsp
> - Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted
> in commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
> - Fixed wrongly matched heading tags
> - Deleted some unused variables
> Tested:
> Ran standalone cluster and opened each page to make sure it looked right.
> Sidenote:
> Looks like HBASE-3835 started the work of converting from jsp to jamon, but 
> the work didn't finish. Now we have a mix of jsp and jamon. Needs 
> reconciling, but later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19292) Numerical comparison support for column value - cell

2017-11-16 Thread Kayla Fisher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kayla Fisher updated HBASE-19292:
-
Description: 
I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882

According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.

Currently I am unable to follow your spectacular project. I have read the
source code of BinaryComparator; it's really vague for a newbie in Java.
Could you please add support for numerical comparison or give me some advice to
meet my needs. :D


  was:
I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882

According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.

Currently I am unable to follow your spectacular project. I have read the
source code of BinaryComparator; it's really vague for a newbie in Java.
Could you please add support for numerical comparison or give some advice to
meet my needs.



> Numerical comparison support for column value - cell
> 
>
> Key: HBASE-19292
> URL: https://issues.apache.org/jira/browse/HBASE-19292
> Project: HBase
>  Issue Type: Wish
>  Components: Filters
>Affects Versions: 1.3.1
> Environment: may not be related
>Reporter: Kayla Fisher
>  Labels: features, newbie, patch
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I've gotten the following data set:
> rowkey1 test:duration =43425
> rowkey2 test:duration = 5000
> rowkey3 test:duration = 90
> rowkey4 test:duration =8882
> According to your filter language, if I want the data in a specific duration
> range, e.g. 2000, like this:
> {FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
> SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
> I finally got nothing, since I found out that the comparator I used is
> BinaryComparator, which compares byte by byte, i.e. `90` is greater than
> `800`, which was a little surprising.
>
> Currently I am unable to follow your spectacular project. I have read the
> source code of BinaryComparator; it's really vague for a newbie in Java.
> Could you please add support for numerical comparison or give me some advice
> to meet my needs. :D
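A common workaround, sketched below under the assumption that the duration
column can be (re)written as a fixed-width 8-byte long via Bytes.toBytes(long):
for non-negative values the big-endian encoding sorts numerically, so the stock
BinaryComparator then gives the expected range behaviour. The family and
qualifier names follow the example above; the upper bound 10000 is only an
illustrative value, since the range in the report is truncated.
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class DurationRangeScan {
  // Builds a scan for 2000 <= duration <= 10000, assuming the duration column
  // was written with Bytes.toBytes(long) (fixed-width, big-endian), where byte
  // order and numeric order agree for non-negative values.
  public static Scan buildScan() {
    byte[] family = Bytes.toBytes("test");
    byte[] qualifier = Bytes.toBytes("duration");

    SingleColumnValueFilter lower = new SingleColumnValueFilter(family, qualifier,
        CompareFilter.CompareOp.GREATER_OR_EQUAL,
        new BinaryComparator(Bytes.toBytes(2000L)));
    SingleColumnValueFilter upper = new SingleColumnValueFilter(family, qualifier,
        CompareFilter.CompareOp.LESS_OR_EQUAL,
        new BinaryComparator(Bytes.toBytes(10000L)));

    Scan scan = new Scan();
    scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL, lower, upper));
    return scan;
  }
}
{code}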



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19291) Use common header and footer for JSP pages

2017-11-16 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19291:
-
Description: 
Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
(copy-paste of code)
(Been sitting in my local repo for long, best to get those pesky user-facing 
things fixed before the next major release)

Misc edits:
- Due to redundancy, new additions make it to some places but not others. For
example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
places.
- Fix processMaster.jsp wrongly pointing to rs-status instead of master-status
(probably due to copy-paste from processRS.jsp)
- Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
- Added missing  tag in snapshot.jsp
- Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted in
commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
- Fixed wrongly matched heading tags
- Deleted some unused variables


Tested:
Ran standalone cluster and opened each page to make sure it looked right.

Sidenote:
Looks like HBASE-3835 started the work of converting from jsp to jamon, but the 
work didn't finish. Now we have a mix of jsp and jamon. Needs reconciling, but 
later.

  was:
Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
(copy-paste of code)

Misc edits:
- Due to redundancy, new additions make it to some places but not others. For
example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
places.
- Fix processMaster.jsp wrongly pointing to rs-status instead of master-status
(probably due to copy-paste from processRS.jsp)
- Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
- Added missing  tag in snapshot.jsp
- Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted in
commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
- Fixed wrongly matched heading tags
- Deleted some unused variables


Tested:
Ran standalone cluster and opened each page to make sure it looked right.

Sidenote:
Looks like HBASE-3835 started the work of converting from jsp to jamon, but the 
work didn't finish. Now we have a mix of jsp and jamon. Needs reconciling, but 
later.


> Use common header and footer for JSP pages
> --
>
> Key: HBASE-19291
> URL: https://issues.apache.org/jira/browse/HBASE-19291
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19291.master.001.patch
>
>
> Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
> (copy-paste of code)
> (Been sitting in my local repo for long, best to get those pesky user-facing 
> things fixed before the next major release)
> Misc edits:
> - Due to redundancy, new additions make it to some places but not others. For
> example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
> places.
> - Fix processMaster.jsp wrongly pointing to rs-status instead of
> master-status (probably due to copy-paste from processRS.jsp)
> - Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
> - Added missing  tag in snapshot.jsp
> - Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted
> in commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
> - Fixed wrongly matched heading tags
> - Deleted some unused variables
> Tested:
> Ran standalone cluster and opened each page to make sure it looked right.
> Sidenote:
> Looks like HBASE-3835 started the work of converting from jsp to jamon, but 
> the work didn't finish. Now we have a mix of jsp and jamon. Needs 
> reconciling, but later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19292) Numerical comparison support for column value - cell

2017-11-16 Thread Kayla Fisher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kayla Fisher updated HBASE-19292:
-
Description: 
I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882

According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.

Currently I am unable to follow your spectacular project. I have read the
source code of BinaryComparator; it's really vague for a newbie in Java.
Could you please add support for numerical comparison or give some advice to
meet my needs.


  was:
I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882
According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.

Currently I am unable to follow your spectacular project. I have read the
source code of BinaryComparator; it's really vague for a newbie in Java.
Could you please add support for numerical comparison or give some advice to
meet my needs.



> Numerical comparison support for column value - cell
> 
>
> Key: HBASE-19292
> URL: https://issues.apache.org/jira/browse/HBASE-19292
> Project: HBase
>  Issue Type: Wish
>  Components: Filters
>Affects Versions: 1.3.1
> Environment: may not be related
>Reporter: Kayla Fisher
>  Labels: features, newbie, patch
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I've gotten the following data set:
> rowkey1 test:duration =43425
> rowkey2 test:duration = 5000
> rowkey3 test:duration = 90
> rowkey4 test:duration =8882
> According to your filter language, if I want the data in a specific duration
> range, e.g. 2000, like this:
> {FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
> SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
> I finally got nothing, since I found out that the comparator I used is
> BinaryComparator, which compares byte by byte, i.e. `90` is greater than
> `800`, which was a little surprising.
>
> Currently I am unable to follow your spectacular project. I have read the
> source code of BinaryComparator; it's really vague for a newbie in Java.
> Could you please add support for numerical comparison or give some advice to
> meet my needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19291) Use common header and footer for JSP pages

2017-11-16 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19291:
-
Status: Patch Available  (was: Open)

> Use common header and footer for JSP pages
> --
>
> Key: HBASE-19291
> URL: https://issues.apache.org/jira/browse/HBASE-19291
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19291.master.001.patch
>
>
> Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
> (copy-paste of code)
> Misc edits:
> - Due to redundancy, new additions make it to some places but not others. For
> example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
> places.
> - Fix processMaster.jsp wrongly pointing to rs-status instead of
> master-status (probably due to copy-paste from processRS.jsp)
> - Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
> - Added missing  tag in snapshot.jsp
> - Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted
> in commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
> - Fixed wrongly matched heading tags
> - Deleted some unused variables
> Tested:
> Ran standalone cluster and opened each page to make sure it looked right.
> Sidenote:
> Looks like HBASE-3835 started the work of converting from jsp to jamon, but 
> the work didn't finish. Now we have a mix of jsp and jamon. Needs 
> reconciling, but later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19292) Numerical comparison support for column value - cell

2017-11-16 Thread Kayla Fisher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kayla Fisher updated HBASE-19292:
-
Labels: features newbie patch  (was: patch)

> Numerical comparison support for column value - cell
> 
>
> Key: HBASE-19292
> URL: https://issues.apache.org/jira/browse/HBASE-19292
> Project: HBase
>  Issue Type: Wish
>  Components: Filters
>Affects Versions: 1.3.1
> Environment: may not be related
>Reporter: Kayla Fisher
>  Labels: features, newbie, patch
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I've gotten the following data set:
> rowkey1 test:duration =43425
> rowkey2 test:duration = 5000
> rowkey3 test:duration = 90
> rowkey4 test:duration =8882
> According to your filter language, if I want the data in a specific duration
> range, e.g. 2000, like this:
> {FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
> SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
> I finally got nothing, since I found out that the comparator I used is
> BinaryComparator, which compares byte by byte, i.e. `90` is greater than
> `800`, which was a little surprising.
> Currently I am unable to follow your spectacular project. I have read the
> source code of BinaryComparator; it's really vague for a newbie in Java.
> Could you please add support for numerical comparison or give some advice to
> meet my needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19292) Numerical comparison support for column value - cell

2017-11-16 Thread Kayla Fisher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kayla Fisher updated HBASE-19292:
-
Description: 
I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882
According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.
Currently I am unable to follow your spectacular project. I have read the
source code of BinaryComparator; it's really vague for a newbie in Java.
Could you please add support for numerical comparison or give some advice to
meet my needs.


  was:
I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882
According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.
Currently I am unable to follow your spectacular project in Java; could you
please add support for numerical comparison or give some advice to meet my
needs.



> Numerical comparison support for column value - cell
> 
>
> Key: HBASE-19292
> URL: https://issues.apache.org/jira/browse/HBASE-19292
> Project: HBase
>  Issue Type: Wish
>  Components: Filters
>Affects Versions: 1.3.1
> Environment: may not be related
>Reporter: Kayla Fisher
>  Labels: patch
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I've gotten the following data set:
> rowkey1 test:duration =43425
> rowkey2 test:duration = 5000
> rowkey3 test:duration = 90
> rowkey4 test:duration =8882
> According to your filter language, if I want the data in a specific duration
> range, e.g. 2000, like this:
> {FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
> SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
> I finally got nothing, since I found out that the comparator I used is
> BinaryComparator, which compares byte by byte, i.e. `90` is greater than
> `800`, which was a little surprising.
> Currently I am unable to follow your spectacular project. I have read the
> source code of BinaryComparator; it's really vague for a newbie in Java.
> Could you please add support for numerical comparison or give some advice to
> meet my needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18911) Unify Admin and AsyncAdmin's methods name

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256605#comment-16256605
 ] 

Hudson commented on HBASE-18911:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4066 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4066/])
HBASE-18911 Unify Admin and AsyncAdmin's methods name (zghao: rev 
52273aa8f3221e11489004bacba4f4b6eb05f5c3)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncDecommissionAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncToolAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
* (add) 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestInterfaceAlign.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncClusterAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcedureAdminApi.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncReplicationAdminApiWithClusters.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java


> Unify Admin and AsyncAdmin's methods name
> -
>
> Key: HBASE-18911
> URL: https://issues.apache.org/jira/browse/HBASE-18911
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18911.master.001.patch, 
> HBASE-18911.master.002.patch, HBASE-18911.master.003.patch, 
> HBASE-18911.master.004.patch
>
>
> Different Methods
> || AsyncAdmin || Admin || unified name ||
> | listTables | listTableDescriptors | listTableDescriptors |
> | getOnlineRegions | getRegions | getRegions |
> | getTableRegions | getRegions | getRegions |
> | getTableDescriptor | getDescriptor | getDescriptor |
> | getRegionLoads | getRegionLoad | getRegionLoads |
> | execProcedureWithRet | execProcedureWithReturn | execProcedureWithReturn |
> | setNormalizerOn | normalizerSwitch | normalizerSwitch |
> | isNormalizerOn | isNormalizerEnabled | isNormalizerEnabled |
> | setBalancerOn | balancerSwitch | balancerSwitch |
> | isBalancerOn | isBalancerEnabled | isBalancerEnabled |
> | setCleanerChoreOn | cleanerChoreSwitch | cleanerChoreSwitch |
> | isCleanerChoreOn | isCleanerChoreEnabled | isCleanerChoreEnabled |
> | setCatalogJanitorOn | catalogJanitorSwitch | catalogJanitorSwitch |
> | isCatalogJanitorOn | isCatalogJanitorEnabled | isCatalogJanitorEnabled |
> | setSplitOn/setMergeOn | splitOrMergeEnabledSwitch | splitSwitch/mergeSwitch |
> | isSplitOn/isMergeOn | isSplitOrMergeEnabled | isSplitEnabled/isMergeEnabled |
> Methods only in AsyncAdmin
> || AsyncAdmin ||
> | majorCompactRegionServer |
> | getMaster |
> | getBackupMasters |
> | getRegionServers |
> Methods only in Admin
> || Admin ||
> | listTableDescriptorsByNamespace |
> | listTableNamesByNamespace |
> | modifyTable |
> | getMasterCoprocessors |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19273) IntegrationTestBulkLoad#installSlowingCoproc() uses read-only HTableDescriptor

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256606#comment-16256606
 ] 

Hudson commented on HBASE-19273:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4066 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4066/])
HBASE-19273 IntegrationTestBulkLoad#installSlowingCoproc() uses (tedyu: rev 
e6e731cb861b6c2412ce06ee87a128998a502d66)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java


> IntegrationTestBulkLoad#installSlowingCoproc() uses read-only HTableDescriptor
> --
>
> Key: HBASE-19273
> URL: https://issues.apache.org/jira/browse/HBASE-19273
> Project: HBase
>  Issue Type: Test
>Reporter: Romil Choksi
>Assignee: Ted Yu
> Attachments: 19273.v1.txt, 19273.v2.txt
>
>
> [~romil.choksi] reported the following :
> {code}
> 2017-11-15 23:03:04,455 ERROR [main] util.AbstractHBaseTool: Error running 
> command-line tool
> java.lang.UnsupportedOperationException: HTableDescriptor is read-only
> at 
> org.apache.hadoop.hbase.client.ImmutableHTableDescriptor.getDelegateeForModification(ImmutableHTableDescriptor.java:59)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.addCoprocessor(HTableDescriptor.java:710)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.installSlowingCoproc(IntegrationTestBulkLoad.java:215)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:222)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runTestFromCommandLine(IntegrationTestBulkLoad.java:790)
> at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:155)
> {code}
> This is due to a read-only descriptor being used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19292) Numerical comparison support for column value - cell

2017-11-16 Thread Kayla Fisher (JIRA)
Kayla Fisher created HBASE-19292:


 Summary: Numerical comparison support for column value - cell
 Key: HBASE-19292
 URL: https://issues.apache.org/jira/browse/HBASE-19292
 Project: HBase
  Issue Type: Wish
  Components: Filters
Affects Versions: 1.3.1
 Environment: may not be related
Reporter: Kayla Fisher


I've gotten the following data set:

rowkey1 test:duration =43425
rowkey2 test:duration = 5000
rowkey3 test:duration = 90
rowkey4 test:duration =8882
According to your filter language, if I want the data in a specific duration
range, e.g. 2000, like this:
{FILTER => "SingleColumnValueFilter('test', 'duration', <=, 'binary:1') AND
SingleColumnValueFilter('test', 'duration', >=, 'binary:2000')"}
I finally got nothing, since I found out that the comparator I used is
BinaryComparator, which compares byte by byte, i.e. `90` is greater than `800`,
which was a little surprising.
Currently I am unable to follow your spectacular project in Java; could you
please add support for numerical comparison or give some advice to meet my
needs.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-11-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256603#comment-16256603
 ] 

Anoop Sam John commented on HBASE-19112:


So RawCell is CP exposed. We have some APIs like getTags() there which CP
users might want to use. The issue is that it is the Cell type that flows
throughout the server code base, even to CPs. So CP users have to know that it
is actually RawCell instances coming in everywhere, and have to type-cast to
call these APIs. This is ugly.
Otherwise we have to change the flows on the server side so as to pass the
RawCell type instead. This also we cannot do fully, because some hooks like
postGet pass a Result to the CP. The cells we get from Result are of type Cell
and we cannot change the return type there, as Result is publicly exposed.
Likewise there may be conflicts in areas like Filter etc. All in all this would
be a huge change. I mostly agree with the idea, but I am not at all sure we can
do it for 2.0.
In the other issue about exposing tags to CPs, Ram and I were discussing this
same topic.
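To make the "type cast and call" problem concrete, a rough sketch follows; it
assumes a getTags()-style accessor on RawCell as discussed elsewhere on this
issue, and the helper class itself is purely illustrative.
{code}
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.RawCell;
import org.apache.hadoop.hbase.Tag;

public final class TagReadingSketch {
  // A CP only ever receives plain Cell, so reading tags means "just knowing"
  // that the instances are really RawCell (or ExtendedCell) and down-casting.
  static void inspectTags(List<Cell> cells) {
    for (Cell cell : cells) {
      if (cell instanceof RawCell) {       // the cast CP users would have to make
        Iterator<Tag> tags = ((RawCell) cell).getTags();
        while (tags.hasNext()) {
          Tag tag = tags.next();
          // ... inspect tag.getType(), tag.getValueLength(), etc. ...
        }
      }
    }
  }
}
{code}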

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19291) Use common header and footer for JSP pages

2017-11-16 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19291:
-
Attachment: HBASE-19291.master.001.patch

> Use common header and footer for JSP pages
> --
>
> Key: HBASE-19291
> URL: https://issues.apache.org/jira/browse/HBASE-19291
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19291.master.001.patch
>
>
> Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
> (copy-paste of code)
> Misc edits:
> - Due to redundancy, new additions make it to some places but not others. For
> example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
> places.
> - Fix processMaster.jsp wrongly pointing to rs-status instead of
> master-status (probably due to copy-paste from processRS.jsp)
> - Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
> - Added missing  tag in snapshot.jsp
> - Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted
> in commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
> - Fixed wrongly matched heading tags
> - Deleted some unused variables
> Tested:
> Ran standalone cluster and opened each page to make sure it looked right.
> Sidenote:
> Looks like HBASE-3835 started the work of converting from jsp to jamon, but 
> the work didn't finish. Now we have a mix of jsp and jamon. Needs 
> reconciling, but later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19291) Use common header and footer for JSP pages

2017-11-16 Thread Appy (JIRA)
Appy created HBASE-19291:


 Summary: Use common header and footer for JSP pages
 Key: HBASE-19291
 URL: https://issues.apache.org/jira/browse/HBASE-19291
 Project: HBase
  Issue Type: Bug
Reporter: Appy
Assignee: Appy


Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
(copy-paste of code)

Misc edits:
- Due to redundancy, new additions make it to some places but not others. For
example, there are missing links to "/logLevel" and "/processRS.jsp" in a few
places.
- Fix processMaster.jsp wrongly pointing to rs-status instead of master-status
(probably due to copy-paste from processRS.jsp)
- Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
- Added missing  tag in snapshot.jsp
- Deleted fossils of html5shiv.js. Its uses and the JS itself were deleted in
commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
- Fixed wrongly matched heading tags
- Deleted some unused variables


Tested:
Ran standalone cluster and opened each page to make sure it looked right.

Sidenote:
Looks like HBASE-3835 started the work of converting from jsp to jamon, but the 
work didn't finish. Now we have a mix of jsp and jamon. Needs reconciling, but 
later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19092) Make Tag IA.LimitedPrivate and expose for CPs

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256598#comment-16256598
 ] 

stack commented on HBASE-19092:
---

Yeah, would be good if CPs didn't have to concern themselves w/ type.

ECB doesn't do BB-backed cells? Remind me how these get made?

+1 on next version of the patch (smile).

> Make Tag IA.LimitedPrivate and expose for CPs
> -
>
> Key: HBASE-19092
> URL: https://issues.apache.org/jira/browse/HBASE-19092
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19092-branch-2.patch, 
> HBASE-19092-branch-2_5.patch, HBASE-19092-branch-2_5.patch, 
> HBASE-19092.branch-2.0.02.patch, HBASE-19092_001-branch-2.patch, 
> HBASE-19092_001.patch, HBASE-19092_002-branch-2.patch, HBASE-19092_002.patch
>
>
> We need to make Tag LimitedPrivate, as some use cases, such as the timeline
> server, are trying to use tags. The same topic was discussed on dev@ and also
> in HBASE-18995.
> Shall we target this for beta1 - cc [~saint@gmail.com].
> So once we do this all related Util methods and APIs should also move to 
> LimitedPrivate Util classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-11-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256592#comment-16256592
 ] 

Chia-Ping Tsai commented on HBASE-19112:


bq. are you saying that getRowArray and getRowByteBuffer() both will be in
Cell now?
I don't want to add {{getRowByteBuffer}} to {{Cell}}, because it differs
considerably from the other methods in {{Cell}}. Adding the ByteBuffer
accessors to {{Cell}} would make for a complicated interface.

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256591#comment-16256591
 ] 

Hadoop QA commented on HBASE-19123:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
35s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
47s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hbase-server: The patch generated 0 new + 34 
unchanged - 3 fixed = 34 total (was 37) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  2m 
51s{color} | {color:red} patch has 20 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m 
30s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
7s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
48s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
29s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m  
5s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 
40s{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 14m 
12s{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 15m 
48s{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 17m 
31s{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 19m  
5s{color} | {color:red} The patch causes 20 errors with Hadoop v3.0.0-alpha4. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 

[jira] [Commented] (HBASE-19290) Reduce zk request when doing split log

2017-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256585#comment-16256585
 ] 

binlijin commented on HBASE-19290:
--

The patch is effective, and we have tested it on our 2000+ node production
cluster.

> Reduce zk request when doing split log
> --
>
> Key: HBASE-19290
> URL: https://issues.apache.org/jira/browse/HBASE-19290
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-19290.master.001.patch
>
>
> We observe that once the cluster has 1000+ nodes, and hundreds of nodes abort
> and start splitting logs, the split is very slow, and we find the region
> servers and the master waiting on ZooKeeper responses, so we need to reduce
> ZooKeeper requests and pressure for big clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19290) Reduce zk request when doing split log

2017-11-16 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-19290:
-
Attachment: HBASE-19290.master.001.patch

> Reduce zk request when doing split log
> --
>
> Key: HBASE-19290
> URL: https://issues.apache.org/jira/browse/HBASE-19290
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-19290.master.001.patch
>
>
> We observe that once the cluster has 1000+ nodes, and hundreds of nodes abort
> and start splitting logs, the split is very slow, and we find the region
> servers and the master waiting on ZooKeeper responses, so we need to reduce
> ZooKeeper requests and pressure for big clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19290) Reduce zk request when doing split log

2017-11-16 Thread binlijin (JIRA)
binlijin created HBASE-19290:


 Summary: Reduce zk request when doing split log
 Key: HBASE-19290
 URL: https://issues.apache.org/jira/browse/HBASE-19290
 Project: HBase
  Issue Type: Improvement
Reporter: binlijin
Assignee: binlijin


We observe that once the cluster has 1000+ nodes, and hundreds of nodes abort
and start splitting logs, the split is very slow, and we find the region
servers and the master waiting on ZooKeeper responses, so we need to reduce
ZooKeeper requests and pressure for big clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-11-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256576#comment-16256576
 ] 

ramkrishna.s.vasudevan commented on HBASE-19112:


bq. getRowXXX();
So seeing this getRowXXX() and other variants in Cell - are you saying that
getRowArray() and getRowByteBuffer() will both be in Cell now?

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19216) Use procedure to execute replication peer related operations

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256574#comment-16256574
 ] 

stack commented on HBASE-19216:
---

bq. But I would like to make the reportProcedureDone more general. So procedure 
id will always be presented, and also a serialized protobuf message. We can 
encode the peer id in the protobuf message?

Yeah. This makes sense. Finding a procedure with a pid would be best -- most
general -- but we don't have such a lookup at the moment. Let me check it out.
And then the suspend is done w/ the ProcedureEvent, which is separate from the
Procedure (as you say above).
And I like the way you are trying to do a general soln, because this 'bus',
once open, will be flooded w/ all sorts of cluster messaging.

> Use procedure to execute replication peer related operations
> 
>
> Key: HBASE-19216
> URL: https://issues.apache.org/jira/browse/HBASE-19216
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>
> When building the basic framework for HBASE-19064, I found that the 
> enable/disable peer is built upon the watcher of zk.
> The problem of using watcher is that, you do not know the exact time when all 
> RSes in the cluster have done the change, it is a 'eventually done'. 
> And for synchronous replication, when changing the state of a replication 
> peer, we need to know the exact time as we can only enable read/write after 
> that time. So I think we'd better use procedure to do this. Change the flag 
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as a 
> storage, so it will be easy to implement another replication peer storage to 
> replace zk so that we can reduce the dependency on zk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19148) Edit of default configuration

2017-11-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256572#comment-16256572
 ] 

Guanghao Zhang commented on HBASE-19148:


+1 for 10. The request will fail fast for some exceptions, like
RegionMovedException or NotServingException. So 3 is too small.

> Edit of default configuration
> -
>
> Key: HBASE-19148
> URL: https://issues.apache.org/jira/browse/HBASE-19148
> Project: HBase
>  Issue Type: Bug
>  Components: defaults
>Reporter: stack
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> Remove cruft and mythologies. Make descriptions more digestible. Change 
> defaults given experience.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256569#comment-16256569
 ] 

stack commented on HBASE-19123:
---

.003 addresses a nice finding by [~anoop.hbase] up on rb.

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch, 
> HBASE-19123.master.002.patch, HBASE-19123.master.003.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19112) Suspect methods on Cell to be deprecated

2017-11-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256568#comment-16256568
 ] 

Chia-Ping Tsai commented on HBASE-19112:


The {{Cell}}, {{RawCell}}, and {{ExtendedCell}} are shown below.
{code}
@InterfaceAudience.Public
public interface Cell {
  getRowXXX();
  getFamilyXXX();
  getQualifierXXX();
  getTimestamp();
  getValueXXX();
}
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
public interface RawCell extends Cell {
  getType();
  getTagsXXX();
}
@InterfaceAudience.Private
public interface ExtendedCell extends RawCell, SettableSequenceId, SettableTimestamp,
    HeapSize, Cloneable {
  getSequenceId();
  setXXX();
  write();
  ...
}
{code}

bq. What do we propose instead of getType? What would the user use instead?
The type of cells for a normal scan (i.e. a non-raw scan) is always the PUT
type. Also, a raw-scan user can call CellUtil.isXXX() to check the type. Hence,
{{getType}} can be moved from {{Cell}} to {{RawCell}}. We should not expose the
raw byte value of the type to the user.

bq. Why not leave out timestamp? Because it could be HLC timestamp?
It is ok to preserve {{getTimestamp}} in {{Cell}} if we don't change the format
in an upcoming release.

bq. There is no getTag in Cell. There is getTagsArray and getTagsOffset. We
talking about deprecating these?
Yep.

bq. We want to hide sequenceid? It is used in client-side scanning. We'd just
keep it hidden?
Just hidden.

bq. I like saying that CPs can't modify a Cell that HBase has made.
As I see it, {{ExtendedCell}} is not acceptable to expose to CP users because
it is not a read-only interface. My philosophy is that {{Cell}} and {{RawCell}}
are the "view" (i.e. read-only) for normal and advanced users, and we should
use {{ExtendedCell}} rather than {{Cell}} or {{RawCell}} internally. Also,
Put#add(cell) should change to accept {{RawCell}}.
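A tiny sketch of the CellUtil.isXXX() point above, using the existing
isDelete(Cell) helper as the example; the wrapper class is illustrative only.
{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;

public final class RawScanTypeCheck {
  // Rather than reading the raw type byte off the Cell, callers ask CellUtil.
  // In a normal (non-raw) scan every returned cell is effectively a Put; only
  // a raw scan needs a check like this one.
  static boolean isDeleteMarker(Cell cell) {
    return CellUtil.isDelete(cell);
  }
}
{code}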

> Suspect methods on Cell to be deprecated
> 
>
> Key: HBASE-19112
> URL: https://issues.apache.org/jira/browse/HBASE-19112
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> [~chia7712] suggested on the [mailing 
> list|https://lists.apache.org/thread.html/e6de9af26d9b888a358ba48bf74655ccd893573087c032c0fcf01585@%3Cdev.hbase.apache.org%3E]
>  that we have some methods on Cell which should be deprecated for removal:
> * {{#getType()}}
> * {{#getTimestamp()}}
> * {{#getTag()}}
> * {{#getSequenceId()}}
> Let's make a pass over these (and maybe the rest) to make sure that there 
> aren't others which are either implementation details or methods returning 
> now-private-marked classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19123:
--
Attachment: HBASE-19123.master.003.patch

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch, 
> HBASE-19123.master.002.patch, HBASE-19123.master.003.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19148) Edit of default configuration

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256559#comment-16256559
 ] 

stack commented on HBASE-19148:
---

Yeah. It is sort of obnoxiously long. Too safe. What do you think [~zghaobac]?
Let's set it now so there is time to evaluate it before release? Set it to 3? 10?

> Edit of default configuration
> -
>
> Key: HBASE-19148
> URL: https://issues.apache.org/jira/browse/HBASE-19148
> Project: HBase
>  Issue Type: Bug
>  Components: defaults
>Reporter: stack
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> Remove cruft and mythologies. Make descriptions more digestible. Change 
> defaults given experience.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19123:
--
Hadoop Flags: Reviewed
Release Note: This issue removes the 'complete' facility that was in 
ObserverContext. It is no longer possible for a Coprocessor to cut the 
chain-of-invocation and insist its response prevails.
  Status: Patch Available  (was: Open)

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch, 
> HBASE-19123.master.002.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256545#comment-16256545
 ] 

stack commented on HBASE-19123:
---

Thanks [~Apache9]. Let me update the dev discussion.

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch, 
> HBASE-19123.master.002.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256533#comment-16256533
 ] 

Hudson commented on HBASE-19252:


FAILURE: Integrated in Jenkins build HBase-2.0 #865 (See 
[https://builds.apache.org/job/HBase-2.0/865/])
HBASE-19252 Move the transform logic of FilterList into transformCell() 
(openinx: rev 57291108ed44568fe9e39ab4702d8e158c87273e)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java


> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch, HBASE-19252.v1.patch, 
> HBASE-19252.v2.patch, HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1,  we need to remember the return code of the sub-filter for its 
> filterKeyValue().  because only INCLUDE*  ReturnCode,   we need to 
> transformCell for sub-filter.  
> A new boolean array will be introduced in FilterList.  and the cost of 
> maintaining  the boolean array will be less than  the cost of maintaining the 
> two ref of question cell. 
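
For reference, a minimal, self-contained Java sketch of the idea above (hypothetical class and method names; not the committed FilterList code), showing how remembering each sub-filter's ReturnCode in a boolean array lets transformCell() skip non-INCLUDE sub-filters without keeping extra references to the question cell:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.Filter.ReturnCode;

// Hypothetical sketch only; not the committed FilterList implementation.
class FilterListTransformSketch {
  private final List<Filter> subFilters;
  // Remembers, per sub-filter, whether its last filterKeyValue() returned an INCLUDE* code.
  private final boolean[] subFilterIncluded;

  FilterListTransformSketch(List<Filter> subFilters) {
    this.subFilters = subFilters;
    this.subFilterIncluded = new boolean[subFilters.size()];
  }

  ReturnCode filterKeyValue(Cell c) throws IOException {
    ReturnCode rc = ReturnCode.INCLUDE;
    for (int i = 0; i < subFilters.size(); i++) {
      ReturnCode localRc = subFilters.get(i).filterKeyValue(c);
      // Other INCLUDE* codes would be checked here too; two are shown for brevity.
      subFilterIncluded[i] = localRc == ReturnCode.INCLUDE
          || localRc == ReturnCode.INCLUDE_AND_NEXT_COL;
      rc = mergeReturnCode(rc, localRc);
    }
    return rc;
  }

  Cell transformCell(Cell c) throws IOException {
    Cell transformed = c;
    for (int i = 0; i < subFilters.size(); i++) {
      // Only sub-filters that included the cell get to transform it; no extra cell refs kept.
      if (subFilterIncluded[i]) {
        transformed = subFilters.get(i).transformCell(transformed);
      }
    }
    return transformed;
  }

  private ReturnCode mergeReturnCode(ReturnCode current, ReturnCode next) {
    // Placeholder: FilterListWithAND/FilterListWithOR implement the real merge semantics.
    return next;
  }
}
{code}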



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19273) IntegrationTestBulkLoad#installSlowingCoproc() uses read-only HTableDescriptor

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256534#comment-16256534
 ] 

Hudson commented on HBASE-19273:


FAILURE: Integrated in Jenkins build HBase-2.0 #865 (See 
[https://builds.apache.org/job/HBase-2.0/865/])
HBASE-19273 IntegrationTestBulkLoad#installSlowingCoproc() uses (tedyu: rev 
ade66e286861a86f9d7c99f9d6cf78f94b427848)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java


> IntegrationTestBulkLoad#installSlowingCoproc() uses read-only HTableDescriptor
> --
>
> Key: HBASE-19273
> URL: https://issues.apache.org/jira/browse/HBASE-19273
> Project: HBase
>  Issue Type: Test
>Reporter: Romil Choksi
>Assignee: Ted Yu
> Attachments: 19273.v1.txt, 19273.v2.txt
>
>
> [~romil.choksi] reported the following :
> {code}
> 2017-11-15 23:03:04,455 ERROR [main] util.AbstractHBaseTool: Error running 
> command-line tool
> java.lang.UnsupportedOperationException: HTableDescriptor is read-only
> at 
> org.apache.hadoop.hbase.client.ImmutableHTableDescriptor.getDelegateeForModification(ImmutableHTableDescriptor.java:59)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.addCoprocessor(HTableDescriptor.java:710)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.installSlowingCoproc(IntegrationTestBulkLoad.java:215)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:222)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runTestFromCommandLine(IntegrationTestBulkLoad.java:790)
> at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:155)
> {code}
> This is due to read only descriptor being used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18911) Unify Admin and AsyncAdmin's methods name

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256532#comment-16256532
 ] 

Hudson commented on HBASE-18911:


FAILURE: Integrated in Jenkins build HBase-2.0 #865 (See 
[https://builds.apache.org/job/HBase-2.0/865/])
HBASE-18911 Unify Admin and AsyncAdmin's methods name (zghao: rev 
8b30adb834c22a491b1899233b55219befa87167)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncToolAdminApi.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncClusterAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncDecommissionAdminApi.java
* (add) 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestInterfaceAlign.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcedureAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncReplicationAdminApiWithClusters.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java


> Unify Admin and AsyncAdmin's methods name
> -
>
> Key: HBASE-18911
> URL: https://issues.apache.org/jira/browse/HBASE-18911
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18911.master.001.patch, 
> HBASE-18911.master.002.patch, HBASE-18911.master.003.patch, 
> HBASE-18911.master.004.patch
>
>
> Different Methods
> || AsyncAdmin || Admin || unified name ||
> | listTables | listTableDescriptors | listTableDescriptors |
> | getOnlineRegions | getRegions | getRegions |
> | getTableRegions | getRegions | getRegions |
> | getTableDescriptor | getDescriptor | getDescriptor |
> | getRegionLoads | getRegionLoad | getRegionLoads |
> | execProcedureWithRet | execProcedureWithReturn | execProcedureWithReturn |
> | setNormalizerOn | normalizerSwitch | normalizerSwitch |
> | isNormalizerOn | isNormalizerEnabled | isNormalizerEnabled |
> | setBalancerOn | balancerSwitch | balancerSwitch |
> | isBalancerOn | isBalancerEnabled | isBalancerEnabled |
> | setCleanerChoreOn | cleanerChoreSwitch | cleanerChoreSwitch |
> | isCleanerChoreOn | isCleanerChoreEnabled | isCleanerChoreEnabled |
> | setCatalogJanitorOn | catalogJanitorSwitch | catalogJanitorSwitch |
> | isCatalogJanitorOn | isCatalogJanitorEnabled | isCatalogJanitorEnabled |
> | setSplitOn/setMergeOn | splitOrMergeEnabledSwitch | splitSwitch/mergeSwitch 
> |
> | isSplitOn/isMergeOn| isSplitOrMergeEnabled | isSplitEnabled/isMergeEnabled |
> Methods only in AsyncAdmin
> || AsyncAdmin ||
> | majorCompactRegionServer |
> | getMaster |
> | getBackupMasters |
> | getRegionServers |
> Methods only in Admin
> || Admin ||
> | listTableDescriptorsByNamespace |
> | listTableNamesByNamespace |
> | modifyTable |
> | getMasterCoprocessors |
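
A small, hypothetical usage sketch of the unified naming, using listTableDescriptors as the example (class and helper method names other than the Admin/AsyncAdmin calls are illustrative):

{code}
import java.io.IOException;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.TableDescriptor;

// Hypothetical usage sketch: after the rename, both admins expose the same method name.
public class UnifiedAdminNameExample {
  static List<TableDescriptor> listSync(Admin admin) throws IOException {
    return admin.listTableDescriptors();
  }

  static CompletableFuture<List<TableDescriptor>> listAsync(AsyncAdmin asyncAdmin) {
    return asyncAdmin.listTableDescriptors(); // unified name; AsyncAdmin's old name was listTables
  }
}
{code}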



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19148) Edit of default configuration

2017-11-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256489#comment-16256489
 ] 

Guanghao Zhang commented on HBASE-19148:


The default value of hbase.client.retries.number is 35. Can this be a smaller 
value?
1. For our unit tests, if we don't set this, the UT may hang for a very long time.
2. In our production use case, some users didn't set this to a small value, so 
some operations hang for a long time without throwing an exception. That makes it 
hard for us to debug what happened.
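
A minimal, hypothetical Java sketch of overriding the retry count on the client side (the property name is the standard hbase.client.retries.number; the value 3 is only an illustration, not a recommended default):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical sketch: overriding hbase.client.retries.number on the client side.
public class SmallClientRetriesExample {
  public static Configuration smallRetriesConf() {
    Configuration conf = HBaseConfiguration.create();
    // The default under discussion is 35; 3 here is only an illustrative "small" value.
    conf.setInt("hbase.client.retries.number", 3);
    return conf;
  }
}
{code}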

> Edit of default configuration
> -
>
> Key: HBASE-19148
> URL: https://issues.apache.org/jira/browse/HBASE-19148
> Project: HBase
>  Issue Type: Bug
>  Components: defaults
>Reporter: stack
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> Remove cruft and mythologies. Make descriptions more digestible. Change 
> defaults given experience.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19092) Make Tag IA.LimitedPrivate and expose for CPs

2017-11-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256493#comment-16256493
 ] 

ramkrishna.s.vasudevan commented on HBASE-19092:


bq.We still need TagUtil?
Ya. Some methods are totally internal. I don't think they are of any use to CPs 
either.
bq.Should CPs know about CellBuilder types?
BuilderTypes are public. Ok, on second thought I will remove the type and always 
pass DEEP_COPY.
bq.Could the CPEnv know what type to return? i.e. if context is offheap 
read/write, then offheap Cell?
The CPEnv will now always do DEEP_COPY, so it will be an on-heap cell only. Currently 
ExtendedCellBuilder does not support BBs; it has only byte[].
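
A minimal, hypothetical sketch of what "always DEEP_COPY" looks like from a CP's point of view (the builder method names and Cell.Type are assumed from the CellBuilderFactory/CellBuilderType API; row, family, qualifier and value here are placeholders):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellBuilderFactory;
import org.apache.hadoop.hbase.CellBuilderType;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch: a DEEP_COPY builder copies the supplied byte[]s,
// so the resulting Cell handed to/returned from a CP is always on-heap.
public class DeepCopyCellSketch {
  public static Cell exampleCell() {
    return CellBuilderFactory.create(CellBuilderType.DEEP_COPY)
        .setRow(Bytes.toBytes("row1"))
        .setFamily(Bytes.toBytes("f"))
        .setQualifier(Bytes.toBytes("q"))
        .setTimestamp(System.currentTimeMillis())
        .setType(Cell.Type.Put)
        .setValue(Bytes.toBytes("v"))
        .build();
  }
}
{code}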


> Make Tag IA.LimitedPrivate and expose for CPs
> -
>
> Key: HBASE-19092
> URL: https://issues.apache.org/jira/browse/HBASE-19092
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19092-branch-2.patch, 
> HBASE-19092-branch-2_5.patch, HBASE-19092-branch-2_5.patch, 
> HBASE-19092.branch-2.0.02.patch, HBASE-19092_001-branch-2.patch, 
> HBASE-19092_001.patch, HBASE-19092_002-branch-2.patch, HBASE-19092_002.patch
>
>
> We need to make tags as LimitedPrivate as some use cases are trying to use 
> tags like timeline server. The same topic was discussed in dev@ and also in 
> HBASE-18995.
> Shall we target this for beta1 - cc [~saint@gmail.com].
> So once we do this all related Util methods and APIs should also move to 
> LimitedPrivate Util classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256480#comment-16256480
 ] 

Hadoop QA commented on HBASE-15320:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
57s{color} | {color:red} root: The patch generated 2 new + 0 unchanged - 0 
fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
8s{color} | {color:red} hbase-kafka-proxy: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
49m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 46s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}213m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-15320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898103/HBASE-15320.master.6.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
shadedjars  hadoopcheck 

[jira] [Commented] (HBASE-19276) RegionPlan should correctly implement equals and hashCode

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256467#comment-16256467
 ] 

Hudson commented on HBASE-19276:


FAILURE: Integrated in Jenkins build HBase-1.5 #161 (See 
[https://builds.apache.org/job/HBase-1.5/161/])
HBASE-19276 RegionPlan should correctly implement equals and hashCode 
(apurtell: rev 1bde8656b121f7793bcd3acf5a97623536e9cedb)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlan.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java


> RegionPlan should correctly implement equals and hashCode
> -
>
> Key: HBASE-19276
> URL: https://issues.apache.org/jira/browse/HBASE-19276
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: stack
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: HBASE-19276-branch-1.patch, 
> HBASE-19276.branch-1.001.patch, HBASE-19276.master.001.patch, 
> HBASE-19276.master.002.patch, HBASE-19276.patch
>
>
> error-prone identified dodgy code in AssignmentManager where we are relying 
> on reference (object) equality to do the right thing, and are getting lucky, 
> because if we properly used equals() the result is wrong, because RegionPlan 
> does not correctly implement equals and hashCode according to the JDK 
> contracts for same. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19288) Intermittent test failure in TestHStore.testRunDoubleMemStoreCompactors

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256442#comment-16256442
 ] 

Hadoop QA commented on HBASE-19288:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  6m 
40s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
63m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m 
30s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19288 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898092/19288.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 81eaf3b0cafd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5b13b624bb |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9880/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9880/console |
| Powered by | Apache 

[jira] [Assigned] (HBASE-19274) Log IOException when unable to determine the size of committed file

2017-11-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-19274:
--

Assignee: Guangxu Cheng

> Log IOException when unable to determine the size of committed file
> ---
>
> Key: HBASE-19274
> URL: https://issues.apache.org/jira/browse/HBASE-19274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
>Priority: Trivial
> Attachments: HBASE-19274.master.001.patch
>
>
> During troubleshooting of slow response, I saw the following in region server 
> log:
> {code}
> 2017-10-26 14:03:53,080 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed to find the size of hfile 
> hdfs://BETA/hbase/data/default/beta_b_history/e514111fae9d7ffc38ed48ad72fa197f/d/04d7c9fce73d4197be114448b1eb295a_SeqId_3766_
> {code}
> Here is related code:
> {code}
> } catch (IOException e) {
>   LOG.warn("Failed to find the size of hfile " + 
> commitedStoreFile);
> {code}
> The exception should also be logged to facilitate debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19274) Log IOException when unable to determine the size of committed file

2017-11-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19274:
---
Status: Patch Available  (was: Open)

> Log IOException when unable to determine the size of committed file
> ---
>
> Key: HBASE-19274
> URL: https://issues.apache.org/jira/browse/HBASE-19274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Trivial
> Attachments: HBASE-19274.master.001.patch
>
>
> During troubleshooting of slow response, I saw the following in region server 
> log:
> {code}
> 2017-10-26 14:03:53,080 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed to find the size of hfile 
> hdfs://BETA/hbase/data/default/beta_b_history/e514111fae9d7ffc38ed48ad72fa197f/d/04d7c9fce73d4197be114448b1eb295a_SeqId_3766_
> {code}
> Here is related code:
> {code}
> } catch (IOException e) {
>   LOG.warn("Failed to find the size of hfile " + 
> commitedStoreFile);
> {code}
> The exception should also be logged to facilitate debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256429#comment-16256429
 ] 

Duo Zhang commented on HBASE-19123:
---

+1 unless there are actual users who use this feature.

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch, 
> HBASE-19123.master.002.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19268) Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256423#comment-16256423
 ] 

Hudson commented on HBASE-19268:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4065 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4065/])
HBASE-19268 Enable Replica tests that were disabled by Proc-V2 AM in (stack: 
rev f7212aaebaa024e795707d7af60eb24760e2c55a)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer2.java


> Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614
> 
>
> Key: HBASE-19268
> URL: https://issues.apache.org/jira/browse/HBASE-19268
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19268.master.001.patch, 
> HBASE-19268.master.001.patch
>
>
> Reenable replica tests disabled so could land AMv2.
> In particular, reenable...
> Disabled testRegionReplicasOnMidClusterHighReplication in 
> TestStochasticLoadBalancer2
> Disabled testFlushAndCompactionsInPrimary in TestRegionReplicas
> Enabling these tests used to be part of HBASE-18352 but we broke them out of 
> there so HBASE-18352 is only about fixing/reenabling 
> TestMasterOperationsForRegionReplicas#testCreateTableWithMultipleReplicas



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19181) LogRollBackupSubprocedure will fail if we use AsyncFSWAL instead of FSHLog

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256424#comment-16256424
 ] 

Hudson commented on HBASE-19181:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4065 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4065/])
HBASE-19181 LogRollBackupSubprocedure will fail if we use AsyncFSWAL (stack: 
rev 5b13b624bb3487c6a6805fd5368227ccbe357c7b)
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/regionserver/LogRollBackupSubprocedure.java


> LogRollBackupSubprocedure will fail if we use AsyncFSWAL instead of FSHLog
> --
>
> Key: HBASE-19181
> URL: https://issues.apache.org/jira/browse/HBASE-19181
> Project: HBase
>  Issue Type: Bug
>  Components: backup
>Reporter: Duo Zhang
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19181-v1.patch, HBASE-19181-v2.patch
>
>
> In the RSRollLogTask it will cast a WAL to FSHLog.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256425#comment-16256425
 ] 

Hudson commented on HBASE-19252:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4065 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4065/])
HBASE-19252 Move the transform logic of FilterList into transformCell() 
(openinx: rev d726492838729a6a0312baa826ea0999253e81db)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListBase.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithAND.java


> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch, HBASE-19252.v1.patch, 
> HBASE-19252.v2.patch, HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1,  we need to remember the return code of the sub-filter for its 
> filterKeyValue().  because only INCLUDE*  ReturnCode,   we need to 
> transformCell for sub-filter.  
> A new boolean array will be introduced in FilterList.  and the cost of 
> maintaining  the boolean array will be less than  the cost of maintaining the 
> two ref of question cell. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19276) RegionPlan should correctly implement equals and hashCode

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256422#comment-16256422
 ] 

Hudson commented on HBASE-19276:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4065 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4065/])
HBASE-19276 RegionPlan should correctly implement equals and hashCode (stack: 
rev 6f9651b41741ac8c0d3ea0bb48cdeebf2101585e)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlan.java


> RegionPlan should correctly implement equals and hashCode
> -
>
> Key: HBASE-19276
> URL: https://issues.apache.org/jira/browse/HBASE-19276
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: stack
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: HBASE-19276-branch-1.patch, 
> HBASE-19276.branch-1.001.patch, HBASE-19276.master.001.patch, 
> HBASE-19276.master.002.patch, HBASE-19276.patch
>
>
> error-prone identified dodgy code in AssignmentManager where we are relying 
> on reference (object) equality to do the right thing, and are getting lucky, 
> because if we properly used equals() the result is wrong, because RegionPlan 
> does not correctly implement equals and hashCode according to the JDK 
> contracts for same. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256421#comment-16256421
 ] 

Hadoop QA commented on HBASE-19163:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
52m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 79m 
39s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898095/HBASE-19163.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux af8584ae64f1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5b13b624bb |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9881/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9881/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: 

[jira] [Updated] (HBASE-19274) Log IOException when unable to determine the size of committed file

2017-11-16 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-19274:
--
Attachment: HBASE-19274.master.001.patch

Attached a simple patch. Thanks.

> Log IOException when unable to determine the size of committed file
> ---
>
> Key: HBASE-19274
> URL: https://issues.apache.org/jira/browse/HBASE-19274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Trivial
> Attachments: HBASE-19274.master.001.patch
>
>
> During troubleshooting of slow response, I saw the following in region server 
> log:
> {code}
> 2017-10-26 14:03:53,080 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed to find the size of hfile 
> hdfs://BETA/hbase/data/default/beta_b_history/e514111fae9d7ffc38ed48ad72fa197f/d/04d7c9fce73d4197be114448b1eb295a_SeqId_3766_
> {code}
> Here is related code:
> {code}
> } catch (IOException e) {
>   LOG.warn("Failed to find the size of hfile " + 
> commitedStoreFile);
> {code}
> The exception should also be logged to facilitate debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19274) Log IOException when unable to determine the size of committed file

2017-11-16 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256405#comment-16256405
 ] 

Guangxu Cheng commented on HBASE-19274:
---

I also encountered a similar issue. The exception message can tell us why we 
failed to find the size of the hfile and makes it easier to debug.
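
A minimal, hypothetical sketch of the proposed change (the helper class and method are illustrative; the point is passing the IOException to LOG.warn so the cause lands in the region server log):

{code}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch of the proposed change: pass the IOException to LOG.warn
// so the cause of the failure is captured, instead of only the file name.
public class CommittedFileSizeLogging {
  private static final Log LOG = LogFactory.getLog(CommittedFileSizeLogging.class);

  static long sizeOrMinusOne(FileSystem fs, Path committedStoreFile) {
    try {
      return fs.getFileStatus(committedStoreFile).getLen();
    } catch (IOException e) {
      // Before: LOG.warn("Failed to find the size of hfile " + committedStoreFile);
      LOG.warn("Failed to find the size of hfile " + committedStoreFile, e);
      return -1;
    }
  }
}
{code}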

> Log IOException when unable to determine the size of committed file
> ---
>
> Key: HBASE-19274
> URL: https://issues.apache.org/jira/browse/HBASE-19274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Trivial
>
> During troubleshooting of slow response, I saw the following in region server 
> log:
> {code}
> 2017-10-26 14:03:53,080 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed to find the size of hfile 
> hdfs://BETA/hbase/data/default/beta_b_history/e514111fae9d7ffc38ed48ad72fa197f/d/04d7c9fce73d4197be114448b1eb295a_SeqId_3766_
> {code}
> Here is related code:
> {code}
> } catch (IOException e) {
>   LOG.warn("Failed to find the size of hfile " + 
> commitedStoreFile);
> {code}
> The exception should also be logged to facilitate debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19216) Use procedure to execute replication peer related operations

2017-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256394#comment-16256394
 ] 

Duo Zhang commented on HBASE-19216:
---

Yes, we need something like a ReplicationPeers to keep all the peers at the master 
side, and also prevent concurrent modification of a given peer.

But I would like to make the reportProcedureDone more general. So the procedure id 
will always be present, along with a serialized protobuf message. We can encode 
the peer id in the protobuf message?

Anyway, it seems that we need to find the stored procedure event if we want to 
wake up a procedure. Got it. Let me try to implement a create-peer procedure.

Thanks sir, that helps a lot.

> Use procedure to execute replication peer related operations
> 
>
> Key: HBASE-19216
> URL: https://issues.apache.org/jira/browse/HBASE-19216
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>
> When building the basic framework for HBASE-19064, I found that the 
> enable/disable peer is built upon the watcher of zk.
> The problem of using watcher is that, you do not know the exact time when all 
> RSes in the cluster have done the change, it is a 'eventually done'. 
> And for synchronous replication, when changing the state of a replication 
> peer, we need to know the exact time as we can only enable read/write after 
> that time. So I think we'd better use procedure to do this. Change the flag 
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as a 
> storage, so it will be easy to implement another replication peer storage to 
> replace zk so that we can reduce the dependency on zk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256390#comment-16256390
 ] 

Zheng Hu commented on HBASE-19252:
--

Pushed into branch-2. 

> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch, HBASE-19252.v1.patch, 
> HBASE-19252.v2.patch, HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1,  we need to remember the return code of the sub-filter for its 
> filterKeyValue().  because only INCLUDE*  ReturnCode,   we need to 
> transformCell for sub-filter.  
> A new boolean array will be introduced in FilterList.  and the cost of 
> maintaining  the boolean array will be less than  the cost of maintaining the 
> two ref of question cell. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19123:
--
Attachment: HBASE-19123.master.002.patch

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch, 
> HBASE-19123.master.002.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19123:
--
Attachment: HBASE-19123.master.001.patch

> Purge 'complete' support from Coprocesor Observers
> --
>
> Key: HBASE-19123
> URL: https://issues.apache.org/jira/browse/HBASE-19123
> Project: HBase
>  Issue Type: Task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19123.master.001.patch
>
>
> Up on dev list under '[DISCUSSION] Removing the bypass semantic from the 
> Coprocessor APIs', we are discussing purge of 'complete'. Unless objection, 
> lets purge for beta-1.
> [~andrew.purt...@gmail.com] says the following up on the dev list:
> It would simplify the theory of operation for coprocessors if we can assume 
> either the entire chain will complete or one of the coprocessors in the chain 
> will throw an exception that not only terminates processing of the rest of 
> the chain but also the operation in progress.
> Security coprocessors interrupt processing by throwing an exception, which is 
> meant to propagate all the way back to the user.
> I think it's more than fair to ask the same question about 'complete' as we 
> did about 'bypass': Does anyone use it? Is it needed?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19269) Reenable TestShellRSGroups

2017-11-16 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256385#comment-16256385
 ] 

Guangxu Cheng commented on HBASE-19269:
---

Boss, these ruby warnings are related. But I can't seem to understand why these 
definitions can't be found even though the tests run fine. :(
I found similar usage in other places. I think the patch is good to go.

> Reenable TestShellRSGroups
> --
>
> Key: HBASE-19269
> URL: https://issues.apache.org/jira/browse/HBASE-19269
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: Guangxu Cheng
> Fix For: 2.0.0
>
> Attachments: HBASE-19269.master.001.patch, 
> HBASE-19269.master.002.patch
>
>
> It was disabled by the parent issue because RSGroups was failing. RSGroups 
> now works but this test is still failling. Need to dig in (signal from these 
> jruby tests is murky).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Release Note: 
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. The following code is the primary table snapshot mapper 
initialization: 

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName,     // The name of the snapshot (of a table) to read from
  scan,             // Scan instance to control CF and attribute selection
  mapper,           // mapper
  outputKeyClass,   // mapper output key
  outputValueClass, // mapper output value
  job,              // The current job to adjust
  true,             // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir        // a temporary directory to copy the snapshot files into
);

The job runs only one map task per region in the table snapshot. With this 
feature, clients can specify the desired number of mappers when initializing the 
table snapshot mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName,     // The name of the snapshot (of a table) to read from
  scan,             // Scan instance to control CF and attribute selection
  mapper,           // mapper
  outputKeyClass,   // mapper output key
  outputValueClass, // mapper output value
  job,              // The current job to adjust
  true,             // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,       // a temporary directory to copy the snapshot files into
  splitAlgorithm,   // split algorithm; current split algorithms support RegionSplitter.UniformSplit() and RegionSplitter.HexStringSplit()
  n                 // how many input splits to generate per region
);

  was:
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. The following code is primary table snapshot mapper 
initializatio: 

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
);

The job only run one map task per region in the table snapshot. With this 
feature, client can specify the desired num of mappers when init table snapshot 
mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);


> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: 

[jira] [Comment Edited] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256374#comment-16256374
 ] 

xinxin fan edited comment on HBASE-18090 at 11/17/17 2:38 AM:
--

Thank you [~stack], I have added a release note. Another thing: I notice that 
there are many build failures recently, especially for branch-1 patches, here 
https://builds.apache.org/job/PreCommit-HBASE-Build. Any reason?


was (Author: xinxin fan):
Thank you stack, i have add a release note. another thing, i notice that there 
are many build fails recently, especially for branch-1 patch, here 
https://builds.apache.org/job/PreCommit-HBASE-Build. any reason?

> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256377#comment-16256377
 ] 

Zheng Hu commented on HBASE-19252:
--

bq. I thought this should be pushed to branch-2, too? 

Indeed so. Will commit to branch-2. Thanks.

> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch, HBASE-19252.v1.patch, 
> HBASE-19252.v2.patch, HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1, we need to remember the return code each sub-filter gave from its 
> filterKeyValue(), because we only need to call transformCell() on a sub-filter 
> when its return code was INCLUDE*.
> A new boolean array will be introduced in FilterList, and the cost of 
> maintaining that array is less than the cost of keeping the two references to 
> the question cell.
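
As a rough illustration of the bookkeeping described above, the standalone 
sketch below uses stripped-down stand-in types (not the real 
org.apache.hadoop.hbase.filter classes): filterKeyValue() records per sub-filter 
whether it returned an INCLUDE* code, and transformCell() consults that array 
instead of holding extra cell references.

{code}
import java.util.ArrayList;
import java.util.List;

// Stand-ins for illustration only; not the real HBase classes.
interface Cell { }

interface SimpleFilter {
  enum ReturnCode { INCLUDE, INCLUDE_AND_NEXT_COL, SKIP, NEXT_COL, NEXT_ROW }
  ReturnCode filterKeyValue(Cell c);
  Cell transformCell(Cell c);
}

class SimpleFilterList implements SimpleFilter {
  private final List<SimpleFilter> filters = new ArrayList<>();
  // Remembers, for the cell last seen by filterKeyValue(), whether each
  // sub-filter returned an INCLUDE* code. Replaces the two saved cell refs.
  private boolean[] subFilterIncluded = new boolean[0];

  void addFilter(SimpleFilter f) { filters.add(f); }

  private static boolean isInclude(ReturnCode rc) {
    return rc == ReturnCode.INCLUDE || rc == ReturnCode.INCLUDE_AND_NEXT_COL;
  }

  @Override
  public ReturnCode filterKeyValue(Cell c) {
    if (subFilterIncluded.length != filters.size()) {
      subFilterIncluded = new boolean[filters.size()];
    }
    ReturnCode rc = ReturnCode.INCLUDE;  // MUST_PASS_ALL-style merge, simplified
    for (int i = 0; i < filters.size(); i++) {
      ReturnCode sub = filters.get(i).filterKeyValue(c);
      subFilterIncluded[i] = isInclude(sub);
      if (!isInclude(sub)) {
        rc = sub;  // the real FilterList merges return codes more carefully
      }
    }
    return rc;
  }

  @Override
  public Cell transformCell(Cell c) {
    Cell transformed = c;
    for (int i = 0; i < filters.size(); i++) {
      // Only sub-filters that included the cell get to transform it.
      if (subFilterIncluded[i]) {
        transformed = filters.get(i).transformCell(transformed);
      }
    }
    return transformed;
  }
}
{code}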



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256374#comment-16256374
 ] 

xinxin fan edited comment on HBASE-18090 at 11/17/17 2:37 AM:
--

Thank you stack, I have added a release note. Another thing: I notice there 
have been many build failures recently, especially for branch-1 patches, here 
https://builds.apache.org/job/PreCommit-HBASE-Build. Any reason?


was (Author: xinxin fan):
Thank you @stack, i have add a release note. another thing, i notice that there 
are many build fails recently, especially for branch-1 patch, here 
https://builds.apache.org/job/PreCommit-HBASE-Build. any reason?

> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-19252:
-
Attachment: HBASE-19252-branch-1.4.v1.patch

> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch, HBASE-19252.v1.patch, 
> HBASE-19252.v2.patch, HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1, we need to remember the return code each sub-filter gave from its 
> filterKeyValue(), because we only need to call transformCell() on a sub-filter 
> when its return code was INCLUDE*.
> A new boolean array will be introduced in FilterList, and the cost of 
> maintaining that array is less than the cost of keeping the two references to 
> the question cell.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256374#comment-16256374
 ] 

xinxin fan commented on HBASE-18090:


Thank you @stack, I have added a release note. Another thing: I notice there 
have been many build failures recently, especially for branch-1 patches, here 
https://builds.apache.org/job/PreCommit-HBASE-Build. Any reason?

> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19114) Split out o.a.h.h.zookeeper from hbase-server and hbase-client

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256371#comment-16256371
 ] 

Hadoop QA commented on HBASE-19114:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 78 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  6m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} hbase-client: The patch generated 0 new + 0 
unchanged - 199 fixed = 0 total (was 199) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} hbase-zookeeper: The patch generated 240 new + 0 
unchanged - 0 fixed = 240 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
15s{color} | {color:red} root: The patch generated 261 new + 1498 unchanged - 
296 fixed = 1759 total (was 1794) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} The patch hbase-client-project passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} The patch hbase-shaded-client-project passed 
checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} The patch hbase-assembly passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch hbase-it passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} hbase-replication: The patch generated 3 new + 24 
unchanged - 11 fixed = 27 total (was 35) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch hbase-rsgroup passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
3s{color} | {color:red} hbase-server: The patch generated 18 new + 1442 
unchanged - 86 fixed = 1460 total (was 1528) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} The patch hbase-shell passed checkstyle {color} |
| {color:green}+1{color} | {color:green} rubocop 

[jira] [Commented] (HBASE-19276) RegionPlan should correctly implement equals and hashCode

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256365#comment-16256365
 ] 

Hudson commented on HBASE-19276:


FAILURE: Integrated in Jenkins build HBase-1.4 #1021 (See 
[https://builds.apache.org/job/HBase-1.4/1021/])
HBASE-19276 RegionPlan should correctly implement equals and hashCode 
(apurtell: rev 12d7f08317859bc37303038cb2eed7702de24eae)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlan.java


> RegionPlan should correctly implement equals and hashCode
> -
>
> Key: HBASE-19276
> URL: https://issues.apache.org/jira/browse/HBASE-19276
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: stack
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: HBASE-19276-branch-1.patch, 
> HBASE-19276.branch-1.001.patch, HBASE-19276.master.001.patch, 
> HBASE-19276.master.002.patch, HBASE-19276.patch
>
>
> error-prone identified dodgy code in AssignmentManager where we are relying 
> on reference (object) equality to do the right thing, and are getting lucky, 
> because if we properly used equals() the result is wrong, because RegionPlan 
> does not correctly implement equals and hashCode according to the JDK 
> contracts for same. 
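
For reference, the JDK contract referred to above is value-based equality with 
a hashCode consistent with equals. The sketch below shows the shape of such a 
pair; the class name and the field set compared here are simplified 
placeholders, not necessarily the fields the committed patch compares.

{code}
import java.util.Objects;

// Simplified stand-in for illustration; not the actual o.a.h.h.master.RegionPlan.
final class SimpleRegionPlan {
  private final String regionEncodedName;
  private final String sourceServer;  // may be null for a fresh assignment
  private final String destServer;    // may be null when not yet decided

  SimpleRegionPlan(String regionEncodedName, String sourceServer, String destServer) {
    this.regionEncodedName = regionEncodedName;
    this.sourceServer = sourceServer;
    this.destServer = destServer;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof SimpleRegionPlan)) {
      return false;
    }
    SimpleRegionPlan other = (SimpleRegionPlan) o;
    // Value equality over the same fields used by hashCode, per the JDK contract.
    return Objects.equals(regionEncodedName, other.regionEncodedName)
        && Objects.equals(sourceServer, other.sourceServer)
        && Objects.equals(destServer, other.destServer);
  }

  @Override
  public int hashCode() {
    return Objects.hash(regionEncodedName, sourceServer, destServer);
  }
}
{code}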



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Release Note: 
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. The following code shows the original table snapshot mapper 
initialization:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
);

That job runs only one map task per region in the table snapshot. With this 
feature, the client can specify the desired number of mappers when initializing 
the table snapshot mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);

  was:
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. 

The following code is primary table snapshot mapper initializatio: 
TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
);

The job only run one map task per region in the table snapshot. With this 
feature, client can specify the desired num of mappers when init table snapshot 
mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);


> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  

[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Release Note: 
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. 

The following code shows the original table snapshot mapper initialization:
TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
);

That job runs only one map task per region in the table snapshot. With this 
feature, the client can specify the desired number of mappers when initializing 
the table snapshot mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);

  was:
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. With this feature, client can specify the desired num of 
mappers when init table snapshot mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);


> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19181) LogRollBackupSubprocedure will fail if we use AsyncFSWAL instead of FSHLog

2017-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256358#comment-16256358
 ] 

Duo Zhang commented on HBASE-19181:
---

Belated +1.

> LogRollBackupSubprocedure will fail if we use AsyncFSWAL instead of FSHLog
> --
>
> Key: HBASE-19181
> URL: https://issues.apache.org/jira/browse/HBASE-19181
> Project: HBase
>  Issue Type: Bug
>  Components: backup
>Reporter: Duo Zhang
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19181-v1.patch, HBASE-19181-v2.patch
>
>
> In the RSRollLogTask it will cast a WAL to FSHLog.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Release Note: 
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. With this feature, the client can specify the desired number of 
mappers when initializing the table snapshot mapper job:

TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);

  was:
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. With this feature, client can specify the desired num of 
mappers when init table snapshot mapper job:
{code}
TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);
{code}


> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Release Note: 
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. With this feature, the client can specify the desired number of 
mappers when initializing the table snapshot mapper job:
{code}
TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);
{code}

  was:
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. With this feature, client can specify the desired num of 
mappers when init table snapshot mapper job:

{code}
TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);
{code}


> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Release Note: 
In this task, we make it possible to run multiple mappers per region in the 
table snapshot. With this feature, the client can specify the desired number of 
mappers when initializing the table snapshot mapper job:

{code}
TableMapReduceUtil.initTableSnapshotMapperJob(
  snapshotName, // The name of the snapshot (of a 
table) to read from
  scan,  // Scan instance to 
control CF and attribute selection
  mapper, // mapper
  outputKeyClass,   // mapper output key 
  outputValueClass,// mapper output value
  job,   // The current job to 
adjust
  true, // upload HBase jars and 
jars for any of the configured job classes via the distributed cache (tmpjars)
  restoreDir,   // a temporary directory to 
copy the snapshot files into
  splitAlgorithm, // splitAlgo algorithm to split, 
current split algorithms only support RegionSplitter.UniformSplit() and 
RegionSplitter.HexStringSplit()
  n // how many input splits to 
generate per one region
);
{code}

> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19276) RegionPlan should correctly implement equals and hashCode

2017-11-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19276:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-1.4 and branch-1

> RegionPlan should correctly implement equals and hashCode
> -
>
> Key: HBASE-19276
> URL: https://issues.apache.org/jira/browse/HBASE-19276
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: stack
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: HBASE-19276-branch-1.patch, 
> HBASE-19276.branch-1.001.patch, HBASE-19276.master.001.patch, 
> HBASE-19276.master.002.patch, HBASE-19276.patch
>
>
> error-prone identified dodgy code in AssignmentManager where we are relying 
> on reference (object) equality to do the right thing, and are getting lucky, 
> because if we properly used equals() the result is wrong, because RegionPlan 
> does not correctly implement equals and hashCode according to the JDK 
> contracts for same. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19268) Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256326#comment-16256326
 ] 

Hudson commented on HBASE-19268:


FAILURE: Integrated in Jenkins build HBase-2.0 #864 (See 
[https://builds.apache.org/job/HBase-2.0/864/])
HBASE-19268 Enable Replica tests that were disabled by Proc-V2 AM in (stack: 
rev 9fecb3b2c8829ae614ba2724b82778cfab34f5ec)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer2.java


> Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614
> 
>
> Key: HBASE-19268
> URL: https://issues.apache.org/jira/browse/HBASE-19268
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19268.master.001.patch, 
> HBASE-19268.master.001.patch
>
>
> Reenable replica tests disabled so could land AMv2.
> In particular, reenable...
> Disabled testRegionReplicasOnMidClusterHighReplication in 
> TestStochasticLoadBalancer2
> Disabled testFlushAndCompactionsInPrimary in TestRegionReplicas
> Enabling these tests used to be part of HBASE-18352 but we broke them out of 
> there so HBASE-18352 is only about fixing/reenabling 
> TestMasterOperationsForRegionReplicas#testCreateTableWithMultipleReplicas



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19181) LogRollBackupSubprocedure will fail if we use AsyncFSWAL instead of FSHLog

2017-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256327#comment-16256327
 ] 

Hudson commented on HBASE-19181:


FAILURE: Integrated in Jenkins build HBase-2.0 #864 (See 
[https://builds.apache.org/job/HBase-2.0/864/])
HBASE-19181 LogRollBackupSubprocedure will fail if we use AsyncFSWAL (stack: 
rev f7ce3fd48a6bf82159788d78427d03a7de2df390)
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/regionserver/LogRollBackupSubprocedure.java


> LogRollBackupSubprocedure will fail if we use AsyncFSWAL instead of FSHLog
> --
>
> Key: HBASE-19181
> URL: https://issues.apache.org/jira/browse/HBASE-19181
> Project: HBase
>  Issue Type: Bug
>  Components: backup
>Reporter: Duo Zhang
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19181-v1.patch, HBASE-19181-v2.patch
>
>
> In the RSRollLogTask it will cast a WAL to FSHLog.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Description: 
TableSnapshotInputFormat runs one map task per region in the table snapshot. 
This places unnecessary restriction that the region layout of the original 
table needs to take the processing resources available to MR job into 
consideration. Allowing to run multiple mappers per region (assuming reasonably 
even key distribution) would be useful.


  was:
TableSnapshotInputFormat runs one map task per region in the table snapshot. 
This places unnecessary restriction that the region layout of the original 
table needs to take the processing resources available to MR job into 
consideration. Allowing to run multiple mappers per region (assuming reasonably 
even key distribution) would be useful.

With this feature, client can specify the desired num of mappers when init 
table snapshot mapper job:

{code}
TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName,)
{code}



> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-18805) Unify Admin and AsyncAdmin

2017-11-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-18805.

Resolution: Fixed

All sub-tasks done.

> Unify Admin and AsyncAdmin
> --
>
> Key: HBASE-18805
> URL: https://issues.apache.org/jira/browse/HBASE-18805
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Balazs Meszaros
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
>
> Admin and AsyncAdmin differ some places:
> - some methods missing from AsyncAdmin (e.g. methods with String regex),
> - some methods have different names (listTables vs listTableDescriptors),
> - some method parameters are different (e.g. AsyncAdmin has Optional<> 
> parameters),
> - AsyncAdmin returns Lists instead of arrays (e.g. listTableNames),
> - unify Javadoc comments,
> - ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-18912) Update Admin methods to return Lists instead of arrays

2017-11-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-18912.

Resolution: Won't Fix

We would need to deprecate too many old methods. So unless we can find better 
method names or have a good reason to deprecate the old methods, I don't think 
we should change the return type from array to List alone... Resolving this as 
Won't Fix. Thanks.

> Update Admin methods to return Lists instead of arrays
> --
>
> Key: HBASE-18912
> URL: https://issues.apache.org/jira/browse/HBASE-18912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18911) Unify Admin and AsyncAdmin's methods name

2017-11-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18911:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks all for reviewing.

> Unify Admin and AsyncAdmin's methods name
> -
>
> Key: HBASE-18911
> URL: https://issues.apache.org/jira/browse/HBASE-18911
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18911.master.001.patch, 
> HBASE-18911.master.002.patch, HBASE-18911.master.003.patch, 
> HBASE-18911.master.004.patch
>
>
> Different Methods
> || AsyncAdmin || Admin || unified name ||
> | listTables | listTableDescriptors | listTableDescriptors |
> | getOnlineRegions | getRegions | getRegions |
> | getTableRegions | getRegions | getRegions |
> | getTableDescriptor | getDescriptor | getDescriptor |
> | getRegionLoads | getRegionLoad | getRegionLoads |
> | execProcedureWithRet | execProcedureWithReturn | execProcedureWithReturn |
> | setNormalizerOn | normalizerSwitch | normalizerSwitch |
> | isNormalizerOn | isNormalizerEnabled | isNormalizerEnabled |
> | setBalancerOn | balancerSwitch | balancerSwitch |
> | isBalancerOn | isBalancerEnabled | isBalancerEnabled |
> | setCleanerChoreOn | cleanerChoreSwitch | cleanerChoreSwitch |
> | isCleanerChoreOn | isCleanerChoreEnabled | isCleanerChoreEnabled |
> | setCatalogJanitorOn | catalogJanitorSwitch | catalogJanitorSwitch |
> | isCatalogJanitorOn | isCatalogJanitorEnabled | isCatalogJanitorEnabled |
> | setSplitOn/setMergeOn | splitOrMergeEnabledSwitch | splitSwitch/mergeSwitch 
> |
> | isSplitOn/isMergeOn| isSplitOrMergeEnabled | isSplitEnabled/isMergeEnabled |
> Methods only in AsyncAdmin
> || AsyncAdmin ||
> | majorCompactRegionServer |
> | getMaster |
> | getBackupMasters |
> | getRegionServers |
> Methods only in Admin
> || Admin ||
> | listTableDescriptorsByNamespace |
> | listTableNamesByNamespace |
> | modifyTable |
> | getMasterCoprocessors |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256308#comment-16256308
 ] 

Guanghao Zhang commented on HBASE-19252:


I thought this should be pushed to branch-2, too?

> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252.v1.patch, HBASE-19252.v2.patch, 
> HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1, we need to remember the return code each sub-filter gave from its 
> filterKeyValue(), because we only need to call transformCell() on a sub-filter 
> when its return code was INCLUDE*.
> A new boolean array will be introduced in FilterList, and the cost of 
> maintaining that array is less than the cost of keeping the two references to 
> the question cell.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19252:
---
Fix Version/s: (was: 2.0.0)
   2.0.0-beta-1
   3.0.0

> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252.v1.patch, HBASE-19252.v2.patch, 
> HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1, we need to remember the return code each sub-filter gave from its 
> filterKeyValue(), because we only need to call transformCell() on a sub-filter 
> when its return code was INCLUDE*.
> A new boolean array will be introduced in FilterList, and the cost of 
> maintaining that array is less than the cost of keeping the two references to 
> the question cell.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256300#comment-16256300
 ] 

stack commented on HBASE-18090:
---

Patch is running here 
https://builds.apache.org/view/H-L/view/HBase/job/PreCommit-HBASE-Build/9875/console

> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.
> With this feature, client can specify the desired num of mappers when init 
> table snapshot mapper job:
> {code}
> TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName,)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Description: 
TableSnapshotInputFormat runs one map task per region in the table snapshot. 
This places unnecessary restriction that the region layout of the original 
table needs to take the processing resources available to MR job into 
consideration. Allowing to run multiple mappers per region (assuming reasonably 
even key distribution) would be useful.

With this feature, the client can specify the desired number of mappers when 
initializing the table snapshot mapper job:

{code}
TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName,)
{code}


  was:
TableSnapshotInputFormat runs one map task per region in the table snapshot. 
This places unnecessary restriction that the region layout of the original 
table needs to take the processing resources available to MR job into 
consideration. Allowing to run multiple mappers per region (assuming reasonably 
even key distribution) would be useful.




> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.
> With this feature, client can specify the desired num of mappers when init 
> table snapshot mapper job:
> {code}
> TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName,)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19216) Use procedure to execute replication peer related operations

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256287#comment-16256287
 ] 

stack edited comment on HBASE-19216 at 11/17/17 1:38 AM:
-

bq. For a peer change, I think it is idempotent, so we can retry forever if an 
RS fails to report in.

Ok. We just need to stop pinging if the server goes away.

bq. I plan to add a reportProcedureDone method in RegionServerStatusService

Ok. Should do for a few procedure types.

bq. How can I wake up a suspended procedure?

In Assign/Unassign, we have RegionStateNodes that have in them a reference to 
the Procedure that is manipulating the RS and an associated ProcedureEvent.  
Suspend/resume operates on the RSN PE. Before we dispatch an RPC, we do a 
suspend on the RSN PE. When RS has transitioned the Region, it updates master 
by calling reportRegionStateTransition.  Master finds the pertinent RSN using 
RegionInfo as key. We pull out the Procedure and call reportTransition on it. 
After updating state in the Procedure, the last thing done is a wake up call on 
the PE.

We'd have a registry of Peers in Master (ReplicationPeers?) keyed by peerid?. 
The Peer in Master would carry Procedure and PE reference.

Something like that.

bq. I need to create one by myself when suspending the procedure and store it 
in the procedure, so I can get it through the procedureId?

When we create a Peer, it would have in it a PE. The PE would not be created 
each time we want to do a suspend because we want to guard against having more 
than one operation going on against a Peer at a time. The key could be 
procedureid but could it be peerid instead?
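
A very rough sketch of the registry idea above follows; every name in it is 
hypothetical (a CountDownLatch stands in for the ProcedureEvent, and the real 
dispatch/report APIs are not shown). It only illustrates suspend-before-dispatch 
and wake-on-report, keyed by peer id.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Hypothetical illustration of the proposal; none of these types exist in HBase.
final class PeerProcedureRegistry {

  static final class PeerEntry {
    final String peerId;
    final long procId;  // the modify-peer procedure currently in flight
    final CountDownLatch event = new CountDownLatch(1);  // stand-in for a ProcedureEvent

    PeerEntry(String peerId, long procId) {
      this.peerId = peerId;
      this.procId = procId;
    }
  }

  // One entry per peer id guards against two concurrent operations on the same peer.
  private final Map<String, PeerEntry> peers = new ConcurrentHashMap<>();

  /** Called by the procedure before dispatching the refresh RPC to region servers. */
  PeerEntry suspendForDispatch(String peerId, long procId) {
    PeerEntry entry = new PeerEntry(peerId, procId);
    if (peers.putIfAbsent(peerId, entry) != null) {
      throw new IllegalStateException("Another operation is in flight for peer " + peerId);
    }
    return entry;  // the caller then blocks on entry.event (the "suspend")
  }

  /** Called when a region server reports the peer change done (reportProcedureDone). */
  void wake(String peerId) {
    PeerEntry entry = peers.remove(peerId);
    if (entry != null) {
      entry.event.countDown();  // resume the suspended procedure
    }
  }
}
{code}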






was (Author: stack):
bq. For a peer change, I think it is idempotent, so we can retry forever if an 
RS fails to report in.

Ok. We just need to stop pinging if the server goes away.

bq. I plan to add a reportProcedureDone method in RegionServerStatusService

Ok. Should do for a few procedure types.

bq. How can I wake up a suspended procedure?

In Assign/Unassign, we have RegionStateNodes that have in them a reference to 
the Procedure that is manipulating the RS and an associated ProcedureEvent.  
Suspend/resume operates on the RSN PE. Before we dispatch an RPC, we do a 
suspend on the RSN PE. When RS has transitioned the Region, it updates master 
by calling reportRegionStateTransition.  Master finds the pertinent RSN using 
RegionInfo as key. We pull out the Procedure and call reportTransition on it. 
After updating state in the Procedure, the last thing done is a wake up call on 
the PE.

We'd have a registry of Peers in Master (ReplicationPeers?) keyed by peerid?. 
The Peer in Master would carry Procedure and PE reference.

Something like that.

bq. I need to create one by myself when suspending the procedure and store it 
in the procedure, so I can get it through the procedureId?

When we create a Peer, it would have in it a PE. The PE would not be created 
each time we want to do a suspend because we want to guard against having more 
than one operation going on against a Peer at a time. The key could be 
procedureid but could it be peerid instead?




So, setting peer would work like 


> Use procedure to execute replication peer related operations
> 
>
> Key: HBASE-19216
> URL: https://issues.apache.org/jira/browse/HBASE-19216
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>
> When building the basic framework for HBASE-19064, I found that the 
> enable/disable peer is built upon the watcher of zk.
> The problem with using a watcher is that you do not know the exact time when all 
> RSes in the cluster have applied the change; it is only 'eventually done'. 
> And for synchronous replication, when changing the state of a replication 
> peer, we need to know the exact time as we can only enable read/write after 
> that time. So I think we'd better use procedure to do this. Change the flag 
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as a 
> storage, so it will be easy to implement another replication peer storage to 
> replace zk so that we can reduce the dependency on zk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region

2017-11-16 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-18090:
---
Description: 
TableSnapshotInputFormat runs one map task per region in the table snapshot. 
This places unnecessary restriction that the region layout of the original 
table needs to take the processing resources available to MR job into 
consideration. Allowing to run multiple mappers per region (assuming reasonably 
even key distribution) would be useful.



  was:TableSnapshotInputFormat runs one map task per region in the table 
snapshot. This places unnecessary restriction that the region layout of the 
original table needs to take the processing resources available to MR job into 
consideration. Allowing to run multiple mappers per region (assuming reasonably 
even key distribution) would be useful.


> Improve TableSnapshotInputFormat to allow more multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Mikhail Antonov
>Assignee: xinxin fan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18090-V3-master.patch, 
> HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch, 
> HBASE-18090-branch-1-v2.patch, HBASE-18090-branch-1-v2.patch, 
> HBASE-18090-branch-1.3-v1.patch, HBASE-18090-branch-1.3-v2.patch, 
> HBASE-18090.branch-1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places unnecessary restriction that the region layout of the original 
> table needs to take the processing resources available to MR job into 
> consideration. Allowing to run multiple mappers per region (assuming 
> reasonably even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19216) Use procedure to execute replication peer related operations

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256287#comment-16256287
 ] 

stack commented on HBASE-19216:
---

bq. For a peer change, I think it is idempotent, so we can retry forever if an 
RS fails to report in.

Ok. We just need to stop pinging if the server goes away.

bq. I plan to add a reportProcedureDone method in RegionServerStatusService

Ok. Should do for a few procedure types.

bq. How can I wake up a suspended procedure?

In Assign/Unassign, we have RegionStateNodes that have in them a reference to 
the Procedure that is manipulating the RS and an associated ProcedureEvent.  
Suspend/resume operates on the RSN PE. Before we dispatch an RPC, we do a 
suspend on the RSN PE. When RS has transitioned the Region, it updates master 
by calling reportRegionStateTransition.  Master finds the pertinent RSN using 
RegionInfo as key. We pull out the Procedure and call reportTransition on it. 
After updating state in the Procedure, the last thing done is a wake up call on 
the PE.

We'd have a registry of Peers in Master (ReplicationPeers?) keyed by peerid?. 
The Peer in Master would carry Procedure and PE reference.

Something like that.

bq. I need to create one by myself when suspending the procedure and store it 
in the procedure, so I can get it through the procedureId?

When we create a Peer, it would have in it a PE. The PE would not be created 
each time we want to do a suspend because we want to guard against having more 
than one operation going on against a Peer at a time. The key could be 
procedureid but could it be peerid instead?
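
A minimal, self-contained sketch of the idea above, using hypothetical stand-in classes (PeerEvent, Peer) rather than the actual procedure-v2 API: each peer carries one event, created once and keyed by peerid; the peer operation suspends on it before dispatching to the RSes and the report callback wakes it.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only: PeerEvent/Peer are illustrative stand-ins,
// not the real HBase ProcedureEvent or peer classes.
public class ReplicationPeersSketch {

  /** Stand-in for a ProcedureEvent: suspend an operation and wake it later. */
  static class PeerEvent {
    private boolean ready = false;

    synchronized void suspend() throws InterruptedException {
      ready = false;
      while (!ready) {
        wait();                     // the peer operation "sleeps" here
      }
    }

    synchronized void wake() {
      ready = true;
      notifyAll();                  // resume the suspended operation
    }
  }

  /** Stand-in for the per-peer state the master would keep. */
  static class Peer {
    final String peerId;
    final PeerEvent event = new PeerEvent();   // created once, reused per peer

    Peer(String peerId) {
      this.peerId = peerId;
    }
  }

  private final Map<String, Peer> peers = new ConcurrentHashMap<>();

  /** Change a peer flag: persist it, dispatch to the RSes, then wait for reports. */
  public void changePeerState(String peerId) throws InterruptedException {
    Peer peer = peers.computeIfAbsent(peerId, Peer::new);
    // ... persist the new flag and dispatch the reload RPCs to all RSes ...
    peer.event.suspend();           // parked until every RS has reported back
    // ... only now is it safe to enable read/write for synchronous replication ...
  }

  /** Called when the last RS reports in (e.g. via a reportProcedureDone RPC). */
  public void reportDone(String peerId) {
    Peer peer = peers.get(peerId);
    if (peer != null) {
      peer.event.wake();
    }
  }
}
{code}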




So, setting peer would work like 


> Use procedure to execute replication peer related operations
> 
>
> Key: HBASE-19216
> URL: https://issues.apache.org/jira/browse/HBASE-19216
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>
> When building the basic framework for HBASE-19064, I found that the 
> enable/disable peer is built upon the watcher of zk.
> The problem with using a watcher is that you do not know the exact time when 
> all RSes in the cluster have made the change; it is only 'eventually done'. 
> And for synchronous replication, when changing the state of a replication 
> peer, we need to know the exact time as we can only enable read/write after 
> that time. So I think we'd better use procedure to do this. Change the flag 
> on zk, and then execute a procedure on all RSes to reload the flag from zk.
> Another benefit is that, after the change, zk will be mainly used as a 
> storage, so it will be easy to implement another replication peer storage to 
> replace zk so that we can reduce the dependency on zk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19252) Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell

2017-11-16 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256277#comment-16256277
 ] 

Zheng Hu commented on HBASE-19252:
--

Pushed patch.v4 into master branch.  Patch for branch-1.4 will come soon.

> Move the transform logic of FilterList into transformCell() method to avoid 
> extra ref to question cell 
> ---
>
> Key: HBASE-19252
> URL: https://issues.apache.org/jira/browse/HBASE-19252
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Fix For: 2.0.0, 1.4.1
>
> Attachments: HBASE-19252.v1.patch, HBASE-19252.v2.patch, 
> HBASE-19252.v3.patch, HBASE-19252.v4.patch
>
>
> As [~anoop.hbase] and I discussed,  we can implement the filterKeyValue () 
> and transformCell() methods as following  to avoid saving transformedCell & 
> referenceCell state in FilterList, and we can avoid the costly cell clone. 
> {code}
> ReturnCode filterKeyValue(Cell c){
>   ReturnCode rc = null;
>   for(Filter filter: sub-filters){
>   // ...
>   rc = mergeReturnCode(rc, filter.filterKeyValue(c));
>   // ... 
>   }
>   return rc;
> }
> Cell transformCell(Cell c) throws IOException {
>   Cell transformed = c; 
>   for(Filter filter: sub-filters){
>   if(filter.filterKeyValue(c) is INCLUDE*) { //  > line#1
>   transformed = filter.transformCell(transformed);
> 
>   }
>   }
>   return transformed; 
> }
> {code}
> For line #1, we need to remember the return code each sub-filter gave from 
> filterKeyValue(), because we only need to call transformCell() on a 
> sub-filter whose ReturnCode is INCLUDE*.
> A new boolean array will be introduced in FilterList, and the cost of 
> maintaining the boolean array will be less than the cost of maintaining the 
> two references to the question cell.
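
A runnable, self-contained sketch of the approach above, with simplified stand-ins for Cell, Filter and ReturnCode (the real FilterList merge logic is richer): the boolean array records which sub-filters returned INCLUDE* so transformCell() only delegates to those, without holding references to the question cell.

{code}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch: types below are simplified stand-ins, not the real HBase classes.
public class FilterListSketch {

  interface Cell {}

  enum ReturnCode { INCLUDE, INCLUDE_AND_NEXT_COL, SKIP }

  interface Filter {
    ReturnCode filterKeyValue(Cell c);
    Cell transformCell(Cell c) throws IOException;
  }

  private final List<Filter> subFilters;
  // One flag per sub-filter: did its last filterKeyValue() return INCLUDE*?
  private final boolean[] subFilterIncluded;

  FilterListSketch(List<Filter> subFilters) {
    this.subFilters = subFilters;
    this.subFilterIncluded = new boolean[subFilters.size()];
  }

  public ReturnCode filterKeyValue(Cell c) {
    ReturnCode rc = ReturnCode.SKIP;
    for (int i = 0; i < subFilters.size(); i++) {
      ReturnCode localRc = subFilters.get(i).filterKeyValue(c);
      subFilterIncluded[i] = isInclude(localRc);   // remembered for transformCell()
      rc = mergeReturnCode(rc, localRc);
    }
    return rc;
  }

  public Cell transformCell(Cell c) throws IOException {
    Cell transformed = c;
    for (int i = 0; i < subFilters.size(); i++) {
      if (subFilterIncluded[i]) {                  // only INCLUDE* sub-filters transform
        transformed = subFilters.get(i).transformCell(transformed);
      }
    }
    return transformed;
  }

  private static boolean isInclude(ReturnCode rc) {
    return rc == ReturnCode.INCLUDE || rc == ReturnCode.INCLUDE_AND_NEXT_COL;
  }

  private static ReturnCode mergeReturnCode(ReturnCode a, ReturnCode b) {
    // Simplified merge: INCLUDE* wins over SKIP (the real FilterList merge is richer).
    return isInclude(a) || isInclude(b) ? ReturnCode.INCLUDE : ReturnCode.SKIP;
  }
}
{code}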



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15320) HBase connector for Kafka Connect

2017-11-16 Thread Mike Wingert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated HBASE-15320:
-
Attachment: HBASE-15320.master.6.patch

Bump Kafka version to 1.0.0, fix some whitespace issues.

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.
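
As a rough illustration of the sink side only, a hypothetical SinkTask skeleton follows; the config keys (hbase.sink.table, hbase.sink.family) and the key/value-to-cell mapping are assumptions made for the sketch, not taken from the attached patches.

{code}
import java.util.Collection;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Hypothetical skeleton: writes each SinkRecord's key/value into one cell of an HBase table.
public class HBaseSinkTaskSketch extends SinkTask {
  private Connection connection;
  private Table table;
  private byte[] family;

  @Override
  public String version() {
    return "0.0.1-sketch";
  }

  @Override
  public void start(Map<String, String> props) {
    try {
      Configuration conf = HBaseConfiguration.create();
      connection = ConnectionFactory.createConnection(conf);
      table = connection.getTable(TableName.valueOf(props.get("hbase.sink.table")));
      family = Bytes.toBytes(props.get("hbase.sink.family"));
    } catch (Exception e) {
      throw new RuntimeException("failed to open HBase connection", e);
    }
  }

  @Override
  public void put(Collection<SinkRecord> records) {
    try {
      for (SinkRecord record : records) {
        // Simplistic mapping: record key -> row, record value -> one cell.
        Put p = new Put(Bytes.toBytes(String.valueOf(record.key())));
        p.addColumn(family, Bytes.toBytes("value"),
            Bytes.toBytes(String.valueOf(record.value())));
        table.put(p);
      }
    } catch (Exception e) {
      throw new RuntimeException("failed to write batch to HBase", e);
    }
  }

  @Override
  public void stop() {
    try {
      if (table != null) table.close();
      if (connection != null) connection.close();
    } catch (Exception e) {
      // best effort on shutdown
    }
  }
}
{code}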



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256237#comment-16256237
 ] 

huaxiang sun commented on HBASE-19163:
--

v003 patch addresses Stack's concern about performance, so it goes with option 
2.

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.003.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throws an error, and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.
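
A minimal sketch of approach 1 in the quoted description, using a plain ReentrantReadWriteLock and illustrative names (RowLockDedupSketch, Mutation) rather than the actual HRegion code: each distinct row in the batch is locked once, however many mutations target it.

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: one shared row-lock hold per distinct row in the batch,
// so a batch with many mutations against the same row cannot exceed the
// ReentrantReadWriteLock hold limit.
public class RowLockDedupSketch {

  static class Mutation {
    final String row;
    Mutation(String row) { this.row = row; }
  }

  private final Map<String, ReentrantReadWriteLock> rowLocks = new HashMap<>();

  public void applyBatch(List<Mutation> batch) {
    // Key acquired locks by row so each row is locked at most once per batch.
    Map<String, Lock> acquired = new HashMap<>();
    try {
      for (Mutation m : batch) {
        acquired.computeIfAbsent(m.row, row -> {
          Lock readLock =
              rowLocks.computeIfAbsent(row, r -> new ReentrantReadWriteLock()).readLock();
          readLock.lock();          // one hold per distinct row, not per mutation
          return readLock;
        });
        // ... apply the mutation under the shared row lock ...
      }
    } finally {
      for (Lock lock : acquired.values()) {
        lock.unlock();              // release each row exactly once
      }
    }
  }
}
{code}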



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: (was: HBASE-19163.master.004.patch)

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.003.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throws an error, and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: (was: HBASE-19163.master.002.patch)

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.003.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throws an error, and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163.master.004.patch

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.003.patch, HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throws an error, and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163.master.003.patch

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.003.patch, HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throws an error, and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-16 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163.master.002.patch

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throws an error, and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16480) Merge WALEdit and WALKey

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256230#comment-16256230
 ] 

stack commented on HBASE-16480:
---

Punting from hbase2. Figure it is more trouble than benefit. Needs careful 
attention, which it is not going to get this late in the 2.0 game.

> Merge WALEdit and WALKey
> 
>
> Key: HBASE-16480
> URL: https://issues.apache.org/jira/browse/HBASE-16480
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-16480.master.001.patch
>
>
> No need for separate classes: 
> {code}
> // TODO: Key and WALEdit are never used separately, or in one-to-many 
> relation, for practical
> //   purposes. They need to be merged into WALEntry.
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.REPLICATION)
> public class WALKey implements SequenceId, Comparable {
> {code}
> Will reduce garbage a little and simplify code. We can get rid of WAL.Entry 
> as well. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16480) Merge WALEdit and WALKey

2017-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16480:
--
Fix Version/s: (was: 2.0.0-beta-1)
   2.0.0

> Merge WALEdit and WALKey
> 
>
> Key: HBASE-16480
> URL: https://issues.apache.org/jira/browse/HBASE-16480
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-16480.master.001.patch
>
>
> No need for separate classes: 
> {code}
> // TODO: Key and WALEdit are never used separately, or in one-to-many 
> relation, for practical
> //   purposes. They need to be merged into WALEntry.
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.REPLICATION)
> public class WALKey implements SequenceId, Comparable {
> {code}
> Will reduce garbage a little and simplify code. We can get rid of WAL.Entry 
> as well. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18974) Document "Becoming a Committer"

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256227#comment-16256227
 ] 

Hadoop QA commented on HBASE-18974:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  5m 11s{color} 
| {color:red} HBASE-18974 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18974 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898090/HBASE-18974-copyedit-addendum.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9879/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Document "Becoming a Committer"
> ---
>
> Key: HBASE-18974
> URL: https://issues.apache.org/jira/browse/HBASE-18974
> Project: HBase
>  Issue Type: Bug
>  Components: community, documentation
>Reporter: Mike Drob
>Assignee: Mike Drob
> Attachments: HBASE-18974-copyedit-addendum.patch, HBASE-18974.patch, 
> HBASE-18974.v2.patch, HBASE-18974.v3.patch
>
>
> Based on the mailing list discussion at 
> https://lists.apache.org/thread.html/81c633cbe1f6f78421cbdad5b9549643c67803a723a9d86a513264c0@%3Cdev.hbase.apache.org%3E
>  it sounds like we should record some of the thoughts for future contributors 
> to refer to.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19288) Intermittent test failure in TestHStore.testRunDoubleMemStoreCompactors

2017-11-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19288:
---
Status: Patch Available  (was: Open)

> Intermittent test failure in TestHStore.testRunDoubleMemStoreCompactors
> ---
>
> Key: HBASE-19288
> URL: https://issues.apache.org/jira/browse/HBASE-19288
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 19288.v1.txt, testRunDoubleMemStoreCompactors.out
>
>
> Here was one of the test failures: 
> https://builds.apache.org/job/PreCommit-HBASE-Build/9812/testReport/junit/org.apache.hadoop.hbase.regionserver/TestHStore/testRunDoubleMemStoreCompactors/
>  
> {code}
> [ERROR] 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRunDoubleMemStoreCompactors(org.apache.hadoop.hbase.regionserver.TestHStore)
> [ERROR]   Run 1: TestHStore.testRunDoubleMemStoreCompactors:1500 expected:<2> 
> but was:<3>
> [ERROR]   Run 2: TestHStore.testRunDoubleMemStoreCompactors:1481 expected:<1> 
> but was:<4>
> [ERROR]   Run 3: TestHStore.testRunDoubleMemStoreCompactors:1481 expected:<1> 
> but was:<5>
> {code}
> From the counts for second and third runs, we know that RUNNER_COUNT was not 
> cleared in between the reruns, leading to failure at the 1st assertion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19288) Intermittent test failure in TestHStore.testRunDoubleMemStoreCompactors

2017-11-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19288:
---
Attachment: 19288.v1.txt

Tentative patch adds a debug log at the end of flushInMemory().
It also clears the counter at the beginning of the test.
[~eshcar]:
Can you take a look?
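
A tiny, hypothetical sketch of the second part of that change; RUNNER_COUNT here is only a stand-in for the static counter used by testRunDoubleMemStoreCompactors, which lives in TestHStore and may differ in name and type.

{code}
import static org.junit.Assert.assertEquals;

import java.util.concurrent.atomic.AtomicInteger;
import org.junit.Before;
import org.junit.Test;

// Hypothetical sketch: reset shared static state before each test run.
public class RunnerCountResetSketch {
  static final AtomicInteger RUNNER_COUNT = new AtomicInteger();

  @Before
  public void resetCounter() {
    // Surefire reruns of a flaky test reuse the same JVM, so static state
    // survives between attempts; clearing it here keeps the first assertion
    // from seeing counts left over by an earlier failed run.
    RUNNER_COUNT.set(0);
  }

  @Test
  public void countsStartFromZero() {
    RUNNER_COUNT.incrementAndGet();
    assertEquals(1, RUNNER_COUNT.get());
  }
}
{code}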

> Intermittent test failure in TestHStore.testRunDoubleMemStoreCompactors
> ---
>
> Key: HBASE-19288
> URL: https://issues.apache.org/jira/browse/HBASE-19288
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
> Attachments: 19288.v1.txt, testRunDoubleMemStoreCompactors.out
>
>
> Here was one of the test failures: 
> https://builds.apache.org/job/PreCommit-HBASE-Build/9812/testReport/junit/org.apache.hadoop.hbase.regionserver/TestHStore/testRunDoubleMemStoreCompactors/
>  
> {code}
> [ERROR] 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRunDoubleMemStoreCompactors(org.apache.hadoop.hbase.regionserver.TestHStore)
> [ERROR]   Run 1: TestHStore.testRunDoubleMemStoreCompactors:1500 expected:<2> 
> but was:<3>
> [ERROR]   Run 2: TestHStore.testRunDoubleMemStoreCompactors:1481 expected:<1> 
> but was:<4>
> [ERROR]   Run 3: TestHStore.testRunDoubleMemStoreCompactors:1481 expected:<1> 
> but was:<5>
> {code}
> From the counts for second and third runs, we know that RUNNER_COUNT was not 
> cleared in between the reruns, leading to failure at the 1st assertion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16574) Add backup / restore feature to refguide

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256220#comment-16256220
 ] 

Hadoop QA commented on HBASE-16574:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 43s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-16574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898061/HBASE-16574.008.branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 771a961d07db 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / 9fecb3b2c8 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9874/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9874/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9874/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9874/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Add backup / restore feature to refguide
> 
>
> Key: HBASE-16574
> URL: https://issues.apache.org/jira/browse/HBASE-16574
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Frank Welsch
>  Labels: backup
> Fix For: 2.0.0-beta-1
>
> Attachments: B command-line tools and configuration (updated).pdf, 
> Backup-and-Restore-Apache_19Sep2016.pdf, HBASE-16574.001.patch, 
> HBASE-16574.002.patch, HBASE-16574.003.branch-2.patch, 
> HBASE-16574.004.branch-2.patch, HBASE-16574.005.branch-2.patch, 
> HBASE-16574.006.branch-2.patch, HBASE-16574.007.branch-2.patch, 
> HBASE-16574.008.branch-2.patch, apache_hbase_reference_guide_004.pdf, 
> apache_hbase_reference_guide_007.pdf, apache_hbase_reference_guide_008.pdf, 
> hbase-book-16574.003.pdf, hbase_reference_guide.v1.pdf
>
>
> This issue is to add backup / restore feature description to hbase refguide.
> The description should cover:
> scenarios where backup / restore is used
> backup / restore commands and sample usage
> considerations in setup



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18974) Document "Becoming a Committer"

2017-11-16 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-18974:

Attachment: HBASE-18974-copyedit-addendum.patch

I kind of kept copyediting for a few sections after your changes. You can 
revert those lines if you feel like it's not necessary. Building locally to 
test now but wanted to get this to you. I'll amend and re-attach if it breaks 
anything.

> Document "Becoming a Committer"
> ---
>
> Key: HBASE-18974
> URL: https://issues.apache.org/jira/browse/HBASE-18974
> Project: HBase
>  Issue Type: Bug
>  Components: community, documentation
>Reporter: Mike Drob
>Assignee: Mike Drob
> Attachments: HBASE-18974-copyedit-addendum.patch, HBASE-18974.patch, 
> HBASE-18974.v2.patch, HBASE-18974.v3.patch
>
>
> Based on the mailing list discussion at 
> https://lists.apache.org/thread.html/81c633cbe1f6f78421cbdad5b9549643c67803a723a9d86a513264c0@%3Cdev.hbase.apache.org%3E
>  it sounds like we should record some of the thoughts for future contributors 
> to refer to.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256202#comment-16256202
 ] 

stack commented on HBASE-17852:
---

Thanks for the pointer. I'd not read it previously. It does not answer my 
question though, "Why would you restore a backup system table from a snapshot 
when a 'backup' fails? Backups are of user-space tables. How does this impinge 
on the backup 'system' table?"

"in case if operation fails to restore meta - data consistency in a backup 
system table..."

Yeah, which operation? Which meta? A backup meta? What consistency needs to be 
maintained in the backup table?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-11-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256200#comment-16256200
 ] 

Sean Busbey commented on HBASE-19289:
-

Sounds like LocalFileSystem doesn't support flush/sync. That's odd. Let's do a 
quick check if Hadoop can provide that and then update tests to deal with it as 
necessary.
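
A quick probe along those lines, assuming Hadoop 2.9+/3.x where FSDataOutputStream exposes StreamCapabilities; the path and output here are illustrative only.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

// Illustrative probe: does the default filesystem's output stream claim hflush?
public class HflushCapabilityCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // LocalFileSystem by default; point fs.defaultFS at HDFS to compare.
    FileSystem fs = FileSystem.get(conf);
    Path probe = new Path("/tmp/hflush-probe");
    try (FSDataOutputStream out = fs.create(probe, true)) {
      // hflush support is what the WAL writer needs before it can durably sync edits.
      boolean canHflush = out.hasCapability(StreamCapabilities.HFLUSH);
      System.out.println("hflush supported: " + canHflush);
    } finally {
      fs.delete(probe, false);
    }
  }
}
{code}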

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-16 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256193#comment-16256193
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 11/17/17 12:17 AM:
--

{quote}
Is there a writeup on how this all works? (It is not in the user-guide)
{quote}
Please refer to the parent ticket for a description of what we do in case of a 
failure:
https://issues.apache.org/jira/browse/HBASE-15227

In a few words, we take backup system table snapshot before 
backup/merge/delete/ and restore this table from snapshot back
in case if operation fails to restore meta - data consistency in a backup 
system table


was (Author: vrodionov):
Please, refer to a parent ticket for description what we perform in case of a 
failure
https://issues.apache.org/jira/browse/HBASE-15227

In a few words, we take backup system table snapshot before 
backup/merge/delete/ and restore this table from snapshot back
in case if operation fails to restore meta - data consistency in a backup 
system table

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-16 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256193#comment-16256193
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

Please refer to the parent ticket for a description of what we do in case of a 
failure:
https://issues.apache.org/jira/browse/HBASE-15227

In a few words, we take backup system table snapshot before 
backup/merge/delete/ and restore this table from snapshot back
in case if operation fails to restore meta - data consistency in a backup 
system table
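
A hypothetical sketch of that pattern using the plain Admin snapshot API; the table name, snapshot name, and runBackupOperation() placeholder are assumptions for illustration, not the code in the attached patches.

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hypothetical sketch: snapshot the backup system table before a backup/merge/delete
// operation and roll it back from the snapshot if the operation fails.
public class BackupTableRollbackSketch {

  public static void main(String[] args) throws Exception {
    TableName backupSystemTable = TableName.valueOf("backup:system");
    String snapshotName = "backup-system-pre-op";

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // 1. Snapshot the backup system table before mutating it.
      admin.snapshot(snapshotName, backupSystemTable);
      try {
        runBackupOperation();   // placeholder for the actual backup/merge/delete
      } catch (Exception failure) {
        // 2. On failure, restore the table to its pre-operation state.
        admin.disableTable(backupSystemTable);
        admin.restoreSnapshot(snapshotName);
        admin.enableTable(backupSystemTable);
        throw failure;
      } finally {
        // 3. Clean up the temporary snapshot either way.
        admin.deleteSnapshot(snapshotName);
      }
    }
  }

  private static void runBackupOperation() {
    // placeholder
  }
}
{code}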

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256183#comment-16256183
 ] 

stack commented on HBASE-17852:
---

bq. When backup fails, we restore backup system table from snapshot.

Why would you restore a backup system table from a snapshot when a 'backup' 
fails? Backups are of user-space tables. How does this impinge on the backup 
'system' table?

bq. If Observers write to the same table as general backup operation, some data 
from Observers may be lost when we restore table from snapshot. I thought, I 
explained that.

Where?

Is there a writeup on how this all works? (It is not in the user-guide)

bq. They are system from the point of view of a user. checkSystemTable checks 
backup system table.

This is going to confuse. 'system' tables have a particular meaning in hbase.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

