[jira] [Commented] (HBASE-18489) Expose scan cursor in RawScanResultConsumer

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201497#comment-16201497
 ] 

Hudson commented on HBASE-18489:


FAILURE: Integrated in Jenkins build HBase-1.5 #95 (See 
[https://builds.apache.org/job/HBase-1.5/95/])
HBASE-18552 Backport the server side change in HBASE-18489 to branch-1 
(zhangduo: rev ff23e15769013050814b9dc674c65a430f24af36)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java


> Expose scan cursor in RawScanResultConsumer
> ---
>
> Key: HBASE-18489
> URL: https://issues.apache.org/jira/browse/HBASE-18489
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18489-addendum.patch, HBASE-18489-v1.patch, 
> HBASE-18489-v2.patch, HBASE-18489-v2.patch, HBASE-18489.patch
>
>
> The first step of supporting scan cursor for async client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18552) Backport the server side change in HBASE-18489 to branch-1

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201496#comment-16201496
 ] 

Hudson commented on HBASE-18552:


FAILURE: Integrated in Jenkins build HBase-1.5 #95 (See 
[https://builds.apache.org/job/HBase-1.5/95/])
HBASE-18552 Backport the server side change in HBASE-18489 to branch-1 
(zhangduo: rev ff23e15769013050814b9dc674c65a430f24af36)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java


> Backport the server side change in HBASE-18489 to branch-1
> --
>
> Key: HBASE-18552
> URL: https://issues.apache.org/jira/browse/HBASE-18552
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 1.4.0, 1.5.0
>
> Attachments: HBASE-18552-branch-1-v1.patch, 
> HBASE-18552-branch-1.patch, HBASE-18552-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18489) Expose scan cursor in RawScanResultConsumer

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201489#comment-16201489
 ] 

Hudson commented on HBASE-18489:


FAILURE: Integrated in Jenkins build HBase-1.4 #951 (See 
[https://builds.apache.org/job/HBase-1.4/951/])
HBASE-18552 Backport the server side change in HBASE-18489 to branch-1 
(zhangduo: rev 0fd4da998e3d96f4df414f93e2db70879dc2)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


> Expose scan cursor in RawScanResultConsumer
> ---
>
> Key: HBASE-18489
> URL: https://issues.apache.org/jira/browse/HBASE-18489
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18489-addendum.patch, HBASE-18489-v1.patch, 
> HBASE-18489-v2.patch, HBASE-18489-v2.patch, HBASE-18489.patch
>
>
> The first step of supporting scan cursor for async client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18552) Backport the server side change in HBASE-18489 to branch-1

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201488#comment-16201488
 ] 

Hudson commented on HBASE-18552:


FAILURE: Integrated in Jenkins build HBase-1.4 #951 (See 
[https://builds.apache.org/job/HBase-1.4/951/])
HBASE-18552 Backport the server side change in HBASE-18489 to branch-1 
(zhangduo: rev 0fd4da998e3d96f4df414f93e2db70879dc2)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java


> Backport the server side change in HBASE-18489 to branch-1
> --
>
> Key: HBASE-18552
> URL: https://issues.apache.org/jira/browse/HBASE-18552
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 1.4.0, 1.5.0
>
> Attachments: HBASE-18552-branch-1-v1.patch, 
> HBASE-18552-branch-1.patch, HBASE-18552-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18966) In-memory compaction/merge should update its TimeRange

2017-10-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18966:
---
Attachment: HBASE-18966.v1.patch

v1
# address Ted's suggestion.

> In-memory compaction/merge should update its TimeRange
> --
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update its {{TimeRange}}. This doesn't cause 
> any bugs currently, because the {{TimeRange}} is only used for store-level 
> timestamp filtering, and the default {{TimeRange}} of an {{ImmutableSegment}} 
> created by in-memory compaction/merge covers the maximum timestamp range.  
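
For illustration, a minimal sketch (hypothetical helper, not the attached patch) of how a rebuilt segment's time range could be recomputed from the surviving cells instead of keeping the all-inclusive default:

{code:java}
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.io.TimeRange;

// Hypothetical helper: recompute the effective TimeRange of a segment from the
// cells that survive an in-memory compaction/merge.
public class SegmentTimeRangeSketch {
  static TimeRange timeRangeOf(List<Cell> survivingCells) {
    if (survivingCells.isEmpty()) {
      return new TimeRange(); // fall back to the all-inclusive default
    }
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    for (Cell cell : survivingCells) {
      long ts = cell.getTimestamp();
      min = Math.min(min, ts);
      max = Math.max(max, ts);
    }
    // TimeRange's upper bound is exclusive, so include the newest timestamp.
    return new TimeRange(min, max + 1);
  }
}
{code}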



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18966) In-memory compaction/merge should update its TimeRange

2017-10-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18966:
---
Status: Patch Available  (was: Open)

> In-memory compaction/merge should update its TimeRange
> --
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update its {{TimeRange}}. This doesn't cause 
> any bugs currently, because the {{TimeRange}} is only used for store-level 
> timestamp filtering, and the default {{TimeRange}} of an {{ImmutableSegment}} 
> created by in-memory compaction/merge covers the maximum timestamp range.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18183) Region interface cleanup for CP expose

2017-10-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201453#comment-16201453
 ] 

Anoop Sam John commented on HBASE-18183:


Exposing metrics we don't have.  Unless someone asks for HBase metrics within a CP, 
let's not expose them?  Locks are needed as long as we have RP.
For the others we have sub-tasks.
So the MS patch is in now?  Let me see. Yes, we will need to discuss that in another issue.  
This issue anyway was about Region.

> Region interface cleanup for CP expose
> --
>
> Key: HBASE-18183
> URL: https://issues.apache.org/jira/browse/HBASE-18183
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18183.patch, HBASE-18183_V2.patch, 
> HBASE-18183_V3.patch, HBASE-18183_V4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always false

2017-10-11 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18990:
--
Summary: ServerLoad doesn't override #equals which leads to #equals in 
ClusterStatus always false  (was: ServerLoad doesn't override #equals which 
leads to #equals in ClusterStatus always wrong)

> ServerLoad doesn't override #equals which leads to #equals in ClusterStatus 
> always false
> 
>
> Key: HBASE-18990
> URL: https://issues.apache.org/jira/browse/HBASE-18990
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Trivial
>
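
For context, the usual fix for this kind of report is to override equals (and hashCode) instead of inheriting Object's identity comparison. A hedged sketch of what such an override could look like; the field name serverLoad is hypothetical and this is not the actual patch:

{code:java}
// Hypothetical sketch inside ServerLoad: assumes the class keeps its state in
// a protobuf message field named "serverLoad"; the real field name may differ.
@Override
public boolean equals(Object other) {
  if (other == this) {
    return true;
  }
  if (!(other instanceof ServerLoad)) {
    return false;
  }
  // Delegate to the wrapped protobuf, which already implements value equality.
  return this.serverLoad.equals(((ServerLoad) other).serverLoad);
}

@Override
public int hashCode() {
  return serverLoad.hashCode();
}
{code}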




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always wrong

2017-10-11 Thread Reid Chan (JIRA)
Reid Chan created HBASE-18990:
-

 Summary: ServerLoad doesn't override #equals which leads to 
#equals in ClusterStatus always wrong
 Key: HBASE-18990
 URL: https://issues.apache.org/jira/browse/HBASE-18990
 Project: HBase
  Issue Type: Bug
Reporter: Reid Chan
Assignee: Reid Chan
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18989) Polish the compaction related CP hooks

2017-10-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201410#comment-16201410
 ] 

Duo Zhang commented on HBASE-18989:
---

As discussed in HBASE-18906, we should give CP users the ability to know when a 
compaction ends. We already have a CompactionLifeCycleTracker, but the problem 
is that it will not notify the user if the compaction cannot be scheduled.

Also, since we have decided not to expose StoreScanner to CP users, it does not 
make sense to allow CP users to return an InternalScanner before we actually 
create the StoreScanner in our own code. In the example in HBASE-18747 I wrap 
the InternalScanner and then do the filtering in the preCompact method. I think 
this is the correct way to do filtering on compaction and flush.
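
A minimal sketch of the wrapping approach described above (illustrative only, not the HBASE-18747 example itself; the InternalScanner methods and the exact preCompact/preFlush signatures vary a bit across HBase versions, and the keep() predicate is a placeholder):

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScannerContext;

// A RegionObserver could return this wrapper from preCompact/preFlush: it
// delegates to the scanner HBase created and drops the cells the CP rejects.
public class FilteringInternalScanner implements InternalScanner {
  private final InternalScanner delegate;

  public FilteringInternalScanner(InternalScanner delegate) {
    this.delegate = delegate;
  }

  // Placeholder predicate: real coprocessor logic decides which cells survive.
  private boolean keep(Cell cell) {
    return true;
  }

  @Override
  public boolean next(List<Cell> result) throws IOException {
    boolean moreRows = delegate.next(result);
    result.removeIf(cell -> !keep(cell));
    return moreRows;
  }

  @Override
  public boolean next(List<Cell> result, ScannerContext scannerContext) throws IOException {
    boolean moreRows = delegate.next(result, scannerContext);
    result.removeIf(cell -> !keep(cell));
    return moreRows;
  }

  @Override
  public void close() throws IOException {
    delegate.close();
  }
}
{code}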

The limitation of this solution is that we can only remove data during 
compaction or flush. In the old example, we could reset the TTL in ScanInfo to 
include more data. But I think this is acceptable, as you can use a longer TTL 
(such as forever) to include the data, set KEEP_DELETED_CELLS to true, and 
increase the number of versions so that compaction and flush give you the data 
you want, and then do the filtering.
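
A sketch of the column-family settings that paragraph describes (illustrative; uses the HColumnDescriptor API of the 1.x/early 2.0 client, and the family name is made up):

{code:java}
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeepDeletedCells;

// Keep data around so a compaction/flush-time CP can decide what to drop:
// effectively-infinite TTL, keep deleted cells, and a high max-versions count.
public class KeepDataForCpSketch {
  public static void main(String[] args) {
    HColumnDescriptor cf = new HColumnDescriptor("cf");
    cf.setTimeToLive(HConstants.FOREVER);          // "forever" TTL
    cf.setKeepDeletedCells(KeepDeletedCells.TRUE); // deletes stay visible to the CP
    cf.setMaxVersions(Integer.MAX_VALUE);          // retain versions for the CP to filter
    System.out.println(cf);
  }
}
{code}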

Another problem may be performance. When using the original filter or other 
mechanisms such as TTL, the StoreScanner may do a seek rather than a skip if 
you want to jump to the next row or column, but for now you can only do a skip. 
I think this is OK for most cases, as usually a row will not be very large, and 
compaction is not on the critical path of normal operation.

Thanks.

> Polish the compaction related CP hooks
> --
>
> Key: HBASE-18989
> URL: https://issues.apache.org/jira/browse/HBASE-18989
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-11 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201403#comment-16201403
 ] 

Jingcheng Du commented on HBASE-18602:
--

Hi [~suxingfate], the patch cannot be applied; you have to rebase the code and 
update your patch. Thanks.
The regions with BOGUS_SERVER_NAME are regarded as misplaced regions, and these 
regions have already been handled and re-assigned/moved by the callers, for 
instance HMaster. It is not necessary to un-assign the misplaced regions in the 
{{correctAssignments}} method, which means we can remove the unused 
{{misplacedRegions}}.
In the same way, the unassign in FavoredStochasticBalancer#balanceCluster is 
not necessary. Meanwhile, most load balancers only decide how to move the 
regions without doing the real move actions, whereas 
FavoredStochasticBalancer#balanceCluster does the real move actions.
Your thoughts, [~chia7712]? Thanks.

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch
>
>
> While walking through the rsgroup code, I found that the variable misplacedRegions 
> never has any element added to it. This makes the unassign-region code 
> non-functional. And according to my test, it is actually unnecessary to do 
> that.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
>     Map<ServerName, List<HRegionInfo>> existingAssignments)
>     throws HBaseIOException {
>   Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>   List<HRegionInfo> misplacedRegions = new LinkedList<>();
>   correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>   for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>       existingAssignments.entrySet()) {
>     ServerName sName = assignments.getKey();
>     correctAssignments.put(sName, new LinkedList<>());
>     List<HRegionInfo> regions = assignments.getValue();
>     for (HRegionInfo region : regions) {
>       RSGroupInfo info = null;
>       try {
>         info = rsGroupInfoManager.getRSGroup(
>             rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>       } catch (IOException exp) {
>         LOG.debug("RSGroup information null for region of table " + region.getTable(),
>             exp);
>       }
>       if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>         correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>       } else {
>         correctAssignments.get(sName).add(region);
>       }
>     }
>   }
>   // TODO bulk unassign?
>   // Unassign misplaced regions, so that they are assigned to correct groups.
>   for (HRegionInfo info : misplacedRegions) {
>     try {
>       this.masterServices.getAssignmentManager().unassign(info);
>     } catch (IOException e) {
>       throw new HBaseIOException(e);
>     }
>   }
>   return correctAssignments;
> }
> {code}
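
For clarity, a hedged sketch of what the dead path appears to have intended; the added line is hypothetical and is exactly the element-add the description says never happens:

{code:java}
// Hypothetical: if the unassign loop at the bottom were meant to run, the
// misplaced branch would also have to collect the region. As reported, no such
// add exists, so misplacedRegions stays empty and that loop is dead code.
if ((info == null) || (!info.containsServer(sName.getAddress()))) {
  correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
  misplacedRegions.add(region); // the missing add
} else {
  correctAssignments.get(sName).add(region);
}
{code}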



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18989) Polish the compaction related CP hooks

2017-10-11 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-18989:
-

 Summary: Polish the compaction related CP hooks
 Key: HBASE-18989
 URL: https://issues.apache.org/jira/browse/HBASE-18989
 Project: HBase
  Issue Type: Sub-task
  Components: Compaction, Coprocessors
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.0.0-alpha-4






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18879) HBase FilterList cause KeyOnlyFilter not work

2017-10-11 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18879:
-
Attachment: HBASE-18879-HBASE-18410.v2.patch

> HBase FilterList cause KeyOnlyFilter not work
> -
>
> Key: HBASE-18879
> URL: https://issues.apache.org/jira/browse/HBASE-18879
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 1.2.4
> Environment: OS: Red Hat 4.4.7-11
> Hadoop: 2.6.4
> Hbase: 1.2.4
>Reporter: ZHA_Moonlight
>Assignee: Zheng Hu
> Attachments: HBASE-18879-HBASE-18410.v1.patch, 
> HBASE-18879-HBASE-18410.v2.patch
>
>
> When using FilterList and KeyOnlyFilter together, if we put the KeyOnlyFilter 
> before the FilterList, the KeyOnlyFilter may not work, meaning it will also 
> return the cell values:
> {code:java}
> List<Filter> filters = new ArrayList<>();
> Filter filter1 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value1"));
> Filter filter2 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value2"));
> filters.add(filter1);
> filters.add(filter2);
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL, 
> new KeyOnlyFilter(),
> new FilterList(Operator.MUST_PASS_ONE, filters));
> {code}
> Using the above code as the filter for a table scan, it returns the cells with 
> their values instead of only the keys. If we put the KeyOnlyFilter after the 
> FilterList as follows, it works well.
>   
> {code:java}
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL,
> new FilterList(Operator.MUST_PASS_ONE, filters),
> new KeyOnlyFilter());
> {code}
> The cause should be due to the following code in hbase-client FilterList.java:
> {code:java}
> @Override
>   
> @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SF_SWITCH_FALLTHROUGH",
> justification="Intentional")
>   public ReturnCode filterKeyValue(Cell v) throws IOException {
> this.referenceKV = v;
> // Accumulates successive transformation of every filter that includes 
> the Cell:
> Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> LINE1  ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
> LINE2  transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   }
> {code}
> Notice "LINE1" and "LINE2". The first line is a recursive invocation: it 
> assigns a Cell result to FilterList.transformedKV (call it A), and that 
> result comes from the inner FilterList with the two SingleColumnValueFilters, 
> so A still contains the cell value. The second line then returns A to the 
> variable transformed.
> Back in the following loop, the result the FilterList returns is the variable 
> "transformed", which is overwritten on each iteration, so the value is 
> determined by the last filter; this is why the position of the KeyOnlyFilter 
> affects the results.
> {code:java}
>  Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
>   transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18171) Scanning cursor for async client

2017-10-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18171:
--
Fix Version/s: 2.0.0

> Scanning cursor for async client
> 
>
> Key: HBASE-18171
> URL: https://issues.apache.org/jira/browse/HBASE-18171
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-18171) Scanning cursor for async client

2017-10-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-18171.
---
Resolution: Fixed

Resolving as all sub-tasks have been resolved.

> Scanning cursor for async client
> 
>
> Key: HBASE-18171
> URL: https://issues.apache.org/jira/browse/HBASE-18171
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Duo Zhang
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18552) Backport the server side change in HBASE-18489 to branch-1

2017-10-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18552:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-1 and branch-1.4.

Thanks [~apurtell] for reviewing.

> Backport the server side change in HBASE-18489 to branch-1
> --
>
> Key: HBASE-18552
> URL: https://issues.apache.org/jira/browse/HBASE-18552
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 1.4.0, 1.5.0
>
> Attachments: HBASE-18552-branch-1-v1.patch, 
> HBASE-18552-branch-1.patch, HBASE-18552-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18552) Backport the server side change in HBASE-18489 to branch-1

2017-10-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18552:
--
Component/s: scan

> Backport the server side change in HBASE-18489 to branch-1
> --
>
> Key: HBASE-18552
> URL: https://issues.apache.org/jira/browse/HBASE-18552
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 1.4.0, 1.5.0
>
> Attachments: HBASE-18552-branch-1-v1.patch, 
> HBASE-18552-branch-1.patch, HBASE-18552-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18411) Dividing FilterList into two separate sub-classes: FilterListWithOR, FilterListWithAND

2017-10-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201388#comment-16201388
 ] 

Sean Busbey commented on HBASE-18411:
-

There's a version in jira for the feature branch, named HBASE-18410. When we 
merge the branch we can update all the jiras that have that fixVersion to point 
at the right places.

> Dividing FilterList into two separate sub-classes: FilterListWithOR, 
> FilterListWithAND
> 
>
> Key: HBASE-18411
> URL: https://issues.apache.org/jira/browse/HBASE-18411
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: HBASE-18410
>
> Attachments: HBASE-18411-HBASE-18410.v3.patch, 
> HBASE-18411-HBASE-18410.v3.patch, HBASE-18411.v1.patch, HBASE-18411.v1.patch, 
> HBASE-18411.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18411) Dividing FilterList into two separate sub-classes: FilterListWithOR, FilterListWithAND

2017-10-11 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18411:

Fix Version/s: HBASE-18410

> Dividing FilterList into two separate sub-classes: FilterListWithOR, 
> FilterListWithAND
> 
>
> Key: HBASE-18411
> URL: https://issues.apache.org/jira/browse/HBASE-18411
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: HBASE-18410
>
> Attachments: HBASE-18411-HBASE-18410.v3.patch, 
> HBASE-18411-HBASE-18410.v3.patch, HBASE-18411.v1.patch, HBASE-18411.v1.patch, 
> HBASE-18411.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-10367) RegionServer graceful stop / decommissioning

2017-10-11 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10367:
-
Fix Version/s: 2.0.0-alpha-4
   3.0.0
   Status: Patch Available  (was: Open)

> RegionServer graceful stop / decommissioning
> 
>
> Key: HBASE-10367
> URL: https://issues.apache.org/jira/browse/HBASE-10367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Jerry He
> Fix For: 3.0.0, 2.0.0-alpha-4
>
> Attachments: HBASE-10367-master.patch
>
>
> Right now, we have a weird way of node decommissioning / graceful stop, which 
> is a graceful_stop.sh bash script, and a region_mover ruby script, and some 
> draining server support which you have to manually write to a znode 
> (really!). Also draining servers is only partially supported in LB operations 
> (LB does take that into account for roundRobin assignment, but not for normal 
> balance) 
> See 
> http://hbase.apache.org/book/node.management.html and HBASE-3071
> I think we should support graceful stop as a first class citizen. Thinking 
> about it, it seems that the difference between regionserver stop and graceful 
> stop is that regionserver stop will close the regions, but the master will 
> only assign them after the znode is deleted. 
> In the new master design (or even before), if we allow RS to be able to close 
> regions on its own (without master initiating it), then graceful stop becomes 
> regular stop. The RS already closes the regions cleanly, and will reject new 
> region assignments, so that we don't need much of the balancer or draining 
> server trickery. 
> This ties into the new master/AM redesign (HBASE-5487), but still deserves 
> its own jira. Let's use this to brainstorm on the design. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18966) In-memory compaction/merge should update its TimeRange

2017-10-11 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18966:
---
Status: Open  (was: Patch Available)

> In-memory compaction/merge should update its TimeRange
> --
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update its {{TimeRange}}. This doesn't cause 
> any bugs currently, because the {{TimeRange}} is only used for store-level 
> timestamp filtering, and the default {{TimeRange}} of an {{ImmutableSegment}} 
> created by in-memory compaction/merge covers the maximum timestamp range.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18869) Table rpc metrics

2017-10-11 Thread Chenxi Tong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chenxi Tong updated HBASE-18869:

Status: Patch Available  (was: Open)

> Table rpc metrics
> -
>
> Key: HBASE-18869
> URL: https://issues.apache.org/jira/browse/HBASE-18869
> Project: HBase
>  Issue Type: Wish
>  Components: metrics
>Affects Versions: 2.0.0-alpha-1
>Reporter: Chenxi Tong
>Priority: Minor
>  Labels: metrics
> Fix For: 2.0.0-alpha-1
>
> Attachments: HBASE-18869.patch
>
>
> Hi all
> HBASE-15518 uses MetricsTableWrapperAggregateImpl to aggregate each region's 
> totalRequestsCount, readRequestsCount, writeRequestsCount, memstoresSize, 
> storeFilesSize, and tableSize metrics and exports them to JMX. I want to use 
> the rate of totalRequestsCount as the table's RPC request rate, but found it 
> is much bigger than ServerLoad's requestsPerSecond. I am not sure whether this 
> is the correct way to do it; how can we collect each table's RPC requests?
> Best wishes!
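
As an aside, a minimal sketch (hypothetical helper, not part of HBase) of turning a cumulative counter such as totalRequestsCount into a per-second rate by differencing successive samples, since a cumulative count cannot be compared directly with a rate such as requestsPerSecond:

{code:java}
// Hypothetical helper: converts samples of a monotonically increasing counter
// into a per-second rate by differencing against the previous sample.
public class CounterRateSketch {
  private long prevCount = -1;
  private long prevTimeMs;

  /** Returns the per-second rate observed since the previous sample. */
  public synchronized double sample(long currentCount) {
    long now = System.currentTimeMillis();
    double rate = 0.0;
    if (prevCount >= 0 && now > prevTimeMs) {
      rate = (currentCount - prevCount) * 1000.0 / (now - prevTimeMs);
    }
    prevCount = currentCount;
    prevTimeMs = now;
    return rate;
  }
}
{code}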



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17369) Add ACL to the new region server drain related API

2017-10-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201381#comment-16201381
 ] 

Jerry He commented on HBASE-17369:
--

The patch on HBASE-10367 has added the ACL.

> Add ACL to the new region server drain related API
> --
>
> Key: HBASE-17369
> URL: https://issues.apache.org/jira/browse/HBASE-17369
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Jerry He
>Priority: Critical
>
> Add ACL to the new region server drain related API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10367) RegionServer graceful stop / decommissioning

2017-10-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201377#comment-16201377
 ] 

Jerry He commented on HBASE-10367:
--

Continuing the work from HBASE-16010. See the related comments on that issue.
Attached a patch to do 'decommission'.

> RegionServer graceful stop / decommissioning
> 
>
> Key: HBASE-10367
> URL: https://issues.apache.org/jira/browse/HBASE-10367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Jerry He
>
> Right now, we have a weird way of node decommissioning / graceful stop, which 
> is a graceful_stop.sh bash script, and a region_mover ruby script, and some 
> draining server support which you have to manually write to a znode 
> (really!). Also draining servers is only partially supported in LB operations 
> (LB does take that into account for roundRobin assignment, but not for normal 
> balance) 
> See 
> http://hbase.apache.org/book/node.management.html and HBASE-3071
> I think we should support graceful stop as a first class citizen. Thinking 
> about it, it seems that the difference between regionserver stop and graceful 
> stop is that regionserver stop will close the regions, but the master will 
> only assign them after the znode is deleted. 
> In the new master design (or even before), if we allow RS to be able to close 
> regions on its own (without master initiating it), then graceful stop becomes 
> regular stop. The RS already closes the regions cleanly, and will reject new 
> region assignments, so that we don't need much of the balancer or draining 
> server trickery. 
> This ties into the new master/AM redesign (HBASE-5487), but still deserves 
> its own jira. Let's use this to brainstorm on the design. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-10367) RegionServer graceful stop / decommissioning

2017-10-11 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10367:
-
Attachment: HBASE-10367-master.patch

> RegionServer graceful stop / decommissioning
> 
>
> Key: HBASE-10367
> URL: https://issues.apache.org/jira/browse/HBASE-10367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Jerry He
> Attachments: HBASE-10367-master.patch
>
>
> Right now, we have a weird way of node decommissioning / graceful stop, which 
> is a graceful_stop.sh bash script, and a region_mover ruby script, and some 
> draining server support which you have to manually write to a znode 
> (really!). Also draining servers is only partially supported in LB operations 
> (LB does take that into account for roundRobin assignment, but not for normal 
> balance) 
> See 
> http://hbase.apache.org/book/node.management.html and HBASE-3071
> I think we should support graceful stop as a first class citizen. Thinking 
> about it, it seems that the difference between regionserver stop and graceful 
> stop is that regionserver stop will close the regions, but the master will 
> only assign them after the znode is deleted. 
> In the new master design (or even before), if we allow RS to be able to close 
> regions on its own (without master initiating it), then graceful stop becomes 
> regular stop. The RS already closes the regions cleanly, and will reject new 
> region assignments, so that we don't need much of the balancer or draining 
> server trickery. 
> This ties into the new master/AM redesign (HBASE-5487), but still deserves 
> its own jira. Let's use this to brainstorm on the design. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18879) HBase FilterList cause KeyOnlyFilter not work

2017-10-11 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201378#comment-16201378
 ] 

Zheng Hu commented on HBASE-18879:
--

The refactoring of FilterList has been committed to branch HBASE-18410 by now, 
and I have uploaded HBASE-18879-HBASE-18410.v1.patch to fix this bug.  [~pengxu], 
[~anoop.hbase], [~busbey], [~psomogyi]. 

> HBase FilterList cause KeyOnlyFilter not work
> -
>
> Key: HBASE-18879
> URL: https://issues.apache.org/jira/browse/HBASE-18879
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 1.2.4
> Environment: OS: Red Hat 4.4.7-11
> Hadoop: 2.6.4
> Hbase: 1.2.4
>Reporter: ZHA_Moonlight
>Assignee: Zheng Hu
> Attachments: HBASE-18879-HBASE-18410.v1.patch
>
>
> When using FilterList and KeyOnlyFilter together, if we put the KeyOnlyFilter 
> before the FilterList, the KeyOnlyFilter may not work, meaning it will also 
> return the cell values:
> {code:java}
> List<Filter> filters = new ArrayList<>();
> Filter filter1 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value1"));
> Filter filter2 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value2"));
> filters.add(filter1);
> filters.add(filter2);
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL, 
> new KeyOnlyFilter(),
> new FilterList(Operator.MUST_PASS_ONE, filters));
> {code}
> Using the above code as the filter for a table scan, it returns the cells with 
> their values instead of only the keys. If we put the KeyOnlyFilter after the 
> FilterList as follows, it works well.
>   
> {code:java}
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL,
> new FilterList(Operator.MUST_PASS_ONE, filters),
> new KeyOnlyFilter());
> {code}
> The cause should be due to the following code in hbase-client FilterList.java:
> {code:java}
> @Override
>   
> @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SF_SWITCH_FALLTHROUGH",
> justification="Intentional")
>   public ReturnCode filterKeyValue(Cell v) throws IOException {
> this.referenceKV = v;
> // Accumulates successive transformation of every filter that includes 
> the Cell:
> Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> LINE1  ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
> LINE2  transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   }
> {code}
> Notice "LINE1" and "LINE2". The first line is a recursive invocation: it 
> assigns a Cell result to FilterList.transformedKV (call it A), and that 
> result comes from the inner FilterList with the two SingleColumnValueFilters, 
> so A still contains the cell value. The second line then returns A to the 
> variable transformed.
> Back in the following loop, the result the FilterList returns is the variable 
> "transformed", which is overwritten on each iteration, so the value is 
> determined by the last filter; this is why the position of the KeyOnlyFilter 
> affects the results.
> {code:java}
>  Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
>   transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> 

[jira] [Updated] (HBASE-18869) Table rpc metrics

2017-10-11 Thread Chenxi Tong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chenxi Tong updated HBASE-18869:

Status: Open  (was: Patch Available)

> Table rpc metrics
> -
>
> Key: HBASE-18869
> URL: https://issues.apache.org/jira/browse/HBASE-18869
> Project: HBase
>  Issue Type: Wish
>  Components: metrics
>Affects Versions: 2.0.0-alpha-1
>Reporter: Chenxi Tong
>Priority: Minor
>  Labels: metrics
> Fix For: 2.0.0-alpha-1
>
> Attachments: HBASE-18869.patch
>
>
> Hi all
> HBASE-15518 uses MetricsTableWrapperAggregateImpl to aggregate each region's 
> totalRequestsCount, readRequestsCount, writeRequestsCount, memstoresSize, 
> storeFilesSize, and tableSize metrics and exports them to JMX. I want to use 
> the rate of totalRequestsCount as the table's RPC request rate, but found it 
> is much bigger than ServerLoad's requestsPerSecond. I am not sure whether this 
> is the correct way to do it; how can we collect each table's RPC requests?
> Best wishes!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18108) Procedure WALs are archived but not cleaned; fix

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201371#comment-16201371
 ] 

Hudson commented on HBASE-18108:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3870 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3870/])
HBASE-18108 Procedure WALs are archived but not cleaned; fix (stack: rev 
023d4f1ae8081da3cb9ff54e6b2e545799704ce7)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/BaseFileCleanerDelegate.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/TimeToLiveLogCleaner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureUtil.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/TimeToLiveProcedureWALCleaner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
* (edit) hbase-common/src/main/resources/hbase-default.xml
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/LogCleaner.java


> Procedure WALs are archived but not cleaned; fix
> 
>
> Key: HBASE-18108
> URL: https://issues.apache.org/jira/browse/HBASE-18108
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Peter Somogyi
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18108.master.001.patch, 
> HBASE-18108.master.002.patch, HBASE-18108.master.003.patch, 
> HBASE-18108.master.004.patch, HBASE-18108.master.004.patch, 
> HBASE-18108.master.005.patch
>
>
> The Procedure WAL files used to be deleted when done. HBASE-14614 keeps them 
> around in case of issues, but what is missing is a GC for no-longer-needed WAL 
> files. This one is pretty important.
> From WALProcedureStore Cleaner TODO in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.r2pc835nb7vi
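
For orientation, a minimal sketch in the spirit of the TimeToLiveProcedureWALCleaner added by this change (illustrative only: the configuration key, default, and Stoppable handling here are assumptions, not the committed code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hbase.master.cleaner.BaseFileCleanerDelegate;

// Illustrative TTL-based cleaner delegate: archived procedure WAL files older
// than a configured TTL become deletable. Config key and default are made up.
public class TtlProcedureWalCleanerSketch extends BaseFileCleanerDelegate {
  private long ttlMs = 604_800_000L; // hypothetical default: one week
  private volatile boolean stopped = false;

  @Override
  public void setConf(Configuration conf) {
    super.setConf(conf);
    this.ttlMs = conf.getLong("hbase.sketch.procedurewal.ttl.ms", ttlMs);
  }

  @Override
  protected boolean isFileDeletable(FileStatus fStat) {
    long age = System.currentTimeMillis() - fStat.getModificationTime();
    return age > ttlMs;
  }

  @Override
  public void stop(String why) {
    this.stopped = true;
  }

  @Override
  public boolean isStopped() {
    return stopped;
  }
}
{code}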



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201370#comment-16201370
 ] 

Hudson commented on HBASE-18975:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3870 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3870/])
HBASE-18975 Fix backup / restore hadoop3 incompatibility (Vladimir (tedyu: rev 
c4ced0b3d50002b73c7dc3121b08d97afbe8e97b)
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/mapreduce/MapReduceBackupCopyJob.java


> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  
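
For illustration, a hedged sketch of defensive reflection for this kind of breakage. "inputOptions" is the Hadoop 2 field name from the stack trace above; the fallback name used here is an assumption about Hadoop 3, not a verified API:

{code:java}
import java.lang.reflect.Field;

import org.apache.hadoop.tools.DistCp;

// Illustrative only: try the Hadoop 2 private field name first and fall back
// to an assumed Hadoop 3 name instead of failing with NoSuchFieldException.
public final class DistCpFieldLookupSketch {
  static Field findOptionsField() throws NoSuchFieldException {
    Field field;
    try {
      field = DistCp.class.getDeclaredField("inputOptions"); // Hadoop 2.x
    } catch (NoSuchFieldException e) {
      field = DistCp.class.getDeclaredField("context"); // assumed Hadoop 3.x replacement
    }
    field.setAccessible(true);
    return field;
  }
}
{code}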



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18879) HBase FilterList cause KeyOnlyFilter not work

2017-10-11 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18879:
-
Assignee: Zheng Hu
  Status: Patch Available  (was: Open)

> HBase FilterList cause KeyOnlyFilter not work
> -
>
> Key: HBASE-18879
> URL: https://issues.apache.org/jira/browse/HBASE-18879
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 1.2.4
> Environment: OS: Red Hat 4.4.7-11
> Hadoop: 2.6.4
> Hbase: 1.2.4
>Reporter: ZHA_Moonlight
>Assignee: Zheng Hu
> Attachments: HBASE-18879-HBASE-18410.v1.patch
>
>
> When using FilterList and KeyOnlyFilter together, if we put the KeyOnlyFilter 
> before the FilterList, the KeyOnlyFilter may not work, meaning it will also 
> return the cell values:
> {code:java}
> List<Filter> filters = new ArrayList<>();
> Filter filter1 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value1"));
> Filter filter2 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value2"));
> filters.add(filter1);
> filters.add(filter2);
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL, 
> new KeyOnlyFilter(),
> new FilterList(Operator.MUST_PASS_ONE, filters));
> {code}
> Using the above code as the filter for a table scan, it returns the cells with 
> their values instead of only the keys. If we put the KeyOnlyFilter after the 
> FilterList as follows, it works well.
>   
> {code:java}
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL,
> new FilterList(Operator.MUST_PASS_ONE, filters),
> new KeyOnlyFilter());
> {code}
> The cause should be due to the following code in hbase-client FilterList.java:
> {code:java}
> @Override
>   
> @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SF_SWITCH_FALLTHROUGH",
> justification="Intentional")
>   public ReturnCode filterKeyValue(Cell v) throws IOException {
> this.referenceKV = v;
> // Accumulates successive transformation of every filter that includes 
> the Cell:
> Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> LINE1  ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
> LINE2  transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   }
> {code}
> Notice "LINE1" and "LINE2". The first line is a recursive invocation: it 
> assigns a Cell result to FilterList.transformedKV (call it A), and that 
> result comes from the inner FilterList with the two SingleColumnValueFilters, 
> so A still contains the cell value. The second line then returns A to the 
> variable transformed.
> Back in the following loop, the result the FilterList returns is the variable 
> "transformed", which is overwritten on each iteration, so the value is 
> determined by the last filter; this is why the position of the KeyOnlyFilter 
> affects the results.
> {code:java}
>  Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
>   transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18879) HBase FilterList cause KeyOnlyFilter not work

2017-10-11 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18879:
-
Summary: HBase FilterList cause KeyOnlyFilter not work  (was: Hbase 
FilterList cause KeyOnlyFilter not work)

> HBase FilterList cause KeyOnlyFilter not work
> -
>
> Key: HBASE-18879
> URL: https://issues.apache.org/jira/browse/HBASE-18879
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 1.2.4
> Environment: OS: Red Hat 4.4.7-11
> Hadoop: 2.6.4
> Hbase: 1.2.4
>Reporter: ZHA_Moonlight
> Attachments: HBASE-18879-HBASE-18410.v1.patch
>
>
> When using FilterList and KeyOnlyFilter together, if we put the KeyOnlyFilter 
> before the FilterList, the KeyOnlyFilter may not work, meaning it will also 
> return the cell values:
> {code:java}
> List<Filter> filters = new ArrayList<>();
> Filter filter1 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value1"));
> Filter filter2 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value2"));
> filters.add(filter1);
> filters.add(filter2);
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL, 
> new KeyOnlyFilter(),
> new FilterList(Operator.MUST_PASS_ONE, filters));
> {code}
> Using the above code as the filter for a table scan, it returns the cells with 
> their values instead of only the keys. If we put the KeyOnlyFilter after the 
> FilterList as follows, it works well.
>   
> {code:java}
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL,
> new FilterList(Operator.MUST_PASS_ONE, filters),
> new KeyOnlyFilter());
> {code}
> The cause should be due to the following code in hbase-client FilterList.java:
> {code:java}
> @Override
>   
> @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SF_SWITCH_FALLTHROUGH",
> justification="Intentional")
>   public ReturnCode filterKeyValue(Cell v) throws IOException {
> this.referenceKV = v;
> // Accumulates successive transformation of every filter that includes 
> the Cell:
> Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> LINE1  ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
> LINE2  transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   }
> {code}
> Notice "LINE1" and "LINE2". The first line is a recursive invocation: it 
> assigns a Cell result to FilterList.transformedKV (call it A), and that 
> result comes from the inner FilterList with the two SingleColumnValueFilters, 
> so A still contains the cell value. The second line then returns A to the 
> variable transformed.
> Back in the following loop, the result the FilterList returns is the variable 
> "transformed", which is overwritten on each iteration, so the value is 
> determined by the last filter; this is why the position of the KeyOnlyFilter 
> affects the results.
> {code:java}
>  Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
>   transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18879) Hbase FilterList cause KeyOnlyFilter not work

2017-10-11 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18879:
-
Attachment: HBASE-18879-HBASE-18410.v1.patch

> Hbase FilterList cause KeyOnlyFilter not work
> -
>
> Key: HBASE-18879
> URL: https://issues.apache.org/jira/browse/HBASE-18879
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Affects Versions: 1.2.4
> Environment: OS: Red Hat 4.4.7-11
> Hadoop: 2.6.4
> Hbase: 1.2.4
>Reporter: ZHA_Moonlight
> Attachments: HBASE-18879-HBASE-18410.v1.patch
>
>
> When using FilterList and KeyOnlyFilter together, if we put the KeyOnlyFilter 
> before the FilterList, the KeyOnlyFilter may not work, meaning it will also 
> return the cell values:
> {code:java}
> List<Filter> filters = new ArrayList<>();
> Filter filter1 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value1"));
> Filter filter2 = new SingleColumnValueFilter(Bytes.toBytes("cf"), 
> Bytes.toBytes("column1"),
> CompareOp.EQUAL, Bytes.toBytes("value2"));
> filters.add(filter1);
> filters.add(filter2);
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL, 
> new KeyOnlyFilter(),
> new FilterList(Operator.MUST_PASS_ONE, filters));
> {code}
> Using the above code as the filter for a table scan, it returns the cells with 
> their values instead of only the keys. If we put the KeyOnlyFilter after the 
> FilterList as follows, it works well.
>   
> {code:java}
> FilterList filterListAll = new FilterList(Operator.MUST_PASS_ALL,
> new FilterList(Operator.MUST_PASS_ONE, filters),
> new KeyOnlyFilter());
> {code}
> The cause should be due to the following code in hbase-client FilterList.java:
> {code:java}
> @Override
>   
> @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SF_SWITCH_FALLTHROUGH",
> justification="Intentional")
>   public ReturnCode filterKeyValue(Cell v) throws IOException {
> this.referenceKV = v;
> // Accumulates successive transformation of every filter that includes 
> the Cell:
> Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> LINE1  ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
> LINE2  transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   }
> {code}
> Notice "LINE1" and "LINE2". The first line is a recursive invocation: it 
> assigns a Cell result to FilterList.transformedKV (call it A), and that 
> result comes from the inner FilterList with the two SingleColumnValueFilters, 
> so A still contains the cell value. The second line then returns A to the 
> variable transformed.
> Back in the following loop, the result the FilterList returns is the variable 
> "transformed", which is overwritten on each iteration, so the value is 
> determined by the last filter; this is why the position of the KeyOnlyFilter 
> affects the results.
> {code:java}
>  Cell transformed = v;
> ReturnCode rc = operator == Operator.MUST_PASS_ONE?
> ReturnCode.SKIP: ReturnCode.INCLUDE;
> int listize = filters.size();
> for (int i = 0; i < listize; i++) {
>   Filter filter = filters.get(i);
>   if (operator == Operator.MUST_PASS_ALL) {
> if (filter.filterAllRemaining()) {
>   return ReturnCode.NEXT_ROW;
> }
> ReturnCode code = filter.filterKeyValue(v);
> switch (code) {
> // Override INCLUDE and continue to evaluate.
> case INCLUDE_AND_NEXT_COL:
>   rc = ReturnCode.INCLUDE_AND_NEXT_COL; // FindBugs 
> SF_SWITCH_FALLTHROUGH
> case INCLUDE:
>   transformed = filter.transformCell(transformed);
>   continue;
> case SEEK_NEXT_USING_HINT:
>   seekHintFilter = filter;
>   return code;
> default:
>   return code;
> }
>   
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201361#comment-16201361
 ] 

Hadoop QA commented on HBASE-18624:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
42s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
43s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 19s{color} | 
{color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 43s{color} | 
{color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 43s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 5 new + 358 unchanged - 1 fixed = 
363 total (was 359) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m  
5s{color} | {color:red} The patch generated 3 new + 740 unchanged - 0 fixed = 
743 total (was 740) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  1m 
34s{color} | {color:red} patch has 14 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  2m 
50s{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  3m 
52s{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  5m  
1s{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
12s{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | 

[jira] [Commented] (HBASE-18873) Hide protobufs in GlobalQuotaSettings

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201360#comment-16201360
 ] 

Hadoop QA commented on HBASE-18873:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
 5s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 7s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
70m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 11s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 
|
| Timed out junit tests | org.apache.hadoop.hbase.regionserver.TestCompaction |
|   | org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries |
|   | org.apache.hadoop.hbase.TestHBaseTestingUtility |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook |
|   | org.apache.hadoop.hbase.wal.TestWALFiltering |
|   | org.apache.hadoop.hbase.regionserver.TestRegionReplicas |
|   | org.apache.hadoop.hbase.quotas.TestRegionSizeUse |
|   | org.apache.hadoop.hbase.regionserver.TestRegionServerAbort |
|   | org.apache.hadoop.hbase.ipc.TestRpcServerSlowConnectionSetup |
|   | org.apache.hadoop.hbase.regionserver.wal.TestLogRollAbort |
|   | org.apache.hadoop.hbase.util.TestHBaseFsckEncryption |
|   | org.apache.hadoop.hbase.TestClusterBootOrder |
|   | org.apache.hadoop.hbase.TestJMXConnectorServer |
|   | org.apache.hadoop.hbase.util.TestMiniClusterLoadEncoded |
|   | org.apache.hadoop.hbase.io.encoding.TestDataBlockEncoders |
|   

[jira] [Commented] (HBASE-18824) Add meaningful comment to HConstants.LATEST_TIMESTAMP to explain why it is MAX_VALUE

2017-10-11 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201355#comment-16201355
 ] 

Xiang Li commented on HBASE-18824:
--

[~chia7712], thanks for the guidance!
I read the code of {{HRegion#updateDeleteLatestVersionTimeStamp()}} as well as
{{HRegion#prepareDeleteTimestamps()}}, and step 2 of
{{HRegion#doMiniBatchMutate()}}.

The code calls prepareDeleteTimestamps() when the mutation is not a Put (it could
be a Delete, but could it also be an Increment or Append?). It loops over each
column family's cell list, and then over each cell.

It behaves differently in two cases: one for Type.Delete, the other for
Type.DeleteFamily, Type.DeleteFamilyVersion or Type.DeleteColumn.
* For Type.Delete
** When the size of the Get's result is greater than the count (to be deleted),
the code updates the timestamp to the server's current time
** When the size of the Get's result is equal to the count (to be deleted), the
code updates the timestamp to that of the latest version returned by the Get (why?)
* For the other types (Type.DeleteFamily, Type.DeleteFamilyVersion or
Type.DeleteColumn), it updates the timestamp to the server's current time when
it is LATEST_TIMESTAMP.

Do I get it correctly?
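
Restating that branching as pseudo-code (the helper {{setTimestamp()}} and the local variables below are illustrative stand-ins, not the actual HRegion code):

{code:java}
// Illustrative pseudo-code only; setTimestamp() and the local variables are
// stand-ins, not the actual HRegion implementation.
if (cell.getTimestamp() == HConstants.LATEST_TIMESTAMP) {
  if (CellUtil.isDeleteType(cell)) {                         // Type.Delete
    if (existingVersionCount > versionsToDelete) {
      setTimestamp(cell, serverCurrentTimeMillis);           // more versions than requested
    } else {                                                 // exactly as many versions
      setTimestamp(cell, newestExistingCell.getTimestamp()); // reuse the latest version's ts
    }
  } else {
    // Type.DeleteFamily / Type.DeleteFamilyVersion / Type.DeleteColumn
    setTimestamp(cell, serverCurrentTimeMillis);
  }
}
{code}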

> Add meaningful comment to HConstants.LATEST_TIMESTAMP to explain why it is 
> MAX_VALUE
> 
>
> Key: HBASE-18824
> URL: https://issues.apache.org/jira/browse/HBASE-18824
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18824.master.000.patch, 
> HBASE-18824.master.001.patch
>
>
> Thanks to [Jerry and Chia-Ping Tsai's 
> comments|https://issues.apache.org/jira/browse/HBASE-18824?focusedCommentId=16167392=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16167392]
>  for correcting my wrong understanding. 
> The following documentation says that by default(when the timestamp is not 
> specified for Put or Delete), system uses the server's {{currentTimeMillis}}.
> 1. In chapter 27.2.4 Put 
> bq. Doing a put always creates a new version of a cell, at a certain 
> timestamp. {color:#205081}By default the system uses the server’s 
> currentTimeMillis{color}, ...
> 2. In chapter 27.2.5 Delete
> bq. Deletes work by creating tombstone markers. For example, let’s suppose we 
> want to delete a row. For this you can specify a version, or else 
> {color:#205081}by default the currentTimeMillis is used.{color}...
> This seems inconsistent with the code, because in the client-side code, when 
> the timestamp is not specified, HConstants.LATEST_TIMESTAMP is used, which 
> is Long.MAX_VALUE, rather than {{System.currentTimeMillis()}}.
> However, the documentation is correct, because on the server side the timestamp 
> of a Put cell carrying HConstants.LATEST_TIMESTAMP will be replaced with the 
> server's {{currentTimeMillis}}.
> So we decided to add more comments to HConstants.LATEST_TIMESTAMP to help 
> newcomers steer clear of the confusion.
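
A minimal client-side illustration of that behaviour (row, family and qualifier names are made up):

{code:java}
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

Put put = new Put(Bytes.toBytes("row1"));
// No timestamp supplied, so on the client side the cell carries
// HConstants.LATEST_TIMESTAMP (Long.MAX_VALUE)...
put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
// ...and the region server substitutes its currentTimeMillis when applying the Put.
{code}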



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18411) Dividing FiterList into two separate sub-classes: FilterListWithOR , FilterListWithAND

2017-10-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18411:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch HBASE-18410. Thanks [~openinx].

And [~busbey], how should we set fix versions for this issue? Or is there a label or 
tag or something to indicate that the patch is committed to the feature branch 
HBASE-18410?

Thanks.

> Dividing FiterList  into two separate sub-classes:  FilterListWithOR , 
> FilterListWithAND
> 
>
> Key: HBASE-18411
> URL: https://issues.apache.org/jira/browse/HBASE-18411
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18411-HBASE-18410.v3.patch, 
> HBASE-18411-HBASE-18410.v3.patch, HBASE-18411.v1.patch, HBASE-18411.v1.patch, 
> HBASE-18411.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18411) Dividing FiterList into two separate sub-classes: FilterListWithOR , FilterListWithAND

2017-10-11 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201331#comment-16201331
 ] 

Zheng Hu commented on HBASE-18411:
--

Thanks [~psomogyi],   Ping [~busbey],  [~Apache9]

> Dividing FiterList  into two separate sub-classes:  FilterListWithOR , 
> FilterListWithAND
> 
>
> Key: HBASE-18411
> URL: https://issues.apache.org/jira/browse/HBASE-18411
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18411-HBASE-18410.v3.patch, 
> HBASE-18411-HBASE-18410.v3.patch, HBASE-18411.v1.patch, HBASE-18411.v1.patch, 
> HBASE-18411.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18624:
--
Status: Patch Available  (was: In Progress)

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch, 
> HBASE-18624.master.003.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18624:
--
Status: In Progress  (was: Patch Available)

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch, 
> HBASE-18624.master.003.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201325#comment-16201325
 ] 

Zach York commented on HBASE-18624:
---

[~tedyu] the latest patch adds a return value.

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch, 
> HBASE-18624.master.003.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18624:
--
Attachment: HBASE-18624.master.003.patch

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch, 
> HBASE-18624.master.003.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201317#comment-16201317
 ] 

Duo Zhang commented on HBASE-18747:
---

I think the failed UT is not related. [~stack] [~anoopsamjohn] What do you guys 
think of the new example?

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a filter 
> when constructing a {{StoreScanner}}.
> But I think filtering out some cells is a very important use case for CP users, 
> so we need to provide the ability in another way. Theoretically it can be 
> done by wrapping an {{InternalScanner}}, but I think we need to give an 
> example, or even some helper classes, to help CP users.
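
A rough sketch of the wrapping idea, assuming Java 8 and the two {{next}} overloads on {{InternalScanner}} (one taking a {{ScannerContext}}); the {{Predicate}} is just a stand-in for whatever per-cell check the CP wants, and this is not the example class added by this issue:

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.function.Predicate;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScannerContext;

public class FilteringInternalScanner implements InternalScanner {
  private final InternalScanner delegate;
  private final Predicate<Cell> keep; // stand-in for the CP's per-cell check

  public FilteringInternalScanner(InternalScanner delegate, Predicate<Cell> keep) {
    this.delegate = delegate;
    this.keep = keep;
  }

  @Override
  public boolean next(List<Cell> result) throws IOException {
    boolean more = delegate.next(result);
    result.removeIf(cell -> !keep.test(cell)); // drop cells the CP wants filtered out
    return more;
  }

  @Override
  public boolean next(List<Cell> result, ScannerContext scannerContext) throws IOException {
    boolean more = delegate.next(result, scannerContext);
    result.removeIf(cell -> !keep.test(cell));
    return more;
  }

  @Override
  public void close() throws IOException {
    delegate.close();
  }
}
{code}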



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201316#comment-16201316
 ] 

Hudson commented on HBASE-18874:


FAILURE: Integrated in Jenkins build HBase-1.4 #950 (See 
[https://builds.apache.org/job/HBase-1.4/950/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev 22e2539d0cc4c987e6871a7cd4945f3f25d48774)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> HMaster abort message will be skipped if Throwable is passed null
> -
>
> Key: HBASE-18874
> URL: https://issues.apache.org/jira/browse/HBASE-18874
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18874-branch-1.patch, HBASE-18874.patch
>
>
> In the HMaster class, we log the abort message only when the Throwable is 
> not null:
> {noformat}
> if (t != null) LOG.fatal(msg, t);
> {noformat}
> We will miss the abort message when the Throwable is null.
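
A minimal sketch of the behaviour being asked for (illustrative only, not the committed patch):

{code:java}
// Log the abort message even when no Throwable is supplied.
if (t != null) {
  LOG.fatal(msg, t);
} else {
  LOG.fatal(msg);
}
{code}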



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18959) Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201315#comment-16201315
 ] 

Hudson commented on HBASE-18959:


FAILURE: Integrated in Jenkins build HBase-1.4 #950 (See 
[https://builds.apache.org/job/HBase-1.4/950/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev 22e2539d0cc4c987e6871a7cd4945f3f25d48774)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is 
> passed null) to branch-1
> 
>
> Key: HBASE-18959
> URL: https://issues.apache.org/jira/browse/HBASE-18959
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.2, 1.5.0, 1.2.7
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.4.0, 1.3.2, 1.5.0
>
> Attachments: HBASE-18959-branch-1.patch
>
>
> Backport HBASE-18874 to branch-1/1.4/1.3/1.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18552) Backport the server side change in HBASE-18489 to branch-1

2017-10-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201314#comment-16201314
 ] 

Duo Zhang commented on HBASE-18552:
---

OK, Thanks. Let me commit then.

> Backport the server side change in HBASE-18489 to branch-1
> --
>
> Key: HBASE-18552
> URL: https://issues.apache.org/jira/browse/HBASE-18552
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 1.4.0, 1.5.0
>
> Attachments: HBASE-18552-branch-1-v1.patch, 
> HBASE-18552-branch-1.patch, HBASE-18552-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18986) Remove unnecessary null check after CellUtil.cloneQualifier()

2017-10-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18986:
-
Status: Open  (was: Patch Available)

> Remove unnecessary null check after CellUtil.cloneQualifier()
> -
>
> Key: HBASE-18986
> URL: https://issues.apache.org/jira/browse/HBASE-18986
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18986.master.000.patch
>
>
> In master branch,
> {code:title=hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java|borderStyle=solid}
> // From line 2858
> public void prepareDeleteTimestamps(Mutation mutation, Map<byte[], List<Cell>> familyMap,
>   byte[] byteNow) throws IOException {
> for (Map.Entry<byte[], List<Cell>> e : familyMap.entrySet()) {
>   // ...
>   for (int i=0; i < listSize; i++) {
> // ...
> if (cell.getTimestamp() == HConstants.LATEST_TIMESTAMP && 
> CellUtil.isDeleteType(cell)) {
>   byte[] qual = CellUtil.cloneQualifier(cell);
>   if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY; // <-- here
>   ...
> {code}
> Might {{if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY;}} be removed?
> Could it be null after CellUtil.cloneQualifier()?
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java|borderStyle=solid}
> public static byte[] cloneQualifier(Cell cell){
>   byte[] output = new byte[cell.getQualifierLength()];
>   copyQualifierTo(cell, output, 0);
>   return output;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18986) Remove unnecessary null check after CellUtil.cloneQualifier()

2017-10-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18986:
-
Status: Patch Available  (was: Open)

> Remove unnecessary null check after CellUtil.cloneQualifier()
> -
>
> Key: HBASE-18986
> URL: https://issues.apache.org/jira/browse/HBASE-18986
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18986.master.000.patch
>
>
> In master branch,
> {code:title=hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java|borderStyle=solid}
> // From line 2858
> public void prepareDeleteTimestamps(Mutation mutation, Map<byte[], List<Cell>> familyMap,
>   byte[] byteNow) throws IOException {
> for (Map.Entry<byte[], List<Cell>> e : familyMap.entrySet()) {
>   // ...
>   for (int i=0; i < listSize; i++) {
> // ...
> if (cell.getTimestamp() == HConstants.LATEST_TIMESTAMP && 
> CellUtil.isDeleteType(cell)) {
>   byte[] qual = CellUtil.cloneQualifier(cell);
>   if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY; // <-- here
>   ...
> {code}
> Might {{if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY;}} be removed?
> Could it be null after CellUtil.cloneQualifier()?
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java|borderStyle=solid}
> public static byte[] cloneQualifier(Cell cell){
>   byte[] output = new byte[cell.getQualifierLength()];
>   copyQualifierTo(cell, output, 0);
>   return output;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18975:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-alpha-4
   Status: Resolved  (was: Patch Available)

> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  
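
A generic illustration of guarding such a reflective lookup against a renamed field (this is not the actual MapReduceBackupCopyJob fix, and the fallback field name "context" is only an assumption):

{code:java}
import java.lang.reflect.Field;
import java.util.Arrays;

static Field findField(Class<?> clazz, String... candidateNames) {
  for (String name : candidateNames) {
    try {
      Field f = clazz.getDeclaredField(name);
      f.setAccessible(true);
      return f;
    } catch (NoSuchFieldException e) {
      // field not present under this name; try the next candidate
    }
  }
  throw new IllegalStateException(
      "None of " + Arrays.toString(candidateNames) + " found on " + clazz.getName());
}

// Hypothetical usage: Field options = findField(DistCp.class, "inputOptions", "context");
{code}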



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201288#comment-16201288
 ] 

Hudson commented on HBASE-18975:


FAILURE: Integrated in Jenkins build HBase-2.0 #669 (See 
[https://builds.apache.org/job/HBase-2.0/669/])
HBASE-18975 Fix backup / restore hadoop3 incompatibility (Vladimir (tedyu: rev 
19336cadce4874c172d9ecee89c089a95a8a3cd2)
* (edit) 
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/mapreduce/MapReduceBackupCopyJob.java


> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18108) Procedure WALs are archived but not cleaned; fix

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201289#comment-16201289
 ] 

Hudson commented on HBASE-18108:


FAILURE: Integrated in Jenkins build HBase-2.0 #669 (See 
[https://builds.apache.org/job/HBase-2.0/669/])
HBASE-18108 Procedure WALs are archived but not cleaned; fix (stack: rev 
507a3f94250e7102aa6081b99a658d5b0f9d3991)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/LogCleaner.java
* (edit) hbase-common/src/main/resources/hbase-default.xml
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/BaseFileCleanerDelegate.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/TimeToLiveLogCleaner.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/TimeToLiveProcedureWALCleaner.java


> Procedure WALs are archived but not cleaned; fix
> 
>
> Key: HBASE-18108
> URL: https://issues.apache.org/jira/browse/HBASE-18108
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Peter Somogyi
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18108.master.001.patch, 
> HBASE-18108.master.002.patch, HBASE-18108.master.003.patch, 
> HBASE-18108.master.004.patch, HBASE-18108.master.004.patch, 
> HBASE-18108.master.005.patch
>
>
> The Procedure WAL files used to be deleted when done. HBASE-14614 keeps them 
> around in case of issues, but what is missing is a GC for no-longer-needed WAL 
> files. This one is pretty important.
> From WALProcedureStore Cleaner TODO in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.r2pc835nb7vi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18951) Use Builder pattern to remove nullable parameters for checkAndXXX methods in RawAsyncTable/AsyncTable interface

2017-10-11 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201273#comment-16201273
 ] 

Appy commented on HBASE-18951:
--

This was in my pipeline for review, but it got delayed. Sorry for the late review.
Great pattern! I like it [~Apache9].

During review I wondered for 10 minutes: there is *if* in "ifNotExists" and 
"ifMatches", but I don't see any booleans/conditions anywhere :)
Thinking about it, names like {{setConditionValueEquals}} and 
{{setConditionColumnNotExists}} would have suggested that we are just setting 
conditions/requirements here. Then looking at the builder code and seeing no 
branching (which any *if* makes us expect) would have made it clear that these 
{{op}} and {{value}} assignments are simply setting up parameters for conditions 
that are evaluated later.



> Use Builder pattern to remove nullable parameters for checkAndXXX methods in 
> RawAsyncTable/AsyncTable interface
> ---
>
> Key: HBASE-18951
> URL: https://issues.apache.org/jira/browse/HBASE-18951
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18951-v1.patch, HBASE-18951.patch
>
>
> Optional is not supposed to be used for method parameters, but we do not 
> want nullable parameters either.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201257#comment-16201257
 ] 

Hadoop QA commented on HBASE-18775:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
50s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} HBASE-18477 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} HBASE-18477 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} HBASE-18477 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} HBASE-18477 passed {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  5m 
40s{color} | {color:red} branch has 13 errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} HBASE-18477 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HBASE-18477 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  3m 
52s{color} | {color:red} patch has 13 errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
40m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 20s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestCheckTestClasses |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18775 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891576/HBASE-18775.HBASE-18477.005.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux fb6facace0f3 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality 

[jira] [Commented] (HBASE-18961) doMiniBatchMutate() is big, split it into smaller methods

2017-10-11 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201195#comment-16201195
 ] 

Appy commented on HBASE-18961:
--

Posting improvement suggestions from HBASE-18960 here since we don't want to 
block that and keep things moving:
- This is used so often. Probably add a function 
{{batchOp.isOperationPending(i)}} for it (see the sketch after this list).
{noformat}
if (batchOp.retCodeDetails[lastIndexExclusive].getOperationStatusCode() != 
OperationStatusCode.NOT_RUN) {
{noformat}
- "writeEntry" only seems to be used for the non-replay case. Can we rename it to 
make that explicit? And return null from doWALAppend() when it's replay mode?
- In all earlier cases of doWALAppend(), the WALEdit was checked to be non-empty. 
Maybe keep that invariant (and even add a precondition check for it in the 
function). Seeing {{if (walEdit.isEmpty()}} in the function made me search 
everywhere for what happens if it is empty.
- Move writeRequestsCount to doMiniBatchMutation(). Right now, if the operation 
actually fails with an exception, we are still incrementing that counter.
- In batchMutate(), can we move the initialization section out of the 
while (isDone()) loop?
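
For the first suggestion, a minimal sketch of what such a helper could look like (a hypothetical method on the batch operation class, not committed code):

{code:java}
// Inside the batch operation class that owns retCodeDetails:
boolean isOperationPending(int index) {
  // true while the operation at 'index' has not been processed yet
  return retCodeDetails[index].getOperationStatusCode() == OperationStatusCode.NOT_RUN;
}

// The call site quoted above would then read:
// if (!batchOp.isOperationPending(lastIndexExclusive)) { ... }
{code}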

> doMiniBatchMutate() is big, split it into smaller methods
> -
>
> Key: HBASE-18961
> URL: https://issues.apache.org/jira/browse/HBASE-18961
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0-alpha-3
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0-alpha-4
>
>
> Split doMiniBatchMutate() and improve readability.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201180#comment-16201180
 ] 

Ted Yu commented on HBASE-18624:


bq. incrementing a counter for each time a NotServingRegionException is thrown

Yes.

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18411) Dividing FiterList into two separate sub-classes: FilterListWithOR , FilterListWithAND

2017-10-11 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201175#comment-16201175
 ] 

Peter Somogyi commented on HBASE-18411:
---

Nothing from my side, I think it can be merged.

> Dividing FiterList  into two separate sub-classes:  FilterListWithOR , 
> FilterListWithAND
> 
>
> Key: HBASE-18411
> URL: https://issues.apache.org/jira/browse/HBASE-18411
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Attachments: HBASE-18411-HBASE-18410.v3.patch, 
> HBASE-18411-HBASE-18410.v3.patch, HBASE-18411.v1.patch, HBASE-18411.v1.patch, 
> HBASE-18411.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-10367) RegionServer graceful stop / decommissioning

2017-10-11 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He reassigned HBASE-10367:


Assignee: Jerry He

> RegionServer graceful stop / decommissioning
> 
>
> Key: HBASE-10367
> URL: https://issues.apache.org/jira/browse/HBASE-10367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Jerry He
>
> Right now, we have a weird way of node decommissioning / graceful stop, which 
> is a graceful_stop.sh bash script, and a region_mover ruby script, and some 
> draining server support which you have to manually write to a znode 
> (really!). Also draining servers is only partially supported in LB operations 
> (LB does take that into account for roundRobin assignment, but not for normal 
> balance) 
> See 
> http://hbase.apache.org/book/node.management.html and HBASE-3071
> I think we should support graceful stop as a first class citizen. Thinking 
> about it, it seems that the difference between regionserver stop and graceful 
> stop is that regionserver stop will close the regions, but the master will 
> only assign them after the znode is deleted. 
> In the new master design (or even before), if we allow RS to be able to close 
> regions on its own (without master initiating it), then graceful stop becomes 
> regular stop. The RS already closes the regions cleanly, and will reject new 
> region assignments, so that we don't need much of the balancer or draining 
> server trickery. 
> This ties into the new master/AM redesign (HBASE-5487), but still deserves 
> its own jira. Let's use this to brainstorm on the design. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-16010) Put draining function through Admin API

2017-10-11 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He resolved HBASE-16010.
--
Resolution: Fixed

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16010-v3.patch, hbase-16010-v1.patch, 
> hbase-16010-v2.patch
>
>
> Currently, there is no Admin API for draining function. Client has to 
> interact directly with Zookeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put draining function through Admin API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-16010) Put draining function through Admin API

2017-10-11 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He reassigned HBASE-16010:


Assignee: Matt Warhaftig  (was: Jerry He)

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16010-v3.patch, hbase-16010-v1.patch, 
> hbase-16010-v2.patch
>
>
> Currently, there is no Admin API for draining function. Client has to 
> interact directly with Zookeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put draining function through Admin API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16010) Put draining function through Admin API

2017-10-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201166#comment-16201166
 ] 

Jerry He commented on HBASE-16010:
--

I am going to close this ticket and post a patch on HBASE-10367, which will 
modify the API to decommission region servers (mark the drain and move off the 
regions).

> Put draining function through Admin API
> ---
>
> Key: HBASE-16010
> URL: https://issues.apache.org/jira/browse/HBASE-16010
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16010-v3.patch, hbase-16010-v1.patch, 
> hbase-16010-v2.patch
>
>
> Currently, there is no Admin API for draining function. Client has to 
> interact directly with Zookeeper draining node to add and remove draining 
> servers.
> For example, in draining_servers.rb:
> {code}
>   zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
> "draining_servers", nil)
>   parentZnode = zkw.drainingZNode
>   begin
> for server in servers
>   node = ZKUtil.joinZNode(parentZnode, server)
>   ZKUtil.createAndFailSilent(zkw, node)
> end
>   ensure
> zkw.close()
>   end
> {code}
> This is not good in cases like secure clusters with protected Zookeeper nodes.
> Let's put draining function through Admin API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201162#comment-16201162
 ] 

Zach York commented on HBASE-18624:
---

Ajay is out on vacation so I will be picking this up.

[~tedyu] Are you talking about incrementing a counter each time a 
NotServingRegionException is thrown below? Another thing I thought of is that 
we could expose how many blocks were evicted as the return value of this command 
(since that is already exposed via the internal function).

+  /**
+   * {@inheritDoc}
+   */
+  @Override
+  public void clearBlockCache(final TableName tableName) throws IOException {
+checkTableExists(tableName);
+List> pairs =
+  MetaTableAccessor.getTableRegionsAndLocations(connection, tableName);
+for (Pair pair: pairs) {
+  if (pair.getFirst().isOffline()) continue;
+  if (pair.getSecond() == null) continue;
+  try {
+clearBlockCache(pair.getSecond(), pair.getFirst());
+  } catch (NotServingRegionException e) {
//You want to keep track of how many of these there are?

+if (LOG.isDebugEnabled()) {
+  LOG.debug("Trying to clear block cache for " + pair.getFirst() + ": 
" +
+StringUtils.stringifyException(e));
+}
+  }
+}
+  }

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18959) Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201161#comment-16201161
 ] 

Hudson commented on HBASE-18959:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #235 (See 
[https://builds.apache.org/job/HBase-1.3-IT/235/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev cdbda777f64a652139339b2024e0b7b1397f5f37)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is 
> passed null) to branch-1
> 
>
> Key: HBASE-18959
> URL: https://issues.apache.org/jira/browse/HBASE-18959
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.2, 1.5.0, 1.2.7
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.4.0, 1.3.2, 1.5.0
>
> Attachments: HBASE-18959-branch-1.patch
>
>
> Backport HBASE-18874 to branch-1/1.4/1.3/1.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18624) Added support for clearing BlockCache based on table name

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York reassigned HBASE-18624:
-

Assignee: Zach York  (was: Ajay Jadhav)

> Added support for clearing BlockCache based on table name
> -
>
> Key: HBASE-18624
> URL: https://issues.apache.org/jira/browse/HBASE-18624
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Ajay Jadhav
>Assignee: Zach York
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18624.branch-1.001.patch, 
> HBASE-18624.master.001.patch, HBASE-18624.master.002.patch
>
>
> Bulk loading the primary HBase cluster triggers a lot of compactions 
> resulting in archival/ creation
> of multiple HFiles. This process will cause a lot of items to become stale in 
> replica’s BlockCache.
> This patch will help users to clear the block cache for a given table by 
> either using shell or API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18959) Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201134#comment-16201134
 ] 

Hudson commented on HBASE-18959:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #320 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/320/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev cdbda777f64a652139339b2024e0b7b1397f5f37)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is 
> passed null) to branch-1
> 
>
> Key: HBASE-18959
> URL: https://issues.apache.org/jira/browse/HBASE-18959
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.2, 1.5.0, 1.2.7
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.4.0, 1.3.2, 1.5.0
>
> Attachments: HBASE-18959-branch-1.patch
>
>
> Backport HBASE-18874 to branch-1/1.4/1.3/1.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201135#comment-16201135
 ] 

Hudson commented on HBASE-18874:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #320 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/320/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev cdbda777f64a652139339b2024e0b7b1397f5f37)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> HMaster abort message will be skipped if Throwable is passed null
> -
>
> Key: HBASE-18874
> URL: https://issues.apache.org/jira/browse/HBASE-18874
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18874-branch-1.patch, HBASE-18874.patch
>
>
> In the HMaster class, we log the abort message only when the Throwable is 
> not null:
> {noformat}
> if (t != null) LOG.fatal(msg, t);
> {noformat}
> We will miss the abort message when the Throwable is null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16338) update jackson to 2.y

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201140#comment-16201140
 ] 

Hadoop QA commented on HBASE-16338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
7s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-shaded hbase-shaded/hbase-shaded-mapreduce . 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hbase-mapreduce in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} hbase-it in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} rubocop {color} | {color:green}  0m  
5s{color} | {color:green} There were no new rubocop issues. {color} |
| {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green}  0m  
1s{color} | {color:green} There were no new ruby-lint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 5s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
12s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
43m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-shaded hbase-shaded/hbase-shaded-mapreduce . 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} 

[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201133#comment-16201133
 ] 

Hudson commented on HBASE-18874:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #306 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/306/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev cdbda777f64a652139339b2024e0b7b1397f5f37)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> HMaster abort message will be skipped if Throwable is passed null
> -
>
> Key: HBASE-18874
> URL: https://issues.apache.org/jira/browse/HBASE-18874
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18874-branch-1.patch, HBASE-18874.patch
>
>
> In the HMaster class, the abort message is logged only when the Throwable is 
> not null:
> {noformat}
> if (t != null) LOG.fatal(msg, t);
> {noformat}
> When the Throwable is null, the abort message is skipped entirely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18959) Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1

2017-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201132#comment-16201132
 ] 

Hudson commented on HBASE-18959:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #306 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/306/])
HBASE-18959 Backport HBASE-18874 (HMaster abort message will be skipped (tedyu: 
rev cdbda777f64a652139339b2024e0b7b1397f5f37)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is 
> passed null) to branch-1
> 
>
> Key: HBASE-18959
> URL: https://issues.apache.org/jira/browse/HBASE-18959
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.2, 1.5.0, 1.2.7
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.4.0, 1.3.2, 1.5.0
>
> Attachments: HBASE-18959-branch-1.patch
>
>
> Backport HBASE-18874 to branch-1/1.4/1.3/1.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18273) hbase_rotate_log in hbase-daemon.sh script not working for some JDK

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201131#comment-16201131
 ] 

Hadoop QA commented on HBASE-18273:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 7s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
40m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18273 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874799/HBASE-18273.2.patch |
| Optional Tests |  asflicense  shadedjars  shellcheck  shelldocs  |
| uname | Linux 1e4059c3a53d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 023d4f1 |
| shellcheck | v0.4.6 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9052/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> hbase_rotate_log in hbase-daemon.sh script not working for some JDK
> ---
>
> Key: HBASE-18273
> URL: https://issues.apache.org/jira/browse/HBASE-18273
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Fangyuan Deng
>Assignee: Fangyuan Deng
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-18273.0.patch, HBASE-18273.1.patch, 
> HBASE-18273.2.patch
>
>
> When restarting an HBase process, hbase_rotate_log $HBASE_LOGGC will rotate 
> the GC logs.
> The code looks like this:
>  if [ -f "$log" ]; then # rotate logs
> while [ $num -gt 1 ]; do
> prev=`expr $num - 1`
> [ -f "$log.$prev" ] && mv -f "$log.$prev" "$log.$num"
> num=$prev
> done
> But some JDK versions add a suffix (.0) to the GC file, producing 
> hbase-xxx.gc.0 rather than hbase-xxx.gc.
> So I add a check before rotating:
>  if [ ! -f "$log" ]; then #for some jdk, gc log has a postfix 0
>   if [ -f "$log.0" ]; then
> mv -f "$log.0" "$log";
>   fi
> fi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18840) Add functionality to refresh meta table at master startup

2017-10-11 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201128#comment-16201128
 ] 

Zach York commented on HBASE-18840:
---

[~stack] What do you think of the proposed approach? 

The use case here would be starting a cluster where there is no meta table, but 
you have the store files (this is the case when you are starting a Read Replica 
Cluster). Without this, no existing regions will be assigned, but they will 
still show up in an hbase shell 'list' command (since that command actually does 
something similar to my patch).

> Add functionality to refresh meta table at master startup
> -
>
> Key: HBASE-18840
> URL: https://issues.apache.org/jira/browse/HBASE-18840
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18840.HBASE-18477.001.patch, 
> HBASE-18840.HBASE-18477.002.patch, HBASE-18840.HBASE-18477.003.patch
>
>
> If an HBase cluster’s hbase:meta table is deleted or a cluster is started with 
> a new meta table, HBase needs the functionality to synchronize its metadata 
> from Storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18860) Determine where to move ReadReplicaClustersTableNameUtils

2017-10-11 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201106#comment-16201106
 ] 

Zach York commented on HBASE-18860:
---

[~stack] The most intrusive patch in terms of internals is HBASE-18775. The 
other stuff can be in a separate module, but I don't know if there will be 
enough code to justify that.

I'm also struggling with naming. I was going to create a package under o.a.h.h 
in hbase-server, but couldn't think of the proper naming structure 
(o.a.h.h.readreplicaclusters seems way too verbose). Do you have any ideas? I 
think the naming would be easily carried through for either the module or 
package.

> Determine where to move ReadReplicaClustersTableNameUtils
> -
>
> Key: HBASE-18860
> URL: https://issues.apache.org/jira/browse/HBASE-18860
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
>Priority: Minor
>
> In HBASE-18773, Stack brought up the (good) point that 
> ReadReplicaClustersTableNameUtils does not really belong in the hbase-common 
> package. This follow-up JIRA is to determine the correct place for 
> Read-Replica code and to move this utils class there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18873) Hide protobufs in GlobalQuotaSettings

2017-10-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18873:
---
Status: Patch Available  (was: Open)

> Hide protobufs in GlobalQuotaSettings
> -
>
> Key: HBASE-18873
> URL: https://issues.apache.org/jira/browse/HBASE-18873
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18873.001.branch-2.patch
>
>
> HBASE-18807 cleaned up direct protobuf use in the Coprocessor APIs for 
> quota-related functions. However, one new POJO introduced to hide these 
> protocol buffers still exposes PBs via some methods.
> We should try to hide those as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18840) Add functionality to refresh meta table at master startup

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18840:
--
Status: In Progress  (was: Patch Available)

> Add functionality to refresh meta table at master startup
> -
>
> Key: HBASE-18840
> URL: https://issues.apache.org/jira/browse/HBASE-18840
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18840.HBASE-18477.001.patch, 
> HBASE-18840.HBASE-18477.002.patch, HBASE-18840.HBASE-18477.003.patch
>
>
> If an HBase cluster’s hbase:meta table is deleted or a cluster is started with 
> a new meta table, HBase needs the functionality to synchronize its metadata 
> from Storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18840) Add functionality to refresh meta table at master startup

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18840:
--
Status: Patch Available  (was: In Progress)

> Add functionality to refresh meta table at master startup
> -
>
> Key: HBASE-18840
> URL: https://issues.apache.org/jira/browse/HBASE-18840
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18840.HBASE-18477.001.patch, 
> HBASE-18840.HBASE-18477.002.patch, HBASE-18840.HBASE-18477.003.patch
>
>
> If an HBase cluster’s hbase:meta table is deleted or a cluster is started with 
> a new meta table, HBase needs the functionality to synchronize its metadata 
> from Storage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18873) Hide protobufs in GlobalQuotaSettings

2017-10-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18873:
---
Attachment: HBASE-18873.001.branch-2.patch

.001 This is an attempt to do what I outlined earlier. However, it's a very 
marginal gain.

At the end of the day, we would have to introduce a significant code change 
that permeates all of the quota-persistence server-side code (MasterQuotaManager, 
QuotaUtil, and QuotaTableUtil) to encapsulate all of the PB logic. This at 
least avoids exposing the protobufs to users who might get a GlobalQuotaSettings 
object; it only lets them merge other QuotaSettings implementations into it 
(which may be ok?).

> Hide protobufs in GlobalQuotaSettings
> -
>
> Key: HBASE-18873
> URL: https://issues.apache.org/jira/browse/HBASE-18873
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18873.001.branch-2.patch
>
>
> HBASE-18807 cleaned up direct protobuf use in the Coprocessor APIs for 
> quota-related functions. However, one new POJO introduced to hide these 
> protocol buffers still exposes PBs via some methods.
> We should try to hide those as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18959) Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1

2017-10-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18959:
---
   Resolution: Fixed
Fix Version/s: 1.3.2
   1.4.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Pankaj

> Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is 
> passed null) to branch-1
> 
>
> Key: HBASE-18959
> URL: https://issues.apache.org/jira/browse/HBASE-18959
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.2, 1.5.0, 1.2.7
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.4.0, 1.3.2, 1.5.0
>
> Attachments: HBASE-18959-branch-1.patch
>
>
> Backport HBASE-18874 to branch-1/1.4/1.3/1.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18959) Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1

2017-10-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201087#comment-16201087
 ] 

Ted Yu commented on HBASE-18959:


Build #94 passed for https://builds.apache.org/job/HBase-1.5

> Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is 
> passed null) to branch-1
> 
>
> Key: HBASE-18959
> URL: https://issues.apache.org/jira/browse/HBASE-18959
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.2, 1.5.0, 1.2.7
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-18959-branch-1.patch
>
>
> Backport HBASE-18874 to branch-1/1.4/1.3/1.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18775:
--
Status: Patch Available  (was: In Progress)

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18775:
--
Status: In Progress  (was: Patch Available)

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-10-11 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-18775:
--
Attachment: HBASE-18775.HBASE-18477.005.patch

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch, HBASE-18775.HBASE-18477.005.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18987) Raise value of HConstants#MAX_ROW_LENGTH

2017-10-11 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201067#comment-16201067
 ] 

Esteban Gutierrez commented on HBASE-18987:
---

[~mdrob],  {{nameStr}} is being used.
{code}
String nameStr = Bytes.toString(name);
assertTrue(nameStr.length() <= HConstants.MAX_ROW_LENGTH);
{code}


> Raise value of HConstants#MAX_ROW_LENGTH
> 
>
> Key: HBASE-18987
> URL: https://issues.apache.org/jira/browse/HBASE-18987
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Minor
> Attachments: HBASE-18987.master.001.patch, 
> HBASE-18987.master.002.patch
>
>
> Short.MAX_VALUE hasn't been a problem for a long time, but one of our 
> customers ran into an edge case when the midKey used for the split point was 
> very close to Short.MAX_VALUE. When the split is submitted, we attempt to 
> create the two new daughter regions and we name those regions via 
> {{HRegionInfo.createRegionName()}} in order to be added to META. 
> Unfortunately, since {{HRegionInfo.createRegionName()}} uses midKey as the 
> startKey, the {{Put}} will fail because the row key length now fails checkRow, 
> causing the split to fail.
> I tried a couple of alternatives to address this problem, e.g. truncating the 
> startKey, but the number of code changes isn't justified for this edge 
> condition. Since we already use {{Integer.MAX_VALUE - 1}} for 
> {{HConstants#MAXIMUM_VALUE_LENGTH}}, it should be ok to use the same limit for 
> the maximum row key. 
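
A hypothetical, self-contained illustration of why the old Short.MAX_VALUE limit bites during splits; the constants and helper below are made up for the example and are not the patch itself:

{code:java}
public class RowLengthCheckSketch {
  static final int OLD_MAX_ROW_LENGTH = Short.MAX_VALUE;       // 32,767
  static final int NEW_MAX_ROW_LENGTH = Integer.MAX_VALUE - 1; // proposed limit

  // simplified stand-in for the server-side checkRow validation
  static void checkRow(byte[] row, int maxRowLength) {
    if (row.length > maxRowLength) {
      throw new IllegalArgumentException(
          "Row length " + row.length + " is greater than " + maxRowLength);
    }
  }

  public static void main(String[] args) {
    // a split key only slightly shorter than Short.MAX_VALUE...
    byte[] midKey = new byte[Short.MAX_VALUE - 10];
    // ...grows past the old limit once table name, delimiters and ids are appended
    byte[] daughterRegionName = new byte[midKey.length + 64];
    checkRow(daughterRegionName, NEW_MAX_ROW_LENGTH);   // passes
    // checkRow(daughterRegionName, OLD_MAX_ROW_LENGTH) would throw here
  }
}
{code}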



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18988) Add release managers to reference guide

2017-10-11 Thread Peter Somogyi (JIRA)
Peter Somogyi created HBASE-18988:
-

 Summary: Add release managers to reference guide
 Key: HBASE-18988
 URL: https://issues.apache.org/jira/browse/HBASE-18988
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Peter Somogyi
Priority: Trivial


Reference guide lists release managers only up to version 1.3. We should have a 
complete list there.
http://hbase.apache.org/book.html#_release_managers



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17703) TestThriftServerCmdLine is flaky in master branch

2017-10-11 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201043#comment-16201043
 ] 

Peter Somogyi commented on HBASE-17703:
---

I ran into the same error for a branch-1.3 backport with HBASE-18967.
https://builds.apache.org/job/PreCommit-HBASE-Build/9050/artifact/patchprocess/patch-unit-hbase-thrift.txt

Should we backport this fix to previous branches?

CC: [~chia7712], [~mantonov]

{code}
---
 T E S T S
---
Running org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine
Running org.apache.hadoop.hbase.thrift.TestThriftServer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.878 sec - in 
org.apache.hadoop.hbase.thrift.TestThriftServer
Running org.apache.hadoop.hbase.thrift.TestCallQueue
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.558 sec - in 
org.apache.hadoop.hbase.thrift.TestCallQueue
Running org.apache.hadoop.hbase.thrift.TestThriftHttpServer
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 21.338 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.thrift.TestThriftHttpServer
testRunThriftServer(org.apache.hadoop.hbase.thrift.TestThriftHttpServer)  Time 
elapsed: 10.107 sec  <<< ERROR!
java.lang.Exception: java.net.BindException: Port in use: 0.0.0.0:9095
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer.stopHttpServerThread(TestThriftHttpServer.java:200)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer.runThriftServer(TestThriftHttpServer.java:151)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer.testRunThriftServer(TestThriftHttpServer.java:126)
Caused by: java.net.BindException: Port in use: 0.0.0.0:9095
at 
org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1017)
at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:953)
at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:91)
at 
org.apache.hadoop.hbase.thrift.ThriftServer.doMain(ThriftServer.java:104)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer$1.run(TestThriftHttpServer.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:463)
at sun.nio.ch.Net.bind(Net.java:455)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1012)
at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:953)
at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:91)
at 
org.apache.hadoop.hbase.thrift.ThriftServer.doMain(ThriftServer.java:104)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer$1.run(TestThriftHttpServer.java:94)
at java.lang.Thread.run(Thread.java:748)

testRunThriftServerWithHeaderBufferLength(org.apache.hadoop.hbase.thrift.TestThriftHttpServer)
  Time elapsed: 10.053 sec  <<< ERROR!
java.lang.Exception: java.net.BindException: Port in use: 0.0.0.0:9095
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer.stopHttpServerThread(TestThriftHttpServer.java:200)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer.runThriftServer(TestThriftHttpServer.java:151)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer.testRunThriftServerWithHeaderBufferLength(TestThriftHttpServer.java:113)
Caused by: java.net.BindException: Port in use: 0.0.0.0:9095
at 
org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1017)
at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:953)
at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:91)
at 
org.apache.hadoop.hbase.thrift.ThriftServer.doMain(ThriftServer.java:104)
at 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer$1.run(TestThriftHttpServer.java:94)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:463)
at sun.nio.ch.Net.bind(Net.java:455)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1012)
at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:953)
at 

[jira] [Commented] (HBASE-18273) hbase_rotate_log in hbase-daemon.sh script not working for some JDK

2017-10-11 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201035#comment-16201035
 ] 

Peter Somogyi commented on HBASE-18273:
---

+1
I tested the code part and it looks good. Patch is still applicable to master 
branch.

> hbase_rotate_log in hbase-daemon.sh script not working for some JDK
> ---
>
> Key: HBASE-18273
> URL: https://issues.apache.org/jira/browse/HBASE-18273
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Fangyuan Deng
>Assignee: Fangyuan Deng
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-18273.0.patch, HBASE-18273.1.patch, 
> HBASE-18273.2.patch
>
>
> When restarting an HBase process, hbase_rotate_log $HBASE_LOGGC will rotate 
> the GC logs.
> The code looks like this:
>  if [ -f "$log" ]; then # rotate logs
> while [ $num -gt 1 ]; do
> prev=`expr $num - 1`
> [ -f "$log.$prev" ] && mv -f "$log.$prev" "$log.$num"
> num=$prev
> done
> But some JDK versions add a suffix (.0) to the GC file, producing 
> hbase-xxx.gc.0 rather than hbase-xxx.gc.
> So I add a check before rotating:
>  if [ ! -f "$log" ]; then #for some jdk, gc log has a postfix 0
>   if [ -f "$log.0" ]; then
> mv -f "$log.0" "$log";
>   fi
> fi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18108) Procedure WALs are archived but not cleaned; fix

2017-10-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201027#comment-16201027
 ] 

stack commented on HBASE-18108:
---

I see it now. My fault. Thanks [~psomogyi]

> Procedure WALs are archived but not cleaned; fix
> 
>
> Key: HBASE-18108
> URL: https://issues.apache.org/jira/browse/HBASE-18108
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Peter Somogyi
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18108.master.001.patch, 
> HBASE-18108.master.002.patch, HBASE-18108.master.003.patch, 
> HBASE-18108.master.004.patch, HBASE-18108.master.004.patch, 
> HBASE-18108.master.005.patch
>
>
> The Procedure WAL files used to be deleted when done. HBASE-14614 keeps them 
> around in case of issues, but what is missing is a GC for no-longer-needed WAL 
> files. This one is pretty important.
> From WALProcedureStore Cleaner TODO in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.r2pc835nb7vi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18108) Procedure WALs are archived but not cleaned; fix

2017-10-11 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201004#comment-16201004
 ] 

Peter Somogyi commented on HBASE-18108:
---

Thanks for the reviews everyone!
I put the new config value to the release notes. Do I need to modify it?

> Procedure WALs are archived but not cleaned; fix
> 
>
> Key: HBASE-18108
> URL: https://issues.apache.org/jira/browse/HBASE-18108
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Peter Somogyi
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18108.master.001.patch, 
> HBASE-18108.master.002.patch, HBASE-18108.master.003.patch, 
> HBASE-18108.master.004.patch, HBASE-18108.master.004.patch, 
> HBASE-18108.master.005.patch
>
>
> The Procedure WAL files used to be deleted when done. HBASE-14614 keeps them 
> around in case of issues, but what is missing is a GC for no-longer-needed WAL 
> files. This one is pretty important.
> From WALProcedureStore Cleaner TODO in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.r2pc835nb7vi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18108) Procedure WALs are archived but not cleaned; fix

2017-10-11 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18108:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed. Thanks [~psomogyi] I was worried this was not going to get done. Nice 
one.

Usually we mention config names in release note. FYI.

> Procedure WALs are archived but not cleaned; fix
> 
>
> Key: HBASE-18108
> URL: https://issues.apache.org/jira/browse/HBASE-18108
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Peter Somogyi
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18108.master.001.patch, 
> HBASE-18108.master.002.patch, HBASE-18108.master.003.patch, 
> HBASE-18108.master.004.patch, HBASE-18108.master.004.patch, 
> HBASE-18108.master.005.patch
>
>
> The Procedure WAL files used to be deleted when done. HBASE-14614 keeps them 
> around in case of issues, but what is missing is a GC for no-longer-needed WAL 
> files. This one is pretty important.
> From WALProcedureStore Cleaner TODO in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.r2pc835nb7vi



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18967) Backport HBASE-17181 to branch-1.3

2017-10-11 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200978#comment-16200978
 ] 

Peter Somogyi commented on HBASE-18967:
---

[~chia7712], these test failures are strange. On branch-2+ there is a fix which 
I think solves this failure: HBASE-17703
What do you think?

> Backport HBASE-17181 to branch-1.3
> --
>
> Key: HBASE-18967
> URL: https://issues.apache.org/jira/browse/HBASE-18967
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Peter Somogyi
> Fix For: 1.3.2
>
> Attachments: HBASE-18967.branch-1.3.001.patch, 
> HBASE-18967.branch-1.3.001.patch, HBASE-18967.branch-1.3.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18967) Backport HBASE-17181 to branch-1.3

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200974#comment-16200974
 ] 

Hadoop QA commented on HBASE-18967:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 56s{color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.thrift.TestThriftHttpServer |
| Timed out junit tests | 
org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine |

[jira] [Commented] (HBASE-18667) Disable error-prone for hbase-protocol-shaded

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200967#comment-16200967
 ] 

Hadoop QA commented on HBASE-18667:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  9m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
41m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18667 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891555/HBASE-18667.patch |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  xml  compile  
|
| uname | Linux 737f41685084 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c4ced0b |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9049/testReport/ |
| modules | C: hbase-protocol-shaded hbase-protocol U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9049/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Disable error-prone for hbase-protocol-shaded
> -
>
>   

[jira] [Commented] (HBASE-18775) Add a Global Read-Only property to turn off all writes for the cluster

2017-10-11 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200957#comment-16200957
 ] 

Zach York commented on HBASE-18775:
---

Sorry for the delay, I was out for a bit. I'll be resuming work on this shortly.

[~ashish singhi] I didn't consider snapshots because they aren't as common for 
our use case, but it is an interesting point. Should snapshots be disabled? I 
looked into the code and it looks pretty easy to do that. Do you think 
conceptually it makes sense to disable them on read replica clusters?

[~stack] I did some investigation and I agree that we can get rid of the 
'global' keyword and just use hbase.readonly. I'll implement that and your 
other suggestions and put up another diff.
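
For illustration, a minimal sketch of how such a flag could be consulted, assuming the property ends up named hbase.readonly as discussed above; the key name and guard method are assumptions, not the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ReadOnlyGuardSketch {
  static final String READONLY_KEY = "hbase.readonly";

  // reject any mutation when the cluster-wide read-only flag is set
  static void checkWritable(Configuration conf) {
    if (conf.getBoolean(READONLY_KEY, false)) {
      throw new UnsupportedOperationException(
          "Cluster is in read-only mode; rejecting data/metadata mutation");
    }
  }
}
{code}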

> Add a Global Read-Only property to turn off all writes for the cluster
> --
>
> Key: HBASE-18775
> URL: https://issues.apache.org/jira/browse/HBASE-18775
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, regionserver
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-18775.HBASE-18477.001.patch, 
> HBASE-18775.HBASE-18477.002.patch, HBASE-18775.HBASE-18477.003.patch, 
> HBASE-18775.HBASE-18477.004.patch
>
>
> As part of HBASE-18477, we need a way to turn off all modification for a 
> cluster. This patch extends the read only mode used by replication to disable 
> all data and metadata operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17130) Add support to specify an arbitrary number of reducers when writing HFiles for bulk load

2017-10-11 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200950#comment-16200950
 ] 

Biju Nair commented on HBASE-17130:
---

Hi [~esteban], I would like to know whether you have a patch for this ticket. This 
change would be of great help for our bulk load process.

> Add support to specify an arbitrary number of reducers when writing HFiles 
> for bulk load
> 
>
> Key: HBASE-17130
> URL: https://issues.apache.org/jira/browse/HBASE-17130
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>
> From the discussion in HBASE-16894, there is a set of use cases where 
> writing to multiple regions in a single reducer can be helpful to reduce the 
> overhead of MR jobs when a large number of regions exist in an HBase cluster 
> and some regions present a data skew, e.g. 100s or 1000s of regions with 
> a very small number of rows vs. regions with 10s of millions of rows as part 
> of the same job. And merging regions is not an option for the use case.
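
As a rough illustration of the idea (not the actual patch), many regions can be folded onto a smaller, fixed number of reducers in contiguous blocks so that overall key ordering is preserved while small regions share a reducer; the method name and signature below are assumptions:

{code:java}
public class RegionToReducerFoldSketch {
  // map region index 0..numRegions-1 onto reducer 0..numReducers-1 in contiguous blocks
  static int reducerFor(int regionIndex, int numRegions, int numReducers) {
    return (int) ((long) regionIndex * numReducers / numRegions);
  }

  public static void main(String[] args) {
    int numRegions = 1000;
    int numReducers = 32;
    System.out.println("region 0   -> reducer " + reducerFor(0, numRegions, numReducers));
    System.out.println("region 517 -> reducer " + reducerFor(517, numRegions, numReducers));
    System.out.println("region 999 -> reducer " + reducerFor(999, numRegions, numReducers));
  }
}
{code}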



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-18975:
--
Status: Patch Available  (was: Open)

> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  
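
One common way to make such reflective access tolerant of a refactor like this is to try several candidate field names instead of a single hard-coded one. The sketch below is a generic illustration under that assumption, not the actual HBASE-18975 patch, and the candidate names passed by a caller would be assumptions as well:

{code:java}
import java.lang.reflect.Field;

public class ReflectiveFieldLookupSketch {
  private int options; // demo field used by main() below

  // return the first declared field that exists among the candidate names
  static Field findFirstField(Class<?> clazz, String... candidateNames)
      throws NoSuchFieldException {
    for (String name : candidateNames) {
      try {
        Field f = clazz.getDeclaredField(name);
        f.setAccessible(true);
        return f;
      } catch (NoSuchFieldException e) {
        // keep trying the remaining candidate names
      }
    }
    throw new NoSuchFieldException(String.join(", ", candidateNames));
  }

  public static void main(String[] args) throws Exception {
    // a real caller might try the old field name first, then a newer one
    Field f = findFirstField(ReflectiveFieldLookupSketch.class, "inputOptions", "options");
    System.out.println("Found field: " + f.getName());
  }
}
{code}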



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-18975:
--
Attachment: (was: HBASE-18975-v3.patch)

> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-18975:
--
Attachment: HBASE-18975-v3.patch

> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18975) Fix backup / restore hadoop3 incompatibility

2017-10-11 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-18975:
--
Status: Open  (was: Patch Available)

> Fix backup / restore hadoop3 incompatibility
> 
>
> Key: HBASE-18975
> URL: https://issues.apache.org/jira/browse/HBASE-18975
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Attachments: HBASE-18975-v1.patch, HBASE-18975-v2.patch, 
> HBASE-18975-v3.patch, testIncrementalBackup-output.tar.gz
>
>
> Due to changes in hadoop 3, reflection in BackupDistCp is broken
> {code}
> java.lang.NoSuchFieldException: inputOptions
>   at java.lang.Class.getDeclaredField(Class.java:2070)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:168)
> {code}  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200937#comment-16200937
 ] 

Hadoop QA commented on HBASE-18602:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HBASE-18602 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881922/HBASE-18602-master-v1.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9051/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch
>
>
> While walking through the rsgroup code, I found that the variable 
> misplacedRegions never has any elements added to it. This makes the 
> unassign-region code non-functional. And according to my test, it is actually 
> unnecessary to do that.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
>     Map<ServerName, List<HRegionInfo>> existingAssignments)
> throws HBaseIOException{
>   Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>   List<HRegionInfo> misplacedRegions = new LinkedList<>();
>   correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>   for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>       existingAssignments.entrySet()){
>     ServerName sName = assignments.getKey();
>     correctAssignments.put(sName, new LinkedList<>());
>     List<HRegionInfo> regions = assignments.getValue();
>     for (HRegionInfo region : regions) {
>       RSGroupInfo info = null;
>       try {
>         info = rsGroupInfoManager.getRSGroup(
>             rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>       } catch (IOException exp) {
>         LOG.debug("RSGroup information null for region of table " + region.getTable(),
>             exp);
>       }
>       if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>         correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>       } else {
>         correctAssignments.get(sName).add(region);
>       }
>     }
>   }
>   //TODO bulk unassign?
>   //unassign misplaced regions, so that they are assigned to correct groups.
>   for(HRegionInfo info: misplacedRegions) {
>     try {
>       this.masterServices.getAssignmentManager().unassign(info);
>     } catch (IOException e) {
>       throw new HBaseIOException(e);
>     }
>   }
>   return correctAssignments;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18987) Raise value of HConstants#MAX_ROW_LENGTH

2017-10-11 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200932#comment-16200932
 ] 

Mike Drob commented on HBASE-18987:
---

{code}
+String nameStr = Bytes.toString(name);
{code}
This one is unused too, I should have caught it the first time.

+1 assuming tests pass otherwise

> Raise value of HConstants#MAX_ROW_LENGTH
> 
>
> Key: HBASE-18987
> URL: https://issues.apache.org/jira/browse/HBASE-18987
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Minor
> Attachments: HBASE-18987.master.001.patch, 
> HBASE-18987.master.002.patch
>
>
> Short.MAX_VALUE hasn't been a problem for a long time, but one of our 
> customers ran into an edge case where the midKey used for the split point was 
> very close to Short.MAX_VALUE in length. When the split is submitted, we 
> attempt to create the two new daughter regions and name them via 
> {{HRegionInfo.createRegionName()}} so they can be added to META. 
> Unfortunately, since {{HRegionInfo.createRegionName()}} uses the midKey as 
> the startKey, the {{Put}} fails because the row key length no longer passes 
> checkRow, which in turn causes the split to fail.
> I tried a couple of alternatives to address this problem, e.g. truncating the 
> startKey, but the number of code changes isn't justified for this edge 
> condition. Since we already use {{Integer.MAX_VALUE - 1}} for 
> {{HConstants#MAXIMUM_VALUE_LENGTH}}, it should be OK to use the same limit 
> for the maximum row key.
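
To make the failure mode concrete, here is a small self-contained sketch. It is illustrative only: the constants and the checkRow stand-in below are not the actual HBase code, they just show why a META row built from a midKey whose length is near Short.MAX_VALUE fails under the old limit but would pass under the proposed one.

{code:java}
public class RowLengthSketch {
  static final int OLD_MAX_ROW_LENGTH = Short.MAX_VALUE;       // 32767, the old limit
  static final int NEW_MAX_ROW_LENGTH = Integer.MAX_VALUE - 1; // proposed limit

  // Hypothetical stand-in for the row validation performed when a Put is built.
  static void checkRow(byte[] row, int maxRowLength) {
    if (row.length > maxRowLength) {
      throw new IllegalArgumentException(
          "Row length " + row.length + " is greater than " + maxRowLength);
    }
  }

  public static void main(String[] args) {
    // A region name row built from a midKey already near the old limit; the name
    // (midKey plus table name, timestamp and encoded id) is longer than the midKey.
    byte[] regionNameRow = new byte[Short.MAX_VALUE + 50];
    try {
      checkRow(regionNameRow, OLD_MAX_ROW_LENGTH); // rejected, so the split fails
    } catch (IllegalArgumentException e) {
      System.out.println("Old limit rejects the META row: " + e.getMessage());
    }
    checkRow(regionNameRow, NEW_MAX_ROW_LENGTH);   // accepted under the proposed limit
    System.out.println("New limit accepts the META row.");
  }
}
{code}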



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18966) In-memory compaction/merge should update its TimeRange

2017-10-11 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200900#comment-16200900
 ] 

Chia-Ping Tsai commented on HBASE-18966:


bq. Can this be handled depending on the type of Segment ?
Good idea! Will address it in the next patch. Thanks, Ted!

> In-memory compaction/merge should update its TimeRange
> --
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch
>
>
> In-memory compaction/merge does a great job of optimizing the memory layout 
> for cells, but it doesn't update the segment's {{TimeRange}}. This doesn't 
> cause any bugs currently because the {{TimeRange}} is only used for the 
> store-level timestamp filter, and the default {{TimeRange}} of an 
> {{ImmutableSegment}} created by in-memory compaction/merge covers the 
> maximum ts range.
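
As a rough illustration of what updating the {{TimeRange}} would amount to, here is a self-contained sketch. The Cell and TimeRange types below are simplified stand-ins, not the HBase classes: the idea is simply to track the min/max timestamp while cells are copied into the new segment, so a store-level timestamp filter could skip the whole segment later.

{code:java}
import java.util.List;

public class TimeRangeSketch {
  // Simplified stand-in for a cell; only the timestamp matters here.
  record Cell(long timestamp) {}

  // Simplified stand-in for a time range covering [min, max].
  record TimeRange(long min, long max) {
    boolean overlaps(long lo, long hi) {
      return lo <= max && hi >= min;
    }
  }

  // Track the effective time range of the cells placed into the new segment.
  static TimeRange trackTimeRange(List<Cell> cellsInNewSegment) {
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    for (Cell cell : cellsInNewSegment) {
      min = Math.min(min, cell.timestamp());
      max = Math.max(max, cell.timestamp());
    }
    return new TimeRange(min, max);
  }

  public static void main(String[] args) {
    TimeRange tr = trackTimeRange(List.of(new Cell(100L), new Cell(250L), new Cell(180L)));
    // A scan restricted to timestamps [300, 400] can skip this segment entirely,
    // because the segment's range is [100, 250].
    System.out.println(tr.overlaps(300L, 400L)); // false
  }
}
{code}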



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18987) Raise value of HConstants#MAX_ROW_LENGTH

2017-10-11 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-18987:
--
Attachment: HBASE-18987.master.002.patch

Yeah, the md5HashInHex stuff is a remnant of the previous attempt. Also, 
addressed the test case suggestion. Thanks for the review, [~mdrob].

> Raise value of HConstants#MAX_ROW_LENGTH
> 
>
> Key: HBASE-18987
> URL: https://issues.apache.org/jira/browse/HBASE-18987
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Minor
> Attachments: HBASE-18987.master.001.patch, 
> HBASE-18987.master.002.patch
>
>
> Short.MAX_VALUE hasn't been a problem for a long time, but one of our 
> customers ran into an edge case where the midKey used for the split point was 
> very close to Short.MAX_VALUE in length. When the split is submitted, we 
> attempt to create the two new daughter regions and name them via 
> {{HRegionInfo.createRegionName()}} so they can be added to META. 
> Unfortunately, since {{HRegionInfo.createRegionName()}} uses the midKey as 
> the startKey, the {{Put}} fails because the row key length no longer passes 
> checkRow, which in turn causes the split to fail.
> I tried a couple of alternatives to address this problem, e.g. truncating the 
> startKey, but the number of code changes isn't justified for this edge 
> condition. Since we already use {{Integer.MAX_VALUE - 1}} for 
> {{HConstants#MAXIMUM_VALUE_LENGTH}}, it should be OK to use the same limit 
> for the maximum row key.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18602) rsgroup cleanup unassign code

2017-10-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18602:
---
Status: Patch Available  (was: Open)

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch
>
>
> While walking through the rsgroup code, I found that the variable misplacedRegions 
> never has any elements added to it, so the unassign-region code below is never 
> exercised. And according to my test, that unassign step is actually unnecessary.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
>     Map<ServerName, List<HRegionInfo>> existingAssignments)
>     throws HBaseIOException {
>   Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>   List<HRegionInfo> misplacedRegions = new LinkedList<>();
>   correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>   for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>       existingAssignments.entrySet()) {
>     ServerName sName = assignments.getKey();
>     correctAssignments.put(sName, new LinkedList<>());
>     List<HRegionInfo> regions = assignments.getValue();
>     for (HRegionInfo region : regions) {
>       RSGroupInfo info = null;
>       try {
>         info = rsGroupInfoManager.getRSGroup(
>             rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>       } catch (IOException exp) {
>         LOG.debug("RSGroup information null for region of table " + region.getTable(),
>             exp);
>       }
>       if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>         correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>       } else {
>         correctAssignments.get(sName).add(region);
>       }
>     }
>   }
>
>   // TODO bulk unassign?
>   // Unassign misplaced regions, so that they are assigned to correct groups.
>   // Note: misplacedRegions is never populated, so this loop is dead code.
>   for (HRegionInfo info : misplacedRegions) {
>     try {
>       this.masterServices.getAssignmentManager().unassign(info);
>     } catch (IOException e) {
>       throw new HBaseIOException(e);
>     }
>   }
>   return correctAssignments;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

