[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2020-01-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17006636#comment-17006636
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #229 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/229/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/229//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/229//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/229//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public.
> We need to use the Java API RSGroupAdminClient to manage rsgroups,
> so RSGroupAdminClient should be public.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache9 commented on issue #974: HBASE-23587 The FSYNC_WAL flag does not work on branch-2.x

2020-01-01 Thread GitBox
Apache9 commented on issue #974: HBASE-23587 The FSYNC_WAL flag does not work 
on branch-2.x
URL: https://github.com/apache/hbase/pull/974#issuecomment-570134908
 
 
   Will merge it tonight if there are no further objections, as it is the only 
blocker issue for 2.2.3.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-23624) Add a tool to dump the procedure info in HFile

2020-01-01 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-23624.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Pushed to master and branch-2.

Thanks [~stack] for reviewing.

> Add a tool to dump the procedure info in HFile
> --
>
> Key: HBASE-23624
> URL: https://issues.apache.org/jira/browse/HBASE-23624
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23624) Add a tool to dump the procedure info in HFile

2020-01-01 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23624:
--
Release Note: Use ./hbase 
org.apache.hadoop.hbase.procedure2.store.region.HFileProcedurePrettyPrinter to 
run the tool.
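As a usage sketch (the `<path-to-hfile>` argument is a placeholder and an assumption, not confirmed by the release note; check the tool's own help output for its actual options):

```
# Pretty-print procedure entries from an HFile of the master's procedure store.
# <path-to-hfile> is hypothetical; only the class name comes from the release note.
./hbase org.apache.hadoop.hbase.procedure2.store.region.HFileProcedurePrettyPrinter <path-to-hfile>
```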

> Add a tool to dump the procedure info in HFile
> --
>
> Key: HBASE-23624
> URL: https://issues.apache.org/jira/browse/HBASE-23624
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23624) Add a tool to dump the procedure info in HFile

2020-01-01 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23624:
--
Fix Version/s: 2.3.0
   3.0.0

> Add a tool to dump the procedure info in HFile
> --
>
> Key: HBASE-23624
> URL: https://issues.apache.org/jira/browse/HBASE-23624
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23629) Addition to Supporting projects page

2020-01-01 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17006631#comment-17006631
 ] 

HBase QA commented on HBASE-23629:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-976/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hbase/pull/976 |
| JIRA Issue | HBASE-23629 |
| Optional Tests | dupname asflicense mvnsite xml |
| uname | Linux 633d3d547aa4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-976/out/precommit/personality/provided.sh
 |
| git revision | master / 4cb952ce34 |
| Max. process+thread count | 92 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-976/2/console |
| versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |


This message was automatically generated.



> Addition to Supporting projects page
> 
>
> Key: HBASE-23629
> URL: https://issues.apache.org/jira/browse/HBASE-23629
> Project: HBase
>  Issue Type: Improvement
>Reporter: Manu Manjunath
>Priority: Minor
>
> At [Flipkart|https://flipkart.com/], we are using a light-weight ORM wrapper 
> on top of hbase-client: 
> [hbase-orm|https://github.com/flipkart-incubator/hbase-orm] (open source, 
> Apache 2.0 license)
> It helps Java applications interact with HBase in an object-oriented manner, 
> by encapsulating {{Get}}, {{Put}}, {{Delete}}, {{Append}}, {{Increment}} and 
> other classes in hbase-client.
>   
>  It's published on Maven Central:
>  
> [https://search.maven.org/search?q=g:com.flipkart%20AND%20a:hbase-object-mapper=gav]
> Kindly list this as a supporting project. PR to follow.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #976: HBASE-23629: Add to 'Supporting Projects' in site

2020-01-01 Thread GitBox
Apache-HBase commented on issue #976: HBASE-23629: Add to 'Supporting Projects' 
in site
URL: https://github.com/apache/hbase/pull/976#issuecomment-570134224
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 32s |  master passed  |
   | +1 :green_heart: |  mvnsite  |  18m  9s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  2s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  18m  2s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 21s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  48m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-976/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/976 |
   | JIRA Issue | HBASE-23629 |
   | Optional Tests | dupname asflicense mvnsite xml |
   | uname | Linux 633d3d547aa4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-976/out/precommit/personality/provided.sh
 |
   | git revision | master / 4cb952ce34 |
   | Max. process+thread count | 92 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-976/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23588) Cache index blocks and bloom blocks on write if CacheCompactedBlocksOnWrite is enabled

2020-01-01 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17006629#comment-17006629
 ] 

Viraj Jasani commented on HBASE-23588:
--

Planning to commit this today

> Cache index blocks and bloom blocks on write if CacheCompactedBlocksOnWrite 
> is enabled
> --
>
> Key: HBASE-23588
> URL: https://issues.apache.org/jira/browse/HBASE-23588
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> The existing behaviour, even when cacheOnWrite is enabled, is that we don't 
> cache the index or bloom blocks. Now with HBASE-23066 in place we also write 
> blocks on compaction. So it may be better to also cache the index/bloom blocks 
> if cacheOnWrite is enabled?
> FYI [~javaman_chen]
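A minimal, stdlib-only sketch of the decision being proposed. The names `shouldCacheOnWrite` and `BlockKind` are illustrative stand-ins, not HBase's actual CacheConfig/BlockType API:

```java
public class CacheOnWriteSketch {
  // Illustrative block categories; HBase's real BlockType enum is richer.
  enum BlockKind { DATA, INDEX, BLOOM }

  /**
   * Decide whether a block written during flush or compaction should be cached.
   * The proposal: when cacheOnWrite is enabled, cache index and bloom blocks
   * too, not only data blocks; compaction-time writes are additionally gated
   * by cacheCompactedBlocksOnWrite (HBASE-23066).
   */
  static boolean shouldCacheOnWrite(BlockKind kind, boolean cacheOnWrite,
      boolean cacheCompactedBlocksOnWrite, boolean writingDuringCompaction) {
    if (!cacheOnWrite) {
      return false;
    }
    if (writingDuringCompaction && !cacheCompactedBlocksOnWrite) {
      return false;
    }
    // Proposed change: INDEX and BLOOM blocks are treated like DATA blocks here.
    return kind == BlockKind.DATA || kind == BlockKind.INDEX || kind == BlockKind.BLOOM;
  }

  public static void main(String[] args) {
    // Flush path with cacheOnWrite on: index blocks are now cached too.
    System.out.println(shouldCacheOnWrite(BlockKind.INDEX, true, false, false)); // true
    // Compaction path needs cacheCompactedBlocksOnWrite as well.
    System.out.println(shouldCacheOnWrite(BlockKind.BLOOM, true, false, true)); // false
  }
}
```

The point of the sketch is only the gating order: the cache-on-write switch first, the compaction-specific switch second, then the block type.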



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] virajjasani commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
virajjasani commented on a change in pull request #978: HBASE-22285 A 
normalizer which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362385208
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -158,6 +158,12 @@
   public static final String HBASE_MASTER_NORMALIZER_CLASS =
     "hbase.master.normalizer.class";
 
+  /** Config for min age of a region before being considered for merge in the normalizer */
+  public static final int DEFAULT_MIN_DAYS_BEFORE_MERGE = 3;
+
+  public static final String HBASE_MASTER_DAYS_BEFORE_MERGE =
+    "hbase.master.normalize.daysBeforeMerge";
+
 
 Review comment:
   I think we can move both these constants to MergeNormalizer. If we move the 
default to hbase-default, we don't need the constant `DEFAULT_MIN_DAYS_BEFORE_MERGE`, 
because `hbase.master.normalize.daysBeforeMerge` will always have a value, either 
from hbase-default or hbase-site.
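The point about the default can be sketched with a stdlib-only stand-in for Hadoop's Configuration (the HashMap here is a hypothetical substitute; the real Configuration layers hbase-default.xml under hbase-site.xml):

```java
import java.util.HashMap;
import java.util.Map;

public class DaysBeforeMergeLookup {
  // Hypothetical stand-in for Configuration: values loaded from
  // hbase-default.xml, overridden by hbase-site.xml.
  static final Map<String, String> conf = new HashMap<>();

  // Mirrors the getInt(key, defaultValue) lookup pattern: the in-code
  // fallback is only consulted when the key is absent from the config.
  static int getInt(String key, int fallback) {
    String v = conf.get(key);
    return v == null ? fallback : Integer.parseInt(v);
  }

  public static void main(String[] args) {
    // If the key ships in hbase-default.xml, the in-code fallback is never used...
    conf.put("hbase.master.normalize.daysBeforeMerge", "3");
    System.out.println(getInt("hbase.master.normalize.daysBeforeMerge", -1)); // prints 3
    // ...which is why a DEFAULT_MIN_DAYS_BEFORE_MERGE constant becomes redundant.
  }
}
```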


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer 
which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362380543
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/BaseNormalizer.java
 ##
 @@ -0,0 +1,214 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.normalizer;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Size;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.MasterSwitchType;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.MasterRpcServices;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public abstract class BaseNormalizer implements RegionNormalizer {
+  private static final Logger LOG = LoggerFactory.getLogger(BaseNormalizer.class);
+  protected MasterServices masterServices;
+  protected MasterRpcServices masterRpcServices;
+
+  /**
+   * Set the master service.
+   * @param masterServices inject instance of MasterServices
+   */
+  @Override
+  public void setMasterServices(MasterServices masterServices) {
+    this.masterServices = masterServices;
+  }
+
+  @Override
+  public void setMasterRpcServices(MasterRpcServices masterRpcServices) {
+    this.masterRpcServices = masterRpcServices;
+  }
+
+  protected long getRegionSize(RegionInfo hri) {
+    ServerName sn = masterServices.getAssignmentManager().getRegionStates()
+      .getRegionServerOfRegion(hri);
+    RegionMetrics regionLoad = masterServices.getServerManager().getLoad(sn)
+      .getRegionMetrics().get(hri.getRegionName());
+    if (regionLoad == null) {
+      LOG.debug(hri.getRegionNameAsString() + " was not found in RegionsLoad");
+      return -1;
+    }
+    return (long) regionLoad.getStoreFileSize().get(Size.Unit.MEGABYTE);
+  }
+
+  protected boolean isMergeEnabled() {
+    boolean mergeEnabled = true;
+    try {
+      mergeEnabled = masterRpcServices
+        .isSplitOrMergeEnabled(null,
+          RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE))
+        .getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.warn("Unable to determine whether merge is enabled", e);
+    }
+    return mergeEnabled;
+  }
+
+  protected boolean isSplitEnabled() {
+    boolean splitEnabled = true;
+    try {
+      splitEnabled = masterRpcServices
+        .isSplitOrMergeEnabled(null,
+          RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT))
+        .getEnabled();
+    } catch (ServiceException se) {
+      LOG.warn("Unable to determine whether split is enabled", se);
+    }
+    return splitEnabled;
+  }
+
+  /**
+   * @param tableRegions regions of the table being normalized
+   * @return average region size depending on
+   *   {@link org.apache.hadoop.hbase.client.TableDescriptor#getNormalizerTargetRegionCount()}
+   *
+   * Also make sure we are sending regions of the same table.
+   */
+  protected double getAvgRegionSize(List<RegionInfo> tableRegions) {
+    long totalSizeMb = 0;
+    int actualRegionCnt = 0;
+    for (RegionInfo hri : tableRegions) {
+      long regionSize = getRegionSize(hri);
+      // don't consider regions whose size could not be determined when averaging the size.
+      if (regionSize > 0) {
+        actualRegionCnt++;
+        totalSizeMb += regionSize;
+      }
+    }
+    TableName table = tableRegions.get(0).getTable();
+    int targetRegionCount = -1;
+    long targetRegionSize = -1;
+    try {
+      TableDescriptor tableDescriptor = masterServices.getTableDescriptors().get(table);
+      if (tableDescriptor != null) {
+        targetRegionCount = tableDescriptor.getNormalizerTargetRegionCount();
+

[GitHub] [hbase] mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer 
which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362381889
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -158,6 +158,12 @@
   public static final String HBASE_MASTER_NORMALIZER_CLASS =
     "hbase.master.normalizer.class";
 
+  /** Config for min age of a region before being considered for merge in the normalizer */
+  public static final int DEFAULT_MIN_DAYS_BEFORE_MERGE = 3;
+
+  public static final String HBASE_MASTER_DAYS_BEFORE_MERGE =
+    "hbase.master.normalize.daysBeforeMerge";
+
 
 Review comment:
   Agreed. So maybe I can move it to hbase-default.xml and read it from there.
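A sketch of what such an hbase-default.xml entry could look like. Only the key and the default value come from the patch; the description text is illustrative:

```xml
<!-- Hypothetical hbase-default.xml entry for the setting under discussion -->
<property>
  <name>hbase.master.normalize.daysBeforeMerge</name>
  <value>3</value>
  <description>Minimum age, in days, of a region before the normalizer
    considers it for merging (proposed in HBASE-22285).</description>
</property>
```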


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer 
which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362380303
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
 ##
 @@ -132,18 +107,6 @@ public int compare(NormalizationPlan plan1, NormalizationPlan plan2) {
       return null;
     }
     boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether merge is enabled", e);
-    }
 
 Review comment:
   Yes, missed it. Thanks for pointing it out.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer 
which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362380234
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
 ##
 @@ -241,16 +147,4 @@ public int compare(NormalizationPlan plan1, NormalizationPlan plan2) {
     Collections.sort(plans, planComparator);
     return plans;
   }
-
-  private long getRegionSize(RegionInfo hri) {
-    ServerName sn = masterServices.getAssignmentManager().getRegionStates()
-      .getRegionServerOfRegion(hri);
-    RegionMetrics regionLoad = masterServices.getServerManager().getLoad(sn)
-      .getRegionMetrics().get(hri.getRegionName());
-    if (regionLoad == null) {
-      LOG.debug(hri.getRegionNameAsString() + " was not found in RegionsLoad");
-      return -1;
-    }
-    return (long) regionLoad.getStoreFileSize().get(Size.Unit.MEGABYTE);
-  }
 
 Review comment:
   Thanks for the review, sir. Currently working on tests with valid test cases 
for merge.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer 
which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362380577
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/BaseNormalizer.java
 ##
 @@ -0,0 +1,214 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.normalizer;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Size;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.MasterSwitchType;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.MasterRpcServices;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public abstract class BaseNormalizer implements RegionNormalizer {
+  private static final Logger LOG = LoggerFactory.getLogger(BaseNormalizer.class);
+  protected MasterServices masterServices;
+  protected MasterRpcServices masterRpcServices;
+
+  /**
+   * Set the master service.
+   * @param masterServices inject instance of MasterServices
+   */
+  @Override
+  public void setMasterServices(MasterServices masterServices) {
+    this.masterServices = masterServices;
+  }
+
+  @Override
+  public void setMasterRpcServices(MasterRpcServices masterRpcServices) {
+    this.masterRpcServices = masterRpcServices;
+  }
+
+  protected long getRegionSize(RegionInfo hri) {
+    ServerName sn = masterServices.getAssignmentManager().getRegionStates()
+      .getRegionServerOfRegion(hri);
+    RegionMetrics regionLoad = masterServices.getServerManager().getLoad(sn)
+      .getRegionMetrics().get(hri.getRegionName());
+    if (regionLoad == null) {
+      LOG.debug(hri.getRegionNameAsString() + " was not found in RegionsLoad");
+      return -1;
+    }
+    return (long) regionLoad.getStoreFileSize().get(Size.Unit.MEGABYTE);
+  }
+
+  protected boolean isMergeEnabled() {
+    boolean mergeEnabled = true;
+    try {
+      mergeEnabled = masterRpcServices
+        .isSplitOrMergeEnabled(null,
+          RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE))
+        .getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.warn("Unable to determine whether merge is enabled", e);
+    }
+    return mergeEnabled;
+  }
+
+  protected boolean isSplitEnabled() {
+    boolean splitEnabled = true;
+    try {
+      splitEnabled = masterRpcServices
+        .isSplitOrMergeEnabled(null,
+          RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT))
+        .getEnabled();
+    } catch (ServiceException se) {
+      LOG.warn("Unable to determine whether split is enabled", se);
+    }
+    return splitEnabled;
+  }
+
+  /**
+   * @param tableRegions regions of the table being normalized
+   * @return average region size depending on
+   *   {@link org.apache.hadoop.hbase.client.TableDescriptor#getNormalizerTargetRegionCount()}
+   *
+   * Also make sure we are sending regions of the same table.
+   */
+  protected double getAvgRegionSize(List<RegionInfo> tableRegions) {
+    long totalSizeMb = 0;
+    int actualRegionCnt = 0;
+    for (RegionInfo hri : tableRegions) {
+      long regionSize = getRegionSize(hri);
+      // don't consider regions whose size could not be determined when averaging the size.
+      if (regionSize > 0) {
+        actualRegionCnt++;
+        totalSizeMb += regionSize;
+      }
+    }
+    TableName table = tableRegions.get(0).getTable();
+    int targetRegionCount = -1;
+    long targetRegionSize = -1;
+    try {
+      TableDescriptor tableDescriptor = masterServices.getTableDescriptors().get(table);
+      if (tableDescriptor != null) {
+        targetRegionCount = tableDescriptor.getNormalizerTargetRegionCount();
+

[GitHub] [hbase] mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer which merges small size regions with adjacen…

2020-01-01 Thread GitBox
mnpoonia commented on a change in pull request #978: HBASE-22285 A normalizer 
which merges small size regions with adjacen…
URL: https://github.com/apache/hbase/pull/978#discussion_r362380501
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/BaseNormalizer.java
 ##
 @@ -0,0 +1,214 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.normalizer;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Size;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.MasterSwitchType;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.MasterRpcServices;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public abstract class BaseNormalizer implements RegionNormalizer {
+  private static final Logger LOG = LoggerFactory.getLogger(BaseNormalizer.class);
+  protected MasterServices masterServices;
+  protected MasterRpcServices masterRpcServices;
+
+  /**
+   * Set the master service.
+   * @param masterServices inject instance of MasterServices
+   */
+  @Override
+  public void setMasterServices(MasterServices masterServices) {
+    this.masterServices = masterServices;
+  }
+
+  @Override
+  public void setMasterRpcServices(MasterRpcServices masterRpcServices) {
+    this.masterRpcServices = masterRpcServices;
+  }
+
+  protected long getRegionSize(RegionInfo hri) {
+    ServerName sn =
+      masterServices.getAssignmentManager().getRegionStates().getRegionServerOfRegion(hri);
+    RegionMetrics regionLoad =
+      masterServices.getServerManager().getLoad(sn).getRegionMetrics().get(hri.getRegionName());
+    if (regionLoad == null) {
+      LOG.debug(hri.getRegionNameAsString() + " was not found in RegionsLoad");
+      return -1;
+    }
+    return (long) regionLoad.getStoreFileSize().get(Size.Unit.MEGABYTE);
+  }
+
+  protected boolean isMergeEnabled() {
+    boolean mergeEnabled = true;
+    try {
+      mergeEnabled = masterRpcServices
+        .isSplitOrMergeEnabled(null,
+          RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE))
+        .getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.warn("Unable to determine whether merge is enabled", e);
+    }
+    return mergeEnabled;
+  }
+
+  protected boolean isSplitEnabled() {
+    boolean splitEnabled = true;
+    try {
+      splitEnabled = masterRpcServices
+        .isSplitOrMergeEnabled(null,
+          RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT))
+        .getEnabled();
+    } catch (ServiceException se) {
+      LOG.warn("Unable to determine whether split is enabled", se);
+    }
+    return splitEnabled;
+  }
+
+  /**
+   * @param tableRegions regions to average over; all must belong to the same table
+   * @return average region size, taking
+   *   {@link org.apache.hadoop.hbase.client.TableDescriptor#getNormalizerTargetRegionCount()}
+   *   into account
+   */
+  protected double getAvgRegionSize(List<RegionInfo> tableRegions) {
+    long totalSizeMb = 0;
+    int actualRegionCnt = 0;
+    for (RegionInfo hri : tableRegions) {
+      long regionSize = getRegionSize(hri);
+      // skip regions whose size could not be determined (getRegionSize returns -1)
+      if (regionSize > 0) {
+        actualRegionCnt++;
+        totalSizeMb += regionSize;
+      }
+    }
+    TableName table = tableRegions.get(0).getTable();
+    int targetRegionCount = -1;
+    long targetRegionSize = -1;
+    try {
+      TableDescriptor tableDescriptor = masterServices.getTableDescriptors().get(table);
+      if (tableDescriptor != null) {
+        targetRegionCount = tableDescriptor.getNormalizerTargetRegionCount();
+
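The diff above is truncated after reading the table's target region count. As a hedged sketch (an assumption about intent, not the committed patch), the final average typically prefers a table's configured target region count when one is set, and otherwise averages over the regions whose size was actually measured:

```java
// Hypothetical completion of the truncated getAvgRegionSize() logic above.
// This is a sketch, NOT the committed patch: names mirror the diff, but the
// fallback behaviour is an assumption.
public class AvgRegionSizeSketch {

  static double avgRegionSize(long totalSizeMb, int actualRegionCnt, int targetRegionCount) {
    if (targetRegionCount > 0) {
      // The table declares how many regions it wants: size toward that target.
      return (double) totalSizeMb / targetRegionCount;
    }
    // No target configured: plain average over the measured regions.
    return actualRegionCnt == 0 ? 0 : (double) totalSizeMb / actualRegionCnt;
  }

  public static void main(String[] args) {
    System.out.println(avgRegionSize(100, 4, -1)); // plain average: 25.0
    System.out.println(avgRegionSize(100, 4, 10)); // target-driven: 10.0
  }
}
```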

[GitHub] [hbase] m-manu commented on issue #976: HBASE-23629: Add to 'Supporting Projects' in site

2020-01-01 Thread GitBox
m-manu commented on issue #976: HBASE-23629: Add to 'Supporting Projects' in 
site
URL: https://github.com/apache/hbase/pull/976#issuecomment-570128249
 
 
   Removed whitespace at the end of the tag based on recommendation from HBase 
Robot.
   
   Added JIRA ticket to title of the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23629) Addition to Supporting projects page

2020-01-01 Thread Manu Manjunath (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manu Manjunath updated HBASE-23629:
---
Description: 
At [Flipkart|https://flipkart.com/], we are using a light-weight ORM wrapper on 
top of hbase-client: 
[hbase-orm|https://github.com/flipkart-incubator/hbase-orm] (open source, 
Apache 2.0 license)

It helps Java applications interact with HBase in an object-oriented manner, by 
encapsulating {{Get}}, {{Put}}, {{Delete}}, {{Append}}, {{Increment}} and other 
classes in hbase-client.
  
 It's published on Maven Central:
 
[https://search.maven.org/search?q=g:com.flipkart%20AND%20a:hbase-object-mapper=gav]

Kindly list this as a supporting project. PR to follow.
 

  was:
At [Flipkart|https://flipkart.com/], we are using a light-weight ORM wrapper on 
top of hbase-client: 
[hbase-orm|https://github.com/flipkart-incubator/hbase-orm] (open source, 
Apache 2.0 license)

It helps Java applications interact with HBase in an object-oriented manner, by 
encapsulating {{Get}}, {{Put}}, {{Delete}}, {{Append}}, {{Increment}} and other 
classes in hbase-client.
  
 It's published on Maven Central:
 
[https://search.maven.org/search?q=g:com.flipkart%20AND%20a:hbase-object-mapper=gav]


> Addition to Supporting projects page
> 
>
> Key: HBASE-23629
> URL: https://issues.apache.org/jira/browse/HBASE-23629
> Project: HBase
>  Issue Type: Improvement
>Reporter: Manu Manjunath
>Priority: Minor
>
> At [Flipkart|https://flipkart.com/], we are using a light-weight ORM wrapper 
> on top of hbase-client: 
> [hbase-orm|https://github.com/flipkart-incubator/hbase-orm] (open source, 
> Apache 2.0 license)
> It helps Java applications interact with HBase in an object-oriented manner, 
> by encapsulating {{Get}}, {{Put}}, {{Delete}}, {{Append}}, {{Increment}} and 
> other classes in hbase-client.
>   
>  It's published on Maven Central:
>  
> [https://search.maven.org/search?q=g:com.flipkart%20AND%20a:hbase-object-mapper=gav]
> Kindly list this as supporting project. PR to follow.
>  
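For readers unfamiliar with the ORM idea, here is a minimal self-contained sketch of the mapping style such a wrapper enables. This is NOT hbase-orm's actual API; it is a toy illustration using plain Java reflection, where annotated fields yield the column map an ORM would translate into hbase-client Put/Get calls.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy ORM sketch: fields annotated with "family:qualifier" names are
// collected into a column -> value map.
public class OrmSketch {

  @Retention(RetentionPolicy.RUNTIME)
  @interface HBColumn { String value(); }

  static class Counter {
    @HBColumn("main:value") long value = 42;
    @HBColumn("meta:owner") String owner = "alice";
  }

  static Map<String, Object> toColumns(Object row) {
    Map<String, Object> cols = new LinkedHashMap<>();
    for (Field f : row.getClass().getDeclaredFields()) {
      HBColumn c = f.getAnnotation(HBColumn.class);
      if (c == null) continue; // not mapped to a column
      try {
        f.setAccessible(true);
        cols.put(c.value(), f.get(row)); // column name -> field value
      } catch (IllegalAccessException e) {
        throw new IllegalStateException(e);
      }
    }
    return cols;
  }

  public static void main(String[] args) {
    System.out.println(toColumns(new Counter()));
  }
}
```

An ORM like the one described above would then hand each entry of this map to hbase-client, sparing the application from building `Put` objects by hand.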



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23629) Addition to Supporting projects page

2020-01-01 Thread Manu Manjunath (Jira)
Manu Manjunath created HBASE-23629:
--

 Summary: Addition to Supporting projects page
 Key: HBASE-23629
 URL: https://issues.apache.org/jira/browse/HBASE-23629
 Project: HBase
  Issue Type: Improvement
Reporter: Manu Manjunath


At [Flipkart|https://flipkart.com/], we are using a light-weight ORM wrapper on 
top of hbase-client: 
[hbase-orm|https://github.com/flipkart-incubator/hbase-orm] (open source, 
Apache 2.0 license)

It helps Java applications interact with HBase in an object-oriented manner, by 
encapsulating {{Get}}, {{Put}}, {{Delete}}, {{Append}}, {{Increment}} and other 
classes in hbase-client.
  
 It's published on Maven Central:
 
[https://search.maven.org/search?q=g:com.flipkart%20AND%20a:hbase-object-mapper=gav]





[GitHub] [hbase] Apache9 merged pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 merged pull request #975: HBASE-23624 Add a tool to dump the procedure 
info in HFile
URL: https://github.com/apache/hbase/pull/975
 
 
   




[GitHub] [hbase] Apache9 commented on issue #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on issue #975: HBASE-23624 Add a tool to dump the procedure 
info in HFile
URL: https://github.com/apache/hbase/pull/975#issuecomment-570110295
 
 
   The failed UT passes locally, and the problem in pre-commit is 
CallQueueTooBig; something seems wrong on the RS side, so it should not be 
related to the patch here. Let me merge.




[GitHub] [hbase] Apache9 commented on a change in pull request #974: HBASE-23587 The FSYNC_WAL flag does not work on branch-2.x

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #974: HBASE-23587 The FSYNC_WAL 
flag does not work on branch-2.x
URL: https://github.com/apache/hbase/pull/974#discussion_r362363283
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
 ##
 @@ -577,10 +577,9 @@ public void run() {
   //TraceScope scope = Trace.continueSpan(takeSyncFuture.getSpan());
   long start = System.nanoTime();
   Throwable lastException = null;
-  boolean wasRollRequested = false;
   try {
 TraceUtil.addTimelineAnnotation("syncing writer");
-writer.sync(useHsync);
+writer.sync(takeSyncFuture.isForceSync());
 
 Review comment:
   Since durability can be changed per mutation, you can not use different 
WALs for different mutations of the same region. But anyway, you could 
implement a multi-WAL strategy that uses different WALs for different 
durability levels.
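A toy model of the bug and fix under review (class and method names are illustrative; the real change is in FSHLog's sync runner): before the patch, one WAL-wide `useHsync` flag decided hsync vs hflush for every queued sync, so a mutation's FSYNC_WAL durability could be silently downgraded; after the patch, each queued SyncFuture carries its own force-sync flag and is honoured individually.

```java
// Minimal model: each SyncFuture remembers whether its mutation demanded a
// true fsync (FSYNC_WAL) rather than a buffered flush (SYNC_WAL).
class SyncFuture {
  private final boolean forceSync;
  SyncFuture(boolean forceSync) { this.forceSync = forceSync; }
  boolean isForceSync() { return forceSync; }
}

public class WalSyncSketch {
  // Before the patch: writer.sync(useHsync) -- the per-future flag is ignored.
  static String syncOld(SyncFuture f, boolean useHsync) {
    return useHsync ? "hsync" : "hflush";
  }

  // After the patch: writer.sync(takeSyncFuture.isForceSync()) -- per-future.
  static String syncNew(SyncFuture f) {
    return f.isForceSync() ? "hsync" : "hflush";
  }

  public static void main(String[] args) {
    SyncFuture fsyncWal = new SyncFuture(true); // mutation wrote with FSYNC_WAL
    System.out.println(syncOld(fsyncWal, false)); // hflush -> the bug
    System.out.println(syncNew(fsyncWal));        // hsync  -> flag honoured
  }
}
```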




[GitHub] [hbase] infraio commented on a change in pull request #974: HBASE-23587 The FSYNC_WAL flag does not work on branch-2.x

2020-01-01 Thread GitBox
infraio commented on a change in pull request #974: HBASE-23587 The FSYNC_WAL 
flag does not work on branch-2.x
URL: https://github.com/apache/hbase/pull/974#discussion_r362356442
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
 ##
 @@ -577,10 +577,9 @@ public void run() {
   //TraceScope scope = Trace.continueSpan(takeSyncFuture.getSpan());
   long start = System.nanoTime();
   Throwable lastException = null;
-  boolean wasRollRequested = false;
   try {
 TraceUtil.addTimelineAnnotation("syncing writer");
-writer.sync(useHsync);
+writer.sync(takeSyncFuture.isForceSync());
 
 Review comment:
   So the design allows different tables/mutations to use different durability 
levels, but these tables share one WAL writer?




[jira] [Commented] (HBASE-23590) Update maxStoreFileRefCount to maxCompactedStoreFileRefCount

2020-01-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006520#comment-17006520
 ] 

Hudson commented on HBASE-23590:


Results for branch branch-2
[build #2403 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2403/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2403//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2403//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2403//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Update maxStoreFileRefCount to maxCompactedStoreFileRefCount
> 
>
> Key: HBASE-23590
> URL: https://issues.apache.org/jira/browse/HBASE-23590
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0, 1.6.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> As per discussion on HBASE-23349, RegionsRecoveryChore should use max 
> refCount on compacted away store files and not on new store files to 
> determine when to reopen the region. Although work on HBASE-23349 is in 
> progress, we need to at least update the metric to get the desired refCount 
> i.e. max refCount among all compacted away store files for a given region.





[GitHub] [hbase] HorizonNet merged pull request #970: HBASE-23623 Reduced the number of Checkstyle violations in hbase-rest

2020-01-01 Thread GitBox
HorizonNet merged pull request #970: HBASE-23623 Reduced the number of 
Checkstyle violations in hbase-rest
URL: https://github.com/apache/hbase/pull/970
 
 
   




[GitHub] [hbase] Apache-HBase commented on issue #948: HBASE-23588 : Cache index & bloom blocks on write if CacheCompactedBl…

2020-01-01 Thread GitBox
Apache-HBase commented on issue #948: HBASE-23588 : Cache index & bloom blocks 
on write if CacheCompactedBl…
URL: https://github.com/apache/hbase/pull/948#issuecomment-570087910
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   7m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 46s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 56s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 35s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 41s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 150m 36s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 212m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-948/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/948 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 0646e1f95bf5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-948/out/precommit/personality/provided.sh
 |
   | git revision | master / e32dbe8ed2 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-948/8/testReport/
 |
   | Max. process+thread count | 4906 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-948/8/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Updated] (HBASE-23627) Resolve remaining Checkstyle violations in hbase-thrift

2020-01-01 Thread Jan Hentschel (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-23627:
--
Fix Version/s: 2.2.3
   2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Resolve remaining Checkstyle violations in hbase-thrift
> ---
>
> Key: HBASE-23627
> URL: https://issues.apache.org/jira/browse/HBASE-23627
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> There are four Checkstyle violations remaining in {{hbase-thrift}}, which 
> should be resolved.





[GitHub] [hbase] HorizonNet merged pull request #973: HBASE-23627 Resolved remaining Checkstyle violations in hbase-thrift

2020-01-01 Thread GitBox
HorizonNet merged pull request #973: HBASE-23627 Resolved remaining Checkstyle 
violations in hbase-thrift
URL: https://github.com/apache/hbase/pull/973
 
 
   




[jira] [Updated] (HBASE-23625) Reduce number of Checkstyle violations in hbase-common

2020-01-01 Thread Jan Hentschel (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-23625:
--
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Reduce number of Checkstyle violations in hbase-common
> --
>
> Key: HBASE-23625
> URL: https://issues.apache.org/jira/browse/HBASE-23625
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0
>
>
> In {{hbase-common}} Checkstyle reports a lot of violations. The number of 
> violations should be reduced.





[GitHub] [hbase] HorizonNet merged pull request #971: HBASE-23625 Reduced number of Checkstyle violations in hbase-common

2020-01-01 Thread GitBox
HorizonNet merged pull request #971: HBASE-23625 Reduced number of Checkstyle 
violations in hbase-common
URL: https://github.com/apache/hbase/pull/971
 
 
   




[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client

2020-01-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006489#comment-17006489
 ] 

Hudson commented on HBASE-18095:


Results for branch HBASE-18095/client-locate-meta-no-zookeeper
[build #26 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/26/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/26//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/26//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/26//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> 
>
> Key: HBASE-18095
> URL: https://issues.apache.org/jira/browse/HBASE-18095
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Andrew Kyle Purtell
>Assignee: Bharath Vissapragada
>Priority: Major
> Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers and the client uses an embedded ZK client to 
> query meta location. Timeouts and retry behavior of this embedded ZK client 
> are managed orthogonally to HBase layer settings and in some cases the ZK 
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage 
> or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. The ZK service need not be exposed to clients and 
> their potential abuse, while none of the benefits ZK provides to the HBase 
> server cluster are compromised. Normalizing HBase client and ZK client 
> timeout settings and retry behavior - in some cases impossible, i.e. for 
> fail-fast - is no longer necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from Zookeeper to 
> select the token identifier to use. So there would also need to be some 
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well. 
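The lookup flow described above can be modeled abstractly (all names here are illustrative, not the actual HBase client code): a client asks any configured master for its cached meta location and, if that answer turns out stale, asks again with a refresh flag so the master re-resolves before answering.

```java
import java.util.List;
import java.util.function.Predicate;

// Abstract sketch of master-based meta lookup with a cached answer and a
// refresh-on-stale retry, per the design described in the issue.
public class MetaLookupSketch {

  interface Master {
    String getMetaLocation(boolean refresh);
  }

  static String locateMeta(List<Master> masters, Predicate<String> isLive) {
    for (Master m : masters) {
      String loc = m.getMetaLocation(false); // cached answer first
      if (loc != null && isLive.test(loc)) return loc;
      loc = m.getMetaLocation(true);         // cache was stale: force refresh
      if (loc != null && isLive.test(loc)) return loc;
    }
    throw new IllegalStateException("no master returned a live meta location");
  }

  public static void main(String[] args) {
    // One master whose cached location is stale until refreshed.
    Master m = refresh -> refresh ? "rs2:16020" : "rs1:16020";
    System.out.println(locateMeta(List.of(m), loc -> loc.equals("rs2:16020")));
    // prints rs2:16020
  }
}
```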





[jira] [Commented] (HBASE-23349) Low refCount preventing archival of compacted away files

2020-01-01 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006485#comment-17006485
 ] 

Lars Hofhansl commented on HBASE-23349:
---

Sure.

[~ram_krish], [~anoop.hbase], FYI. I know you guys invested a lot of time in 
this. In light of the issues, I'm in favor of removing the refcounting code and 
restoring the old behavior. Let's have a discussion.

 

> Low refCount preventing archival of compacted away files
> 
>
> Key: HBASE-23349
> URL: https://issues.apache.org/jira/browse/HBASE-23349
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0, 1.6.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> We have observed that a refCount as low as 1 on compacted-away store files 
> is preventing archival.
> {code:java}
> regionserver.HStore - Can't archive compacted file 
> hdfs://{{root-dir}}/hbase/data/default/t1/12a9e1112e0371955b3db8d3ebb2d298/cf1/73b72f5ddfce4a34a9e01afe7b83c1f9
>  because of either isCompactedAway=true or file has reference, 
> isReferencedInReads=true, refCount=1, skipping for now.
> {code}
> We should come up with core code (run as part of the discharger thread) that 
> gracefully resolves the reader lock issue by resetting ongoing scanners to 
> point to new store files instead of compacted-away store files.
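A toy model of the archival gate behind the log message quoted above (names are illustrative, not HStore's actual fields): a compacted-away store file can only be archived once no scanner holds a reader reference, i.e. its refCount has dropped to zero.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the condition that produces "skipping for now" in the log above.
public class CompactedFileSketch {

  static class StoreFile {
    final AtomicInteger refCount = new AtomicInteger();
    boolean compactedAway;

    boolean canArchive() {
      // Archivable only once compacted away AND no reader references remain.
      return compactedAway && refCount.get() == 0;
    }
  }

  public static void main(String[] args) {
    StoreFile f = new StoreFile();
    f.compactedAway = true;
    f.refCount.incrementAndGet();       // a long-running scanner holds a ref
    System.out.println(f.canArchive()); // false -> "skipping for now"
    f.refCount.decrementAndGet();       // scanner reset onto new store files
    System.out.println(f.canArchive()); // true -> safe to archive
  }
}
```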





[GitHub] [hbase] Apache-HBase commented on issue #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache-HBase commented on issue #975: HBASE-23624 Add a tool to dump the 
procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#issuecomment-570075772
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 25s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 11s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  hbase-common: The patch 
generated 0 new + 27 unchanged - 1 fixed = 27 total (was 28)  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  The patch passed checkstyle 
in hbase-server  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 42s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 47s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 18s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  | 152m 22s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 218m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestAsyncTable |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-975/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/975 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux e6d6f9daea5e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-975/out/precommit/personality/provided.sh
 |
   | git revision | master / 06eff551c3 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-975/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-975/2/testReport/
 |
   | Max. process+thread count | 4905 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-975/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Updated] (HBASE-23590) Update maxStoreFileRefCount to maxCompactedStoreFileRefCount

2020-01-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-23590:
-
Release Note: 
RegionsRecoveryChore introduced as part of HBASE-22460 tries to reopen regions 
based on config: hbase.regions.recovery.store.file.ref.count.
Region reopen needs to take into consideration all compacted-away store files 
that belong to the region, not new (non-compacted) store files.

Fixed this bug as part of this Jira. 
Updated description for corresponding configs:

1. hbase.master.regions.recovery.check.interval :

Regions Recovery Chore interval in milliseconds. The chore runs at this 
interval to find all regions whose compacted store file ref count exceeds the 
configured threshold and reopens them. Defaults to 20 minutes.

2. hbase.regions.recovery.store.file.ref.count :

A very large ref count on a compacted store file indicates a reference leak on 
that object (the compacted store file). Such files cannot be removed after 
they are invalidated via compaction. The only way to recover in such a 
scenario is to reopen the region, which releases all resources: refCounts, 
leases, etc. This config represents the store file ref count threshold 
considered for reopening regions. Any region whose compacted store files have 
a ref count greater than this value is eligible for reopening by the master. 
Here, we take the max refCount among all compacted-away store files that 
belong to a particular region. The default value of -1 indicates the feature 
is turned off; only a positive integer value should be provided to enable it.

  Resolution: Fixed
  Status: Resolved  (was: Patch Available)
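The two properties described in the release note above would be set in hbase-site.xml. The values below are illustrative, not recommendations:

```xml
<!-- hbase-site.xml: illustrative values only -->
<property>
  <name>hbase.master.regions.recovery.check.interval</name>
  <!-- chore interval in milliseconds; default is 20 minutes -->
  <value>1200000</value>
</property>
<property>
  <name>hbase.regions.recovery.store.file.ref.count</name>
  <!-- reopen a region when the max refCount on its compacted-away store
       files exceeds this; -1 (the default) disables the feature -->
  <value>3</value>
</property>
```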

> Update maxStoreFileRefCount to maxCompactedStoreFileRefCount
> 
>
> Key: HBASE-23590
> URL: https://issues.apache.org/jira/browse/HBASE-23590
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0, 1.6.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> As per discussion on HBASE-23349, RegionsRecoveryChore should use max 
> refCount on compacted away store files and not on new store files to 
> determine when to reopen the region. Although work on HBASE-23349 is in 
> progress, we need to at least update the metric to get the desired refCount 
> i.e. max refCount among all compacted away store files for a given region.





[jira] [Updated] (HBASE-23213) Backport HBASE-22460 to branch-1

2020-01-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-23213:
-
Fix Version/s: (was: 1.6.1)
   1.6.0

> Backport HBASE-22460 to branch-1
> 
>
> Key: HBASE-23213
> URL: https://issues.apache.org/jira/browse/HBASE-23213
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.5.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 1.6.0
>
>
> Backport HBASE-22460 (Reopen a region if store reader references may have 
> leaked) to branch-1





[GitHub] [hbase] virajjasani closed pull request #950: HBASE-23590 : Update maxStoreFileRefCount to maxCompactedStoreFileRef…

2020-01-01 Thread GitBox
virajjasani closed pull request #950: HBASE-23590 : Update maxStoreFileRefCount 
to maxCompactedStoreFileRef…
URL: https://github.com/apache/hbase/pull/950
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to 
dump the procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#discussion_r362328873
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure2/store/region/HFileProcedurePrettyPrinter.java
 ##
 @@ -0,0 +1,174 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.procedure2.store.region;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.PrivateCellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.procedure2.Procedure;
+import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
+import org.apache.hadoop.hbase.util.AbstractHBaseTool;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.Option;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.OptionGroup;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos;
+
+/**
+ * A tool to dump the procedures in the HFiles.
+ * 
+ * The difference between this and {@link 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} is
+ * that, this class will decode the procedure in the cell for better 
debugging. You are free to use
+ * {@link org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} to dump the 
same file as well.
+ */
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
+@InterfaceStability.Evolving
+public class HFileProcedurePrettyPrinter extends AbstractHBaseTool {
+
+  private Long procId;
+
+  private List<Path> files = new ArrayList<>();
+
+  private final PrintStream out;
+
+  public HFileProcedurePrettyPrinter() {
+this(System.out);
+  }
+
+  public HFileProcedurePrettyPrinter(PrintStream out) {
+this.out = out;
+  }
+
+  @Override
+  protected void addOptions() {
+addOptWithArg("w", "seekToPid", "Seek to this procedure id and print this 
procedure only");
+OptionGroup files = new OptionGroup();
+files.addOption(new Option("f", "file", true,
+  "File to scan. Pass full-path; e.g. 
hdfs://a:9000/MasterProcs/master/procedure/p/xxx"));
+files.addOption(new Option("a", "all", false, "Scan the whole procedure 
region."));
+options.addOptionGroup(files);
+  }
+
+  private void addAllHFiles() throws IOException {
+Path masterProcDir =
+  new Path(CommonFSUtils.getWALRootDir(conf), 
RegionProcedureStore.MASTER_PROCEDURE_DIR);
+Path tableDir = CommonFSUtils.getTableDir(masterProcDir, 
RegionProcedureStore.TABLE_NAME);
+FileSystem fs = tableDir.getFileSystem(conf);
+Path regionDir =
+  fs.listStatus(tableDir, p -> RegionInfo.isEncodedRegionName(Bytes.toBytes(p.getName())))[0]
+.getPath();
+List<Path> regionFiles = HFile.getStoreFiles(fs, regionDir);
+files.addAll(regionFiles);
+  }
+
+  @Override
+  protected void processOptions(CommandLine cmd) {
+if (cmd.hasOption("w")) {
+  String key = cmd.getOptionValue("w");
+  if (key != null && key.length() != 0) {
+procId = Long.parseLong(key);
+  } else {
+throw new IllegalArgumentException("Invalid row is specified.");
+  }
+}
+if (cmd.hasOption("f")) {
+  files.add(new Path(cmd.getOptionValue("f")));
+}
+if (cmd.hasOption("a")) {
+  try {
+addAllHFiles();
+  } catch (IOException e) {
+throw new UncheckedIOException(e);
+  }
+}
+  }
 
 Review comment:
   Set the optional group to required, so if no 

[GitHub] [hbase] Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to 
dump the procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#discussion_r362322393
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure2/store/region/HFileProcedurePrettyPrinter.java
 ##
 @@ -0,0 +1,174 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.procedure2.store.region;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.PrivateCellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.procedure2.Procedure;
+import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
+import org.apache.hadoop.hbase.util.AbstractHBaseTool;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.Option;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.OptionGroup;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos;
+
+/**
+ * A tool to dump the procedures in the HFiles.
+ * 
+ * The difference between this and {@link 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} is
+ * that, this class will decode the procedure in the cell for better 
debugging. You are free to use
+ * {@link org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} to dump the 
same file as well.
+ */
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
+@InterfaceStability.Evolving
+public class HFileProcedurePrettyPrinter extends AbstractHBaseTool {
+
+  private Long procId;
+
+  private List<Path> files = new ArrayList<>();
+
+  private final PrintStream out;
+
+  public HFileProcedurePrettyPrinter() {
+this(System.out);
+  }
+
+  public HFileProcedurePrettyPrinter(PrintStream out) {
+this.out = out;
+  }
+
+  @Override
+  protected void addOptions() {
+addOptWithArg("w", "seekToPid", "Seek to this procedure id and print this 
procedure only");
+OptionGroup files = new OptionGroup();
+files.addOption(new Option("f", "file", true,
+  "File to scan. Pass full-path; e.g. 
hdfs://a:9000/MasterProcs/master/procedure/p/xxx"));
+files.addOption(new Option("a", "all", false, "Scan the whole procedure 
region."));
+options.addOptionGroup(files);
+  }
+
+  private void addAllHFiles() throws IOException {
+Path masterProcDir =
+  new Path(CommonFSUtils.getWALRootDir(conf), 
RegionProcedureStore.MASTER_PROCEDURE_DIR);
+Path tableDir = CommonFSUtils.getTableDir(masterProcDir, 
RegionProcedureStore.TABLE_NAME);
+FileSystem fs = tableDir.getFileSystem(conf);
+Path regionDir =
+  fs.listStatus(tableDir, p -> RegionInfo.isEncodedRegionName(Bytes.toBytes(p.getName())))[0]
+.getPath();
+List<Path> regionFiles = HFile.getStoreFiles(fs, regionDir);
+files.addAll(regionFiles);
+  }
+
+  @Override
+  protected void processOptions(CommandLine cmd) {
+if (cmd.hasOption("w")) {
+  String key = cmd.getOptionValue("w");
+  if (key != null && key.length() != 0) {
+procId = Long.parseLong(key);
+  } else {
+throw new IllegalArgumentException("Invalid row is specified.");
+  }
+}
+if (cmd.hasOption("f")) {
+  files.add(new Path(cmd.getOptionValue("f")));
+}
+if (cmd.hasOption("a")) {
+  try {
+addAllHFiles();
+  } catch (IOException e) {
+throw new UncheckedIOException(e);
+  }
+}
+  }
+
+  private void printCell(Cell cell) throws IOException {
+

[GitHub] [hbase] Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to 
dump the procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#discussion_r362321526
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure2/store/region/RegionProcedureStore.java
 ##
 @@ -112,17 +112,15 @@
 
   static final String LOGCLEANER_PLUGINS = 
"hbase.procedure.store.region.logcleaner.plugins";
 
-  private static final String DATA_DIR = "data";
-
   private static final String REPLAY_EDITS_DIR = "replay";
 
 Review comment:
   Yes, this will be a good name. Let me change.




[GitHub] [hbase] Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to 
dump the procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#discussion_r362321506
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure2/store/region/HFileProcedurePrettyPrinter.java
 ##
 @@ -0,0 +1,174 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.procedure2.store.region;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.PrivateCellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.procedure2.Procedure;
+import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
+import org.apache.hadoop.hbase.util.AbstractHBaseTool;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.Option;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.OptionGroup;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos;
+
+/**
+ * A tool to dump the procedures in the HFiles.
+ * 
+ * The difference between this and {@link 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} is
+ * that, this class will decode the procedure in the cell for better 
debugging. You are free to use
+ * {@link org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} to dump the 
same file as well.
+ */
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
+@InterfaceStability.Evolving
+public class HFileProcedurePrettyPrinter extends AbstractHBaseTool {
+
+  private Long procId;
+
+  private List<Path> files = new ArrayList<>();
+
+  private final PrintStream out;
+
+  public HFileProcedurePrettyPrinter() {
+this(System.out);
+  }
+
+  public HFileProcedurePrettyPrinter(PrintStream out) {
+this.out = out;
+  }
+
+  @Override
+  protected void addOptions() {
+addOptWithArg("w", "seekToPid", "Seek to this procedure id and print this 
procedure only");
+OptionGroup files = new OptionGroup();
+files.addOption(new Option("f", "file", true,
+  "File to scan. Pass full-path; e.g. 
hdfs://a:9000/MasterProcs/master/procedure/p/xxx"));
+files.addOption(new Option("a", "all", false, "Scan the whole procedure 
region."));
+options.addOptionGroup(files);
+  }
+
+  private void addAllHFiles() throws IOException {
+Path masterProcDir =
+  new Path(CommonFSUtils.getWALRootDir(conf), 
RegionProcedureStore.MASTER_PROCEDURE_DIR);
+Path tableDir = CommonFSUtils.getTableDir(masterProcDir, 
RegionProcedureStore.TABLE_NAME);
+FileSystem fs = tableDir.getFileSystem(conf);
+Path regionDir =
+  fs.listStatus(tableDir, p -> RegionInfo.isEncodedRegionName(Bytes.toBytes(p.getName())))[0]
+.getPath();
+List<Path> regionFiles = HFile.getStoreFiles(fs, regionDir);
+files.addAll(regionFiles);
+  }
+
+  @Override
+  protected void processOptions(CommandLine cmd) {
+if (cmd.hasOption("w")) {
+  String key = cmd.getOptionValue("w");
+  if (key != null && key.length() != 0) {
+procId = Long.parseLong(key);
+  } else {
+throw new IllegalArgumentException("Invalid row is specified.");
+  }
+}
+if (cmd.hasOption("f")) {
+  files.add(new Path(cmd.getOptionValue("f")));
+}
+if (cmd.hasOption("a")) {
+  try {
+addAllHFiles();
+  } catch (IOException e) {
+throw new UncheckedIOException(e);
+  }
+}
+  }
 
 Review comment:
   Good point. I haven't considered this. Let me 

[GitHub] [hbase] Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to 
dump the procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#discussion_r362321486
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure2/store/region/HFileProcedurePrettyPrinter.java
 ##
 @@ -0,0 +1,174 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.procedure2.store.region;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.io.UncheckedIOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.PrivateCellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.procedure2.Procedure;
+import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
+import org.apache.hadoop.hbase.util.AbstractHBaseTool;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.Option;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.OptionGroup;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos;
+
+/**
+ * A tool to dump the procedures in the HFiles.
+ * 
+ * The difference between this and {@link 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} is
+ * that, this class will decode the procedure in the cell for better 
debugging. You are free to use
+ * {@link org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter} to dump the 
same file as well.
+ */
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
+@InterfaceStability.Evolving
+public class HFileProcedurePrettyPrinter extends AbstractHBaseTool {
+
+  private Long procId;
+
+  private List<Path> files = new ArrayList<>();
+
+  private final PrintStream out;
+
+  public HFileProcedurePrettyPrinter() {
+this(System.out);
+  }
+
+  public HFileProcedurePrettyPrinter(PrintStream out) {
+this.out = out;
+  }
+
+  @Override
+  protected void addOptions() {
+addOptWithArg("w", "seekToPid", "Seek to this procedure id and print this 
procedure only");
+OptionGroup files = new OptionGroup();
+files.addOption(new Option("f", "file", true,
+  "File to scan. Pass full-path; e.g. 
hdfs://a:9000/MasterProcs/master/procedure/p/xxx"));
+files.addOption(new Option("a", "all", false, "Scan the whole procedure 
region."));
 
 Review comment:
   I did not use the -f option for WALProcedurePrettyPrinter, but here I think 
it is better to have a -f. We have a -a option for scanning all the files, which 
means we do not need to specify any 'args' here. But if we dropped the -f option 
then we would need to pass the files as args. And using a directory is not 
straightforward here, as a region can have multiple families, which are also 
sub-directories under the region directory, so we would need another -r option 
to recursively scan the directory? That seems more complicated...
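The required, mutually exclusive -f / -a choice discussed here can be modeled with a minimal stdlib-only sketch. The real tool uses the shaded commons-cli `OptionGroup`; this hand-rolled parser is a hypothetical stand-in just to show the intended behavior:

```java
import java.util.ArrayList;
import java.util.List;

public class FileOrAllArgsSketch {

    /**
     * Minimal model of a required, mutually exclusive option group:
     * exactly one of -f <file> or -a must be given, mirroring how an
     * OptionGroup marked required behaves in commons-cli.
     */
    static List<String> resolveFiles(String[] args) {
        List<String> files = new ArrayList<>();
        boolean all = false;
        for (int i = 0; i < args.length; i++) {
            if ("-f".equals(args[i])) {
                files.add(args[++i]); // -f takes one argument: the file path
            } else if ("-a".equals(args[i])) {
                all = true;
            }
        }
        if (all && !files.isEmpty()) {
            throw new IllegalArgumentException("-f and -a are mutually exclusive");
        }
        if (!all && files.isEmpty()) {
            throw new IllegalArgumentException("one of -f or -a is required");
        }
        return all ? List.of("<all hfiles under the procedure region>") : files;
    }
}
```

With a required group, forgetting both options fails fast at parse time instead of silently scanning nothing.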




[GitHub] [hbase] Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to dump the procedure info in HFile

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #975: HBASE-23624 Add a tool to 
dump the procedure info in HFile
URL: https://github.com/apache/hbase/pull/975#discussion_r362321218
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
 ##
 @@ -598,19 +598,28 @@ public static void setTimestamp(Cell cell, byte[] ts, 
int tsOffset) throws IOExc
   }
 
   /**
-   * @param cell
* @return The Key portion of the passed cell as a String.
*/
   public static String getCellKeyAsString(Cell cell) {
-StringBuilder sb = new StringBuilder(Bytes.toStringBinary(
-  cell.getRowArray(), cell.getRowOffset(), cell.getRowLength()));
+return getCellKeyAsString(cell,
+  c -> Bytes.toStringBinary(c.getRowArray(), c.getRowOffset(), 
c.getRowLength()));
+  }
+
+  /**
+   * @param cell the cell to convert
+   * @param rowConverter used to convert the row of the cell to a string
+   * @return The Key portion of the passed cell as a String.
+   */
+  public static String getCellKeyAsString(Cell cell, Function<Cell, String> rowConverter) {
+StringBuilder sb = new StringBuilder(rowConverter.apply(cell));
 sb.append('/');
-sb.append(cell.getFamilyLength() == 0? "":
-  Bytes.toStringBinary(cell.getFamilyArray(), cell.getFamilyOffset(), 
cell.getFamilyLength()));
-// KeyValue only added ':' if family is non-null.  Do same.
+sb.append(cell.getFamilyLength() == 0 ? ""
+  : Bytes.toStringBinary(cell.getFamilyArray(), cell.getFamilyOffset(),
+cell.getFamilyLength()));
+// KeyValue only added ':' if family is non-null. Do same.
 if (cell.getFamilyLength() > 0) sb.append(':');
-sb.append(cell.getQualifierLength() == 0? "":
-  Bytes.toStringBinary(cell.getQualifierArray(), cell.getQualifierOffset(),
+sb.append(cell.getQualifierLength() == 0 ? ""
+  : Bytes.toStringBinary(cell.getQualifierArray(), 
cell.getQualifierOffset(),
 
 Review comment:
   Let me check my formatter config...




[GitHub] [hbase] Apache9 commented on a change in pull request #974: HBASE-23587 The FSYNC_WAL flag does not work on branch-2.x

2020-01-01 Thread GitBox
Apache9 commented on a change in pull request #974: HBASE-23587 The FSYNC_WAL 
flag does not work on branch-2.x
URL: https://github.com/apache/hbase/pull/974#discussion_r362321150
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
 ##
 @@ -577,10 +577,9 @@ public void run() {
   //TraceScope scope = Trace.continueSpan(takeSyncFuture.getSpan());
   long start = System.nanoTime();
   Throwable lastException = null;
-  boolean wasRollRequested = false;
   try {
 TraceUtil.addTimelineAnnotation("syncing writer");
-writer.sync(useHsync);
+writer.sync(takeSyncFuture.isForceSync());
 
 Review comment:
   This was also confusing me a bit, but after investigating I think the logic 
is correct. Since this config can be changed at the table level, or even at the 
mutation level by design, we should follow what we have in the SyncFuture, as it 
is set when syncing the WAL in HRegion.
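The point about honoring the per-request flag rather than a WAL-level boolean can be sketched as below. `SyncFuture`, `Writer`, and the consumer loop are simplified stand-ins for illustration, not the actual FSHLog code:

```java
public class SyncFutureSketch {

    /** Simplified stand-in: each sync request carries its own durability demand. */
    static final class SyncFuture {
        private final long txid;
        private final boolean forceSync; // true => hsync (fsync to disk), false => hflush

        SyncFuture(long txid, boolean forceSync) {
            this.txid = txid;
            this.forceSync = forceSync;
        }

        boolean isForceSync() {
            return forceSync;
        }
    }

    interface Writer {
        void sync(boolean useHsync);
    }

    /**
     * The consumer thread consults the SyncFuture it took, not a WAL-wide
     * flag, so a FSYNC_WAL request is honored even when the WAL's default
     * durability is weaker.
     */
    static void syncLoop(SyncFuture takenFuture, Writer writer) {
        writer.sync(takenFuture.isForceSync());
    }
}
```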

