[GitHub] [hbase] shardul-cr7 commented on a change in pull request #598: HBASE-22142 Space quota: If table inside namespace having space quota is dropped, data size usage is still considered for the d

2019-09-19 Thread GitBox
shardul-cr7 commented on a change in pull request #598: HBASE-22142 Space 
quota: If table inside namespace having space quota is dropped, data size usage 
is still considered for the drop table.
URL: https://github.com/apache/hbase/pull/598#discussion_r326003734
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotasObserver.java
 ##
 @@ -65,8 +81,11 @@ public void postDeleteTable(
 }
 final Connection conn = ctx.getEnvironment().getConnection();
 Quotas quotas = QuotaUtil.getTableQuota(conn, tableName);
+Quotas quotasAtNamespace = QuotaUtil.getNamespaceQuota(conn, tableName.getNamespaceAsString());
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bsglz opened a new pull request #640: HBASE-23008 put batch with updated position into queue even empty

2019-09-19 Thread GitBox
bsglz opened a new pull request #640: HBASE-23008 put batch with updated 
position into queue even empty
URL: https://github.com/apache/hbase/pull/640
 
 
   In order to trigger cleanOldLogs




[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326027743
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
 ##
 @@ -1506,5 +1508,63 @@
*   The return value will be wrapped by a {@link CompletableFuture}.
*/
   CompletableFuture<Boolean> isSnapshotCleanupEnabled();
+
+  /**
+   * Gets group info for the given group name
+   * @param groupName the group name
+   * @return group info
+   */
+  CompletableFuture<RSGroupInfo> getRSGroupInfo(String groupName);
+
+  /**
+   * Move given set of servers to the specified target RegionServer group
+   * @param servers set of servers to move
+   * @param targetGroup the group to move servers to
+   */
+  CompletableFuture<Void> moveServers(Set<Address> servers, String targetGroup);
+
+  /**
+   * Creates a new RegionServer group with the given name
+   * @param groupName the name of the group
+   */
+  CompletableFuture<Void> addRSGroup(String groupName);
+
+  /**
+   * Removes RegionServer group associated with the given name
+   * @param groupName the group name
+   */
+  CompletableFuture<Void> removeRSGroup(String groupName);
+
+  /**
+   * Balance regions in the given RegionServer group
+   * @param groupName the group name
+   * @return boolean Whether balance ran or not
+   */
+  CompletableFuture<Boolean> balanceRSGroup(String groupName);
+
+  /**
+   * Lists current set of RegionServer groups
+   */
+  CompletableFuture<List<RSGroupInfo>> listRSGroups();
+
+  /**
+   * Retrieve the RSGroupInfo a server is affiliated to
+   * @param hostPort HostPort to get RSGroupInfo for
+   */
+  CompletableFuture<RSGroupInfo> getRSGroupOfServer(Address hostPort);
+
+  /**
+   * Remove decommissioned servers from group
+   * 1. Sometimes we may find the server aborted due to some hardware failure and we must offline
+   * the server for repairing. Or we need to move some servers to join other clusters.
+   * So we need to remove these servers from the group.
+   * 2. Dead/recovering/live servers will be disallowed.
+   * @param servers set of servers to remove
+   */
+  CompletableFuture<Void> removeServers(Set<Address> servers);
+
+  CompletableFuture<RSGroupInfo> getRSGroupInfoOfTable(TableName tableName);
 
 Review comment:
   Sorry, I don't understand 'default implementation'...
   I used the Admin and AsyncAdmin clients to implement all the methods in RSGroupAdminClient. This is convenient for using parameterized testing in UTs.
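To illustrate the CompletableFuture-wrapped API shape being discussed, here is a minimal sketch; `listRSGroups` below is a hypothetical stub, not the real AsyncAdmin method, which would issue an RPC and complete the future with the cluster's RegionServer groups.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncAdminSketch {
    // Hypothetical stub standing in for an async admin call; the real
    // AsyncAdmin would complete the future from an RPC response.
    static CompletableFuture<List<String>> listRSGroups() {
        return CompletableFuture.supplyAsync(() -> Arrays.asList("default"));
    }

    public static void main(String[] args) {
        // Callers can compose on the future (thenApply/thenAccept) or
        // block for the result with join()/get().
        List<String> groups = listRSGroups().join();
        System.out.println(groups.size());
    }
}
```

The same stubbed shape is what makes parameterized tests over Admin and AsyncAdmin convenient: both expose the identical method set, differing only in whether the result is wrapped in a future.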




[jira] [Resolved] (HBASE-23005) Table UI showed exception message when table is disabled

2019-09-19 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-23005.

Resolution: Fixed

> Table UI showed exception message when table is disabled
> 
>
> Key: HBASE-23005
> URL: https://issues.apache.org/jira/browse/HBASE-23005
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: 2019-09-11 09-49-29屏幕截图.png
>
>
> Compaction
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:299)
> org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1391)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> org.eclipse.jetty.server.Server.handle(Server.java:539)
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> java.lang.Thread.run(Thread.java:748)
>  Unknown



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23005) Table UI showed exception message when table is disabled

2019-09-19 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-23005:
---
Fix Version/s: 2.2.2
   2.1.7
   2.3.0
   3.0.0

> Table UI showed exception message when table is disabled
> 
>
> Key: HBASE-23005
> URL: https://issues.apache.org/jira/browse/HBASE-23005
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: 2019-09-11 09-49-29屏幕截图.png
>
>





[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933141#comment-16933141
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #117 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/117/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/117//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/117//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/117//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/117//console].


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public.
> We need to use the java api RSGroupAdminClient to manage rsgroups,
> so RSGroupAdminClient should be public.
>  





[jira] [Updated] (HBASE-23046) Remove compatibility case from truncate command

2019-09-19 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-23046:
--
Fix Version/s: 2.3.0
   3.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks [~busbey] for reviewing!

> Remove compatibility case from truncate command
> ---
>
> Key: HBASE-23046
> URL: https://issues.apache.org/jira/browse/HBASE-23046
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>
> The truncate and truncate_preserve commands in shell have a compatibility 
> block to handle the case when Master does not have truncate command.
> This was added in HBASE-8332 for HBase 0.99 so it is safe to remove it now.
> The current compatibility block catches DoNotRetryIOException, which can hide 
> different kinds of errors, and just drops and recreates the table.
> {code:ruby}
> begin
>   puts 'Truncating table...'
>   @admin.truncateTable(table_name, false)
> rescue => e
>   # Handle the compatibility case, where the truncate method doesn't exist on the Master
>   raise e unless e.respond_to?(:cause) && !e.cause.nil?
>   rootCause = e.cause
>   if rootCause.is_a?(org.apache.hadoop.hbase.DoNotRetryIOException)
> # Handle the compatibility case, where the truncate method doesn't exist on the Master
> puts 'Dropping table...'
> @admin.deleteTable(table_name)
> puts 'Creating table...'
> @admin.createTable(table_description)
>   else
> raise e
>   end
> end
> {code}





[GitHub] [hbase] Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and 
write QPS metrics at server level and table level
URL: https://github.com/apache/hbase/pull/615#discussion_r326087234
 
 

 ##
 File path: 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.metrics.Meter;
+import org.apache.hadoop.hbase.metrics.MetricRegistry;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation of {@link MetricsTableQueryMeter} to track query per second for one table in
+ * a RegionServer.
 
 Review comment:
   Doc is wrong? Not for one table?




[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326024232
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
 ##
 @@ -1506,5 +1508,63 @@
*   The return value will be wrapped by a {@link CompletableFuture}.
*/
   CompletableFuture<Boolean> isSnapshotCleanupEnabled();
+
+  /**
+   * Gets group info for the given group name
+   * @param groupName the group name
+   * @return group info
+   */
+  CompletableFuture<RSGroupInfo> getRSGroupInfo(String groupName);
+
+  /**
+   * Move given set of servers to the specified target RegionServer group
+   * @param servers set of servers to move
+   * @param targetGroup the group to move servers to
+   */
+  CompletableFuture<Void> moveServers(Set<Address> servers, String targetGroup);
+
+  /**
+   * Creates a new RegionServer group with the given name
+   * @param groupName the name of the group
+   */
+  CompletableFuture<Void> addRSGroup(String groupName);
+
+  /**
+   * Removes RegionServer group associated with the given name
+   * @param groupName the group name
+   */
+  CompletableFuture<Void> removeRSGroup(String groupName);
+
+  /**
+   * Balance regions in the given RegionServer group
+   * @param groupName the group name
+   * @return boolean Whether balance ran or not
+   */
+  CompletableFuture<Boolean> balanceRSGroup(String groupName);
 
 Review comment:
   They will be reassigned to group servers.




[GitHub] [hbase] Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326068170
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -68,15 +97,18 @@ public FileIOEngine(long capacity, String... filePaths) 
throws IOException {
   // The next setting length will throw exception,logging this message
   // is just used for the detail reason of exception,
   String msg = "Only " + StringUtils.byteDesc(totalSpace)
-  + " total space under " + filePath + ", not enough for requested "
-  + StringUtils.byteDesc(sizePerFile);
++ " total space under " + filePath + ", not enough for requested "
++ StringUtils.byteDesc(sizePerFile);
   LOG.warn(msg);
 }
-rafs[i].setLength(sizePerFile);
+File file = new File(filePath);
+if (file.length() != sizePerFile) {
+  rafs[i].setLength(sizePerFile);
+}
 
 Review comment:
   Please add a comment block `//` to clarify the purpose.




[GitHub] [hbase] Reidddddd commented on issue #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on issue #633: HBASE-22890 Verify the file integrity in 
persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#issuecomment-533041719
 
 
   Ping @anoopsjohn, when you have time. I think it's cleaner than before.




[jira] [Updated] (HBASE-23047) ChecksumUtil.validateChecksum logs an INFO message inside a "if(LOG.isTraceEnabled())" block.

2019-09-19 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-23047:
-
Attachment: HBASE-23047.master.002.patch

> ChecksumUtil.validateChecksum logs an INFO message inside a 
> "if(LOG.isTraceEnabled())" block.
> -
>
> Key: HBASE-23047
> URL: https://issues.apache.org/jira/browse/HBASE-23047
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0, 2.2.1, 2.1.6
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-23047.master.001.patch, 
> HBASE-23047.master.002.patch
>
>
> Noticed this while analysing another potential checksum issue. Despite doing 
> a check for TRACE level, we log an INFO message inside the if block:
> {noformat}
> if (LOG.isTraceEnabled()) {
>   LOG.info("dataLength=" + buf.capacity() + ", sizeWithHeader=" + 
> onDiskDataSizeWithHeader
>   + ", checksumType=" + ctype.getName() + ", file=" + pathName + ", 
> offset=" + offset
>   + ", headerSize=" + hdrSize + ", bytesPerChecksum=" + 
> bytesPerChecksum);
> }
> {noformat}
> Uploading a patch that logs a TRACE message and switches to parameterised 
> logging. Since there's no extra computation in the param passing, we 
> shouldn't need the extra if either.
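The mechanism the description relies on can be shown with a self-contained sketch. HBase itself uses SLF4J; the example below uses `java.util.logging` only because it needs no extra dependency, and its `Supplier`-based overload plays the same role as SLF4J's `{}` parameterised logging: the message is only built when the level is actually enabled, so no explicit `isLoggable()`/`isTraceEnabled()` guard is required.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogDemo {
    static final AtomicInteger BUILT = new AtomicInteger();

    static int logAtFinest() {
        Logger log = Logger.getLogger("demo");
        log.setLevel(Level.INFO);  // FINEST (the trace-equivalent level) is disabled

        // Lazy logging: the Supplier is only invoked if FINEST is enabled,
        // so the string concatenation costs nothing when tracing is off.
        log.log(Level.FINEST, () -> "dataLength=" + BUILT.incrementAndGet());
        return BUILT.get();  // stays 0: the message was never built
    }

    public static void main(String[] args) {
        System.out.println(logAtFinest());  // prints 0
    }
}
```

This is also why the original bug mattered: a guard checking one level while the call logs at another (INFO inside `isTraceEnabled()`) silently changes which messages appear.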





[jira] [Commented] (HBASE-21856) Consider Causal Replication Ordering

2019-09-19 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933196#comment-16933196
 ] 

Andrew Purtell commented on HBASE-21856:


That's correct. But if we are having the issues described, introducing a back 
pressure signal that stops the sender, or causes it to (exponentially) back 
off, will be fine. 

> Consider Causal Replication Ordering
> 
>
> Key: HBASE-21856
> URL: https://issues.apache.org/jira/browse/HBASE-21856
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Replication
>Reporter: Lars Hofhansl
>Priority: Major
>  Labels: Replication
>
> We've had various efforts to improve the ordering guarantees for HBase 
> replication, most notably Serial Replication.
> I think in many cases guaranteeing a Total Replication Order is not required, 
> but a simpler Causal Replication Order is sufficient.
> Specifically we would guarantee causal ordering for a single Rowkey. Any 
> changes to a Row - Puts, Deletes, etc - would be replicated in the exact 
> order in which they occurred in the source system.
> Unlike total ordering this can be accomplished with only local region server 
> control.
> I don't have a full design in mind, let's discuss here. It should be 
> sufficient to do the following:
> # RegionServers only adopt the replication queues from other RegionServers 
> for regions they (now) own. This requires log splitting for replication.
> # RegionServers ship all edits for queues adopted from other servers before 
> any of their "own" edits are shipped.
> It's probably a bit more involved, but should be much cheaper than the total 
> ordering provided by serial replication.
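The second rule above (ship all edits from adopted queues before any locally written edits) can be sketched as a simple ordering rule; this is purely illustrative, with hypothetical names, and is not a design for the actual replication machinery.

```java
import java.util.ArrayList;
import java.util.List;

public class AdoptedQueueOrder {
    // Illustrative only: a RegionServer drains queues adopted from other
    // servers first, then ships its own edits, so all changes to a given
    // row stay in the order they occurred on the source cluster.
    static List<String> shipOrder(List<String> adoptedEdits, List<String> ownEdits) {
        List<String> order = new ArrayList<>(adoptedEdits); // adopted queues drain first
        order.addAll(ownEdits);                             // then locally produced edits
        return order;
    }

    public static void main(String[] args) {
        // e.g. a Put recovered from a dead server's queue must ship before
        // this server's later Delete on the same row.
        System.out.println(shipOrder(List.of("row1:put@t1"), List.of("row1:delete@t2")));
    }
}
```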





[GitHub] [hbase] Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and 
write QPS metrics at server level and table level
URL: https://github.com/apache/hbase/pull/615#discussion_r326093560
 
 

 ##
 File path: 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.metrics.Meter;
+import org.apache.hadoop.hbase.metrics.MetricRegistry;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation of {@link MetricsTableQueryMeter} to track query per second for one table in
+ * a RegionServer.
+ */
+@InterfaceAudience.Private
+public class MetricsTableQueryMeterImpl implements MetricsTableQueryMeter {
+  private final Map<TableName, TableMeters> metersByTable = new ConcurrentHashMap<>();
+  private final MetricRegistry metricRegistry;
+
+  private final static String TABLE_READ_QUERY_PER_SECOND = "tableReadQueryPerSecond";
+  private final static String TABLE_WRITE_QUERY_PER_SECOND = "tableWriteQueryPerSecond";
+
+  public MetricsTableQueryMeterImpl(MetricRegistry metricRegistry) {
+this.metricRegistry = metricRegistry;
+  }
+
+  private static class TableMeters {
+final Meter tableReadQueryMeter;
+final Meter tableWriteQueryMeter;
+
+TableMeters(MetricRegistry metricRegistry, TableName tableName) {
+  this.tableReadQueryMeter = metricRegistry.meter(qualifyMetricsName(tableName,
+    TABLE_READ_QUERY_PER_SECOND));
+  this.tableWriteQueryMeter =
+    metricRegistry.meter(qualifyMetricsName(tableName, TABLE_WRITE_QUERY_PER_SECOND));
+}
+
+public void updateTableReadQueryMeter(long count) {
+  tableReadQueryMeter.mark(count);
+}
+
+public void updateTableReadQueryMeter() {
+  tableReadQueryMeter.mark();
+}
+
+public void updateTableWriteQueryMeter(long count) {
+  tableWriteQueryMeter.mark(count);
+}
+
+public void updateTableWriteQueryMeter() {
+  tableWriteQueryMeter.mark();
+}
+  }
+
+  private static String qualifyMetricsName(TableName tableName, String metric) {
+StringBuilder sb = new StringBuilder();
+sb.append("Namespace_").append(tableName.getNamespaceAsString());
+sb.append("_table_").append(tableName.getQualifierAsString());
+sb.append("_metric_").append(metric);
+return sb.toString();
+  }
+
+  private TableMeters getOrCreateTableMeter(String tableName) {
+final TableName tn = TableName.valueOf(tableName);
+return metersByTable.computeIfAbsent(tn, k -> new TableMeters(metricRegistry, tn));
 
 Review comment:
   The lambda is confusing: parameter k is never used. Should it be `k -> new TableMeters(metricRegistry, k)`?
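To illustrate the point of the comment above, here is a minimal sketch of `computeIfAbsent` using the lambda parameter rather than capturing an outer variable (`Meter` is a hypothetical stand-in for `TableMeters`); both forms behave identically here since `k` and the captured key are the same value, but the `k` form reads less confusingly.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfAbsentDemo {
    // Hypothetical stand-in for TableMeters: just remembers its key.
    static class Meter {
        final String name;
        Meter(String name) { this.name = name; }
    }

    static final Map<String, Meter> METERS = new ConcurrentHashMap<>();

    static Meter getOrCreate(String table) {
        // k is the key being looked up; using it (instead of capturing the
        // outer `table` variable) makes the dependency explicit.
        return METERS.computeIfAbsent(table, k -> new Meter(k));
    }

    public static void main(String[] args) {
        // computeIfAbsent constructs the meter once, then reuses the mapping.
        boolean same = getOrCreate("t1") == getOrCreate("t1");
        System.out.println(same);  // prints true
    }
}
```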




[GitHub] [hbase] Apache-HBase commented on issue #631: HBASE-23035 Retain region to the last RegionServer make the failover …

2019-09-19 Thread GitBox
Apache-HBase commented on issue #631: HBASE-23035 Retain region to the last 
RegionServer make the failover …
URL: https://github.com/apache/hbase/pull/631#issuecomment-532984000
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   3m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 51s |  master passed  |
   | :green_heart: |  compile  |   0m 58s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 28s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 59s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 36s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 25s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 29s |  the patch passed  |
   | :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 28s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 56s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m  5s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 213m 21s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 26s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 277m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/631 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 9a19e3107504 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-631/out/precommit/personality/provided.sh
 |
   | git revision | master / 20bfb43db6 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/2/testReport/
 |
   | Max. process+thread count | 4683 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326016251
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
 ##
 @@ -288,14 +302,17 @@ public BucketCache(String ioEngineName, long capacity, 
int blockSize, int[] buck
 this.ramCache = new ConcurrentHashMap();
 
 this.backingMap = new ConcurrentHashMap((int) 
blockNumCapacity);
-
-if (ioEngine.isPersistent() && persistencePath != null) {
+if (ioEngine.isPersistent()) {
   try {
-retrieveFromFile(bucketSizes);
+if (persistencePath != null) {
+  retrieveFromFile(bucketSizes);
+} else {
+  ((PersistentIOEngine) ioEngine).deleteCacheDataFile();
 
 Review comment:
   The data file can be large; since it is out of date and can't be restored, I'm 
fine with the deletion. Keeping it is also fine with me, as it is only a matter of 
overwriting it again.




[GitHub] [hbase] Apache-HBase commented on issue #361: HBase-22027: Split non-MR related parts of TokenUtil off into a Clien…

2019-09-19 Thread GitBox
Apache-HBase commented on issue #361: HBase-22027: Split non-MR related parts 
of TokenUtil off into a Clien…
URL: https://github.com/apache/hbase/pull/361#issuecomment-533005737
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 45s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 39s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   7m  1s |  master passed  |
   | :green_heart: |  compile  |   1m 37s |  master passed  |
   | :green_heart: |  checkstyle  |   2m 15s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 58s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 16s |  master passed  |
   | :blue_heart: |  spotbugs  |   5m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   6m 21s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   6m 27s |  the patch passed  |
   | :green_heart: |  compile  |   1m 35s |  the patch passed  |
   | :green_heart: |  javac  |   1m 35s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 17s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m 46s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  20m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   1m 12s |  the patch passed  |
   | :green_heart: |  findbugs  |   6m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   2m  2s |  hbase-client in the patch passed.  |
   | :green_heart: |  unit  | 157m 44s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   1m 44s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 240m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-361/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/361 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 102d9c34f74e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-361/out/precommit/personality/provided.sh
 |
   | git revision | master / a0e8723b73 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-361/1/testReport/
 |
   | Max. process+thread count | 5160 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-361/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] infraio closed pull request #607: HBASE-23005 Table UI showed exception message when table is disabled

2019-09-19 Thread GitBox
infraio closed pull request #607: HBASE-23005 Table UI showed exception message 
when table is disabled
URL: https://github.com/apache/hbase/pull/607
 
 
   




[GitHub] [hbase] Apache9 commented on issue #631: HBASE-23035 Retain region to the last RegionServer make the failover …

2019-09-19 Thread GitBox
Apache9 commented on issue #631: HBASE-23035 Retain region to the last 
RegionServer make the failover …
URL: https://github.com/apache/hbase/pull/631#issuecomment-533022760
 
 
   The problem with setting forceNewPlan = true in serverCrashed is that the 
flag is not persisted, so if the master crashes the flag will be lost.
   
   But anyway, this is only a nice-to-have, which means it will not affect 
correctness, so maybe it is fine; but we'd better add some comments about this.




[GitHub] [hbase] Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and 
write QPS metrics at server level and table level
URL: https://github.com/apache/hbase/pull/615#discussion_r326093100
 
 

 ##
 File path: 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.metrics.Meter;
+import org.apache.hadoop.hbase.metrics.MetricRegistry;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation of {@link MetricsTableQueryMeter} to track query per second 
for one table in
+ * a RegionServer.
+ */
+@InterfaceAudience.Private
+public class MetricsTableQueryMeterImpl implements MetricsTableQueryMeter {
+  private final Map metersByTable = new 
ConcurrentHashMap<>();
+  private final MetricRegistry metricRegistry;
+
+  private final static String TABLE_READ_QUERY_PER_SECOND = 
"tableReadQueryPerSecond";
+  private final static String TABLE_WRITE_QUERY_PER_SECOND = 
"tableWriteQueryPerSecond";
+
+  public MetricsTableQueryMeterImpl(MetricRegistry metricRegistry) {
+this.metricRegistry = metricRegistry;
+  }
+
+  private static class TableMeters {
+final Meter tableReadQueryMeter;
+final Meter tableWriteQueryMeter;
+
+TableMeters(MetricRegistry metricRegistry, TableName tableName) {
+  this.tableReadQueryMeter = 
metricRegistry.meter(qualifyMetricsName(tableName,
+TABLE_READ_QUERY_PER_SECOND));
+  this.tableWriteQueryMeter =
+metricRegistry.meter(qualifyMetricsName(tableName, 
TABLE_WRITE_QUERY_PER_SECOND));
+}
+
+public void updateTableReadQueryMeter(long count) {
+  tableReadQueryMeter.mark(count);
+}
+
+public void updateTableReadQueryMeter() {
+  tableReadQueryMeter.mark();
+}
+
+public void updateTableWriteQueryMeter(long count) {
+  tableWriteQueryMeter.mark(count);
+}
+
+public void updateTableWriteQueryMeter() {
+  tableWriteQueryMeter.mark();
+}
+  }
+
+  private static String qualifyMetricsName(TableName tableName, String metric) 
{
+StringBuilder sb = new StringBuilder();
+sb.append("Namespace_").append(tableName.getNamespaceAsString());
+sb.append("_table_").append(tableName.getQualifierAsString());
+sb.append("_metric_").append(metric);
+return sb.toString();
+  }
+
+  private TableMeters getOrCreateTableMeter(String tableName) {
+final TableName tn = TableName.valueOf(tableName);
 
 Review comment:
   The upper layer passes down a `TableName`, but it is converted to a `String`, 
and here it is converted back to a `TableName`.
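   A minimal sketch of the alternative this comment hints at: key the meter map by 
`TableName` directly, so no `TableName` → `String` → `TableName` round trip is 
needed. All types below are simplified, hypothetical stand-ins (this `TableName` 
record and `Meter` class are not the HBase classes), not the actual 
MetricsTableQueryMeterImpl:

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.LongAdder;

   // Sketch: per-table meters keyed by the TableName value itself.
   public class TableMeterSketch {
     // Hypothetical stand-in for org.apache.hadoop.hbase.TableName.
     record TableName(String namespace, String qualifier) {}

     // Hypothetical stand-in for a Meter: just counts marks.
     static final class Meter {
       final LongAdder count = new LongAdder();
       void mark(long n) { count.add(n); }
     }

     private final Map<TableName, Meter> metersByTable = new ConcurrentHashMap<>();

     // computeIfAbsent gives a thread-safe get-or-create without explicit locking,
     // and the record's value equality makes equal TableNames share one meter.
     Meter getOrCreate(TableName tn) {
       return metersByTable.computeIfAbsent(tn, k -> new Meter());
     }

     static String qualifyMetricsName(TableName tn, String metric) {
       return "Namespace_" + tn.namespace() + "_table_" + tn.qualifier()
           + "_metric_" + metric;
     }

     public static void main(String[] args) {
       TableMeterSketch s = new TableMeterSketch();
       TableName tn = new TableName("default", "t1");
       s.getOrCreate(tn).mark(3);
       // The same TableName value resolves to the same meter instance.
       if (s.getOrCreate(tn).count.sum() != 3) throw new AssertionError();
       System.out.println(qualifyMetricsName(tn, "tableReadQueryPerSecond"));
     }
   }
   ```

   With this shape, callers that already hold a `TableName` never pay for the 
string conversion; the string form is only built when the metric name itself is 
needed.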




[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326015709
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 ##
 @@ -2833,4 +2864,270 @@ private boolean shouldSubmitSCP(ServerName serverName) 
{
 }
 return true;
   }
+
+
+  @Override
+  public GetRSGroupInfoResponse getRSGroupInfo(RpcController controller,
+  GetRSGroupInfoRequest request) throws ServiceException {
+GetRSGroupInfoResponse.Builder builder = 
GetRSGroupInfoResponse.newBuilder();
+String groupName = request.getRSGroupName();
+LOG.info(
+master.getClientIdAuditPrefix() + " initiates rsgroup info retrieval, 
group=" + groupName);
+try {
+  if (master.getMasterCoprocessorHost() != null) {
+master.getMasterCoprocessorHost().preGetRSGroupInfo(groupName);
+  }
+  RSGroupInfo rsGroupInfo = 
master.getRSRSGroupInfoManager().getRSGroup(groupName);
+  if (rsGroupInfo != null) {
+
builder.setRSGroupInfo(ProtobufUtil.toProtoGroupInfo(fillTables(rsGroupInfo)));
+  }
+  if (master.getMasterCoprocessorHost() != null) {
+master.getMasterCoprocessorHost().postGetRSGroupInfo(groupName);
+  }
+} catch (IOException e) {
+  throw new ServiceException(e);
+}
+return builder.build();
+  }
+
+  @Override
+  public GetRSGroupInfoOfServerResponse getRSGroupInfoOfServer(RpcController 
controller,
+  GetRSGroupInfoOfServerRequest request) throws ServiceException {
+GetRSGroupInfoOfServerResponse.Builder builder = 
GetRSGroupInfoOfServerResponse.newBuilder();
+Address hp = Address.fromParts(request.getServer().getHostName(),
+request.getServer().getPort());
+LOG.info(master.getClientIdAuditPrefix() + " initiates rsgroup info 
retrieval, server=" + hp);
+try {
+  if (master.getMasterCoprocessorHost() != null) {
+master.getMasterCoprocessorHost().preGetRSGroupInfoOfServer(hp);
+  }
+  RSGroupInfo info = 
master.getRSRSGroupInfoManager().getRSGroupOfServer(hp);
+  if (info != null) {
+
builder.setRSGroupInfo(ProtobufUtil.toProtoGroupInfo(fillTables(info)));
+  }
+  if (master.getMasterCoprocessorHost() != null) {
+master.getMasterCoprocessorHost().postGetRSGroupInfoOfServer(hp);
+  }
+} catch (IOException e) {
+  throw new ServiceException(e);
+}
+return builder.build();
+  }
+
+  private RSGroupInfo fillTables(RSGroupInfo rsGroupInfo) throws IOException {
+return RSGroupUtil.fillTables(rsGroupInfo, 
master.getTableDescriptors().getAll().values());
+  }
+
+  @Override
+  public MoveServersResponse moveServers(RpcController controller, 
MoveServersRequest request)
+  throws ServiceException {
+Set hostPorts = Sets.newHashSet();
+MoveServersResponse.Builder builder = MoveServersResponse.newBuilder();
+for (HBaseProtos.ServerName el : request.getServersList()) {
+  hostPorts.add(Address.fromParts(el.getHostName(), el.getPort()));
+}
+LOG.info(master.getClientIdAuditPrefix() + " move servers " + hostPorts + 
" to rsgroup " +
+request.getTargetGroup());
+try {
+  if (master.getMasterCoprocessorHost() != null) {
+master.getMasterCoprocessorHost().preMoveServers(hostPorts, 
request.getTargetGroup());
+  }
+  master.getRSRSGroupInfoManager().moveServers(hostPorts, 
request.getTargetGroup());
+  if (master.getMasterCoprocessorHost() != null) {
+master.getMasterCoprocessorHost().postMoveServers(hostPorts, 
request.getTargetGroup());
+  }
+} catch (IOException e) {
+  throw new ServiceException(e);
+}
+return builder.build();
+  }
+
+  @Deprecated
+  @Override
+  public MoveTablesResponse moveTables(RpcController controller, 
MoveTablesRequest request)
+  throws ServiceException {
+return null;
+  }
+
+  @Override
+  public AddRSGroupResponse addRSGroup(RpcController controller, 
AddRSGroupRequest request)
+  throws ServiceException {
+AddRSGroupResponse.Builder builder = AddRSGroupResponse.newBuilder();
+LOG.info(master.getClientIdAuditPrefix() + " add rsgroup " + 
request.getRSGroupName());
+try {
+  if (master.getMasterCoprocessorHost() != null) {
+
master.getMasterCoprocessorHost().preAddRSGroup(request.getRSGroupName());
+  }
+  master.getRSRSGroupInfoManager().addRSGroup(new 
RSGroupInfo(request.getRSGroupName()));
+  if (master.getMasterCoprocessorHost() != null) {
+
master.getMasterCoprocessorHost().postAddRSGroup(request.getRSGroupName());
+  }
+} catch (IOException e) {
+  throw new ServiceException(e);
+}
+return builder.build();
+  }
+
+  @Override
+  public RemoveRSGroupResponse removeRSGroup(RpcController controller, 
RemoveRSGroupRequest request)
+  throws ServiceException {
+

[GitHub] [hbase] ZhaoBQ commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
ZhaoBQ commented on a change in pull request #633: HBASE-22890 Verify the file 
integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326043884
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -68,15 +97,18 @@ public FileIOEngine(long capacity, String... filePaths) 
throws IOException {
   // The next setting length will throw exception,logging this message
   // is just used for the detail reason of exception,
   String msg = "Only " + StringUtils.byteDesc(totalSpace)
-  + " total space under " + filePath + ", not enough for requested 
"
-  + StringUtils.byteDesc(sizePerFile);
++ " total space under " + filePath + ", not enough for requested "
++ StringUtils.byteDesc(sizePerFile);
   LOG.warn(msg);
 }
-rafs[i].setLength(sizePerFile);
+File file = new File(filePath);
+if (file.length() != sizePerFile) {
+  rafs[i].setLength(sizePerFile);
+}
 
 Review comment:
   The setLength() method will change the file's last-modified time. So if we 
don't make this change, the wrong time will be used to calculate the checksum.
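   A self-contained sketch of the guard being discussed, assuming `setLength()` 
updates the last-modified time on the underlying filesystem (the method and 
file names below are illustrative, not the actual FileIOEngine code):

   ```java
   import java.io.File;
   import java.io.IOException;
   import java.io.RandomAccessFile;

   // Illustrative guard: only call setLength when the on-disk size differs,
   // so an unchanged cache file keeps its last-modified time (which, per the
   // review thread, feeds into the integrity checksum).
   public class SetLengthGuard {
     static void ensureLength(File f, long sizePerFile) throws IOException {
       if (f.length() != sizePerFile) {        // skip the no-op case
         try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
           raf.setLength(sizePerFile);         // may update mtime
         }
       }
     }

     public static void main(String[] args) throws Exception {
       File f = File.createTempFile("bucketcache", ".data");
       f.deleteOnExit();
       ensureLength(f, 4096);                  // grows the file once
       long mtime = f.lastModified();
       ensureLength(f, 4096);                  // same size: file is untouched
       if (f.length() != 4096) throw new AssertionError();
       if (f.lastModified() != mtime) throw new AssertionError("mtime changed on no-op");
       System.out.println("ok");
     }
   }
   ```

   The second `ensureLength` call never opens the file, so the modification 
time recorded before it is preserved.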




[GitHub] [hbase] sunhelly commented on issue #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on issue #613: HBASE-22932 Add rs group management methods 
in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#issuecomment-533009766
 
 
   > What about the permission? I think we also need to modify AccessController?
   
   Currently the permission checks are in RSGroupAdminEndpoint. I think I should 
move the permission checks for the RSGroup methods to AccessController. 
   
   




[jira] [Updated] (HBASE-23046) Remove compatibility case from truncate command

2019-09-19 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-23046:
--
Release Note: Remove backward compatibility from `truncate` and 
`truncate_preserve` shell commands. This means that these commands from HBase 
Clients are not compatible with pre-0.99 HBase clusters.

> Remove compatibility case from truncate command
> ---
>
> Key: HBASE-23046
> URL: https://issues.apache.org/jira/browse/HBASE-23046
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
>
> The truncate and truncate_preserve commands in the shell have a compatibility 
> block to handle the case when the Master does not have the truncate command.
> This was added in HBASE-8332 for HBase 0.99, so it is safe to remove it now.
> The current compatibility block catches DoNotRetryIOException, which can hide 
> different kinds of errors, and just drops and recreates the table.
> {code:ruby}
> begin
>   puts 'Truncating table...'
>   @admin.truncateTable(table_name, false)
> rescue => e
>   # Handle the compatibility case, where the truncate method doesn't exists 
> on the Master
>   raise e unless e.respond_to?(:cause) && !e.cause.nil?
>   rootCause = e.cause
>   if rootCause.is_a?(org.apache.hadoop.hbase.DoNotRetryIOException)
> # Handle the compatibility case, where the truncate method doesn't exists 
> on the Master
> puts 'Dropping table...'
> @admin.deleteTable(table_name)
> puts 'Creating table...'
> @admin.createTable(table_description)
>   else
> raise e
>   end
> end
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23037) Make the split WAL related log more readable

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933146#comment-16933146
 ] 

Hudson commented on HBASE-23037:


Results for branch branch-2
[build #2276 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//console].


> Make the split WAL related log more readable
> 
>
> Key: HBASE-23037
> URL: https://issues.apache.org/jira/browse/HBASE-23037
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>






[jira] [Commented] (HBASE-23044) CatalogJanitor#cleanMergeQualifier may clean wrong parent regions

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933147#comment-16933147
 ] 

Hudson commented on HBASE-23044:


Results for branch branch-2
[build #2276 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2276//console].


> CatalogJanitor#cleanMergeQualifier may clean wrong parent regions
> -
>
> Key: HBASE-23044
> URL: https://issues.apache.org/jira/browse/HBASE-23044
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.6, 2.2.1, 2.1.6
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> 2019-09-17,19:42:40,539 INFO [PEWorker-1] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223589, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[81b6fc3c560a00692bc7c3cd266a626a], 
> [472500358997b0dc8f0002ec86593dcf] in 2.6470sec
> 2019-09-17,19:59:54,179 INFO [PEWorker-6] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223651, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[9c52f24e0a9cc9b4959c1ebdfea29d64], 
> [a623f298870df5581bcfae7f83311b33] in 1.0340sec
> The child is same region {color:red}647600d28633bb2fe06b40682bab0593{color} 
> but the parent regions are different.
> MergeTableRegionProcedure#prepareMergeRegion will try to cleanMergeQualifier 
> for the regions to merge.
> {code:java}
> for (RegionInfo ri: this.regionsToMerge) {
>   if (!catalogJanitor.cleanMergeQualifier(ri)) {
> String msg = "Skip merging " + 
> RegionInfo.getShortNameToLog(regionsToMerge) +
> ", because parent " + RegionInfo.getShortNameToLog(ri) + " has a 
> merge qualifier";
> LOG.warn(msg);
> throw new MergeRegionException(msg);
>   }
> {code}
> If regions A and B merge to C, and regions D and E merge to F, then when 
> merging C and F it will try to cleanMergeQualifier for both C and F. 
> catalogJanitor.cleanMergeQualifier for region C succeeds, but 
> catalogJanitor.cleanMergeQualifier for region F fails as there are still 
> references in region F.
> When merging C and F again, it will try to cleanMergeQualifier for C and F 
> again. But MetaTableAccessor.getMergeRegions will now get the wrong parents. It 
> uses a scan with a filter to get the result, but region C's merge qualifiers 
> were already deleted before, so the scan will return a wrong result, possibly 
> another region.
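The failure mode described above can be simulated without a cluster: a sorted map 
stands in for the meta table, and a "scan" that starts at the child's row and 
returns the first row carrying a merge-qualifier column slides onto another 
region's row once the child's qualifiers have been deleted. Everything below is 
an illustrative simulation with hypothetical row and qualifier names, not real 
HBase client code:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Pattern;

// Simulation of the bug: a start-row scan filtered on merge qualifiers can
// return the NEXT region's row after the intended region was already cleaned.
public class MergeQualifierScanSketch {
  static final Pattern MERGE_QUAL = Pattern.compile("merge.*");

  // Row key of the first row >= startRow that still has a merge qualifier,
  // mimicking withStartRow + setOneRowLimit + QualifierFilter on a sorted table.
  static String scanFirstWithMergeQualifier(
      TreeMap<String, Map<String, String>> meta, String startRow) {
    for (Map.Entry<String, Map<String, String>> e : meta.tailMap(startRow).entrySet()) {
      if (e.getValue().keySet().stream().anyMatch(q -> MERGE_QUAL.matcher(q).matches())) {
        return e.getKey();
      }
    }
    return null;
  }

  public static void main(String[] args) {
    TreeMap<String, Map<String, String>> meta = new TreeMap<>();
    meta.put("regionC", new TreeMap<>(Map.of("mergeA", "...", "mergeB", "...")));
    meta.put("regionF", new TreeMap<>(Map.of("mergeD", "...", "mergeE", "...")));

    // First pass: C's merge qualifiers are found, then cleaned.
    if (!"regionC".equals(scanFirstWithMergeQualifier(meta, "regionC"))) throw new AssertionError();
    meta.get("regionC").clear();   // cleanMergeQualifier succeeded for C

    // Second pass: the same start-row scan now lands on F's row — wrong parents.
    if (!"regionF".equals(scanFirstWithMergeQualifier(meta, "regionC"))) throw new AssertionError();
    System.out.println("ok");
  }
}
```

A point lookup on the exact row key, instead of a forward scan from the row key, 
would avoid sliding onto a neighboring row in this simulation.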
> {code:java}
> public boolean cleanMergeQualifier(final RegionInfo region) throws 
> IOException {
> // Get merge regions if it is a merged region and already has merge 
> qualifier
> List parents = 
> MetaTableAccessor.getMergeRegions(this.services.getConnection(),
> region.getRegionName());
> if (parents == null || parents.isEmpty()) {
>   // It doesn't have merge qualifier, no need to clean
>   return true;
> }
> return cleanMergeRegion(region, parents);
>   }
> public static List getMergeRegions(Connection connection, byte[] 
> regionName)
>   throws IOException {
> return getMergeRegions(getMergeRegionsRaw(connection, regionName));
>   }
> private static Cell [] getMergeRegionsRaw(Connection connection, byte [] 
> regionName)
>   throws IOException {
> Scan scan = new Scan().withStartRow(regionName).
> setOneRowLimit().
> readVersions(1).
> addFamily(HConstants.CATALOG_FAMILY).
> setFilter(new QualifierFilter(CompareOperator.EQUAL,
>   new RegexStringComparator(HConstants.MERGE_QUALIFIER_PREFIX_STR+ 
> ".*")));
> try (Table m = getMetaHTable(connection); ResultScanner scanner = 
> m.getScanner(scan)) {
>   // Should be only one result in this scanner if 

[GitHub] [hbase] infraio commented on issue #631: HBASE-23035 Retain region to the last RegionServer make the failover …

2019-09-19 Thread GitBox
infraio commented on issue #631: HBASE-23035 Retain region to the last 
RegionServer make the failover …
URL: https://github.com/apache/hbase/pull/631#issuecomment-533019507
 
 
   > IIRC there is a way to interrupt an existing TRSP from SCP; I think in that 
method we should also do the same thing?
   
   Set forceNewPlan = true when calling serverCrashed as well. The TRSP may be in 
any state when this is called.
   1. If in REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, forceNewPlan is true, 
so it will choose a round-robin server.
   2. If in REGION_STATE_TRANSITION_OPEN, regionClosedAbnormally will set the 
region location to null and it will go back to 
REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE.
   3. If in REGION_STATE_TRANSITION_CONFIRM_OPENED, it is the same as a failed 
region open.
   4. If in REGION_STATE_TRANSITION_CONFIRM_CLOSED, it will go back to 
REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE.
   5. If in REGION_STATE_TRANSITION_CLOSE, it will go back to 
REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE.
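   The per-state behavior enumerated above can be summarized as a small 
table-driven sketch. This is only a reading aid under stated assumptions: the 
enum names are abbreviated and the class is hypothetical, not the real 
TransitRegionStateProcedure logic.

   ```java
   import java.util.EnumMap;
   import java.util.Map;

   // Hypothetical summary: for each TRSP state, the state the procedure falls
   // back to after serverCrashed sets forceNewPlan = true.
   public class TrspCrashFallback {
     enum State { GET_ASSIGN_CANDIDATE, OPEN, CONFIRM_OPENED, CONFIRM_CLOSED, CLOSE }

     static final Map<State, State> FALLBACK = new EnumMap<>(State.class);
     static {
       FALLBACK.put(State.GET_ASSIGN_CANDIDATE, State.GET_ASSIGN_CANDIDATE); // round-robin server chosen
       FALLBACK.put(State.OPEN, State.GET_ASSIGN_CANDIDATE);           // location nulled, reassign
       FALLBACK.put(State.CONFIRM_OPENED, State.GET_ASSIGN_CANDIDATE); // treated as failed open
       FALLBACK.put(State.CONFIRM_CLOSED, State.GET_ASSIGN_CANDIDATE);
       FALLBACK.put(State.CLOSE, State.GET_ASSIGN_CANDIDATE);
     }

     public static void main(String[] args) {
       // Every state funnels back to candidate selection with forceNewPlan set.
       for (State s : State.values()) {
         if (FALLBACK.get(s) != State.GET_ASSIGN_CANDIDATE) throw new AssertionError(s.toString());
       }
       System.out.println("ok");
     }
   }
   ```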




[jira] [Created] (HBASE-23051) Remove unneeded Mockito.mock invocations

2019-09-19 Thread Peter Somogyi (Jira)
Peter Somogyi created HBASE-23051:
-

 Summary: Remove unneeded Mockito.mock invocations
 Key: HBASE-23051
 URL: https://issues.apache.org/jira/browse/HBASE-23051
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.3.0, 2.1.7, 2.2.2
Reporter: Peter Somogyi
Assignee: Peter Somogyi
 Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2


ErrorProne fails the build using the new Mockito version.
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure: 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/HBase_Nightly_master/component/hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/MockMasterServices.java:[147,16]
 error: [CheckReturnValue] Ignored return value of method that is annotated 
with @CheckReturnValue
[ERROR] (see https://errorprone.info/bugpattern/CheckReturnValue)
[ERROR]   Did you mean to remove this line?
[ERROR] 
/home/jenkins/jenkins-slave/workspace/HBase_Nightly_master/component/hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java:[331,16]
 error: [CheckReturnValue] Ignored return value of method that is annotated 
with @CheckReturnValue
[ERROR] (see https://errorprone.info/bugpattern/CheckReturnValue)
[ERROR]   Did you mean to remove this line?
[ERROR] -> [Help 1] {noformat}
[https://builds.apache.org/job/HBase%20Nightly/job/master/1456/artifact/output-jdk8-hadoop2/patch-compile-root.txt]





[GitHub] [hbase] petersomogyi opened a new pull request #641: HBASE-23051 Remove unneeded Mockito.mock invocations

2019-09-19 Thread GitBox
petersomogyi opened a new pull request #641: HBASE-23051 Remove unneeded 
Mockito.mock invocations
URL: https://github.com/apache/hbase/pull/641
 
 
   




[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326017999
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminClient.java
 ##
 @@ -38,34 +58,858 @@
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoRequest;
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoResponse;
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.ListRSGroupInfosRequest;
-import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersAndTablesRequest;
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveServersRequest;
-import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveTablesRequest;
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RSGroupAdminService;
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RemoveRSGroupRequest;
 import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RemoveServersRequest;
+import 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.SetRSGroupForTablesRequest;
 import org.apache.hadoop.hbase.protobuf.generated.RSGroupProtos;
+import org.apache.hadoop.hbase.quotas.QuotaFilter;
+import org.apache.hadoop.hbase.quotas.QuotaSettings;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshotView;
+import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
+import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
+import org.apache.hadoop.hbase.replication.ReplicationPeerDescription;
+import org.apache.hadoop.hbase.replication.SyncReplicationState;
+import org.apache.hadoop.hbase.security.access.GetUserPermissionsRequest;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.access.UserPermission;
+import org.apache.hadoop.hbase.snapshot.HBaseSnapshotException;
+import org.apache.hadoop.hbase.snapshot.RestoreSnapshotException;
+import org.apache.hadoop.hbase.snapshot.SnapshotCreationException;
+import org.apache.hadoop.hbase.snapshot.UnknownSnapshotException;
+import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.collect.Sets;
 import org.apache.yetus.audience.InterfaceAudience;
+import com.google.protobuf.ServiceException;
+
+/**
+ * Client used for managing region server group information.
+ */
+@InterfaceAudience.Private
 
 Review comment:
   This client should only be used in UTs now; I keep it for testing 
RSGroupAdminService (maybe some users still use this service?). I think that after 
adding the rsgroup management methods to the Admin client, NO users should use this 
client.




[GitHub] [hbase] petersomogyi merged pull request #638: HBASE-23046 Remove compatibility case from truncate command

2019-09-19 Thread GitBox
petersomogyi merged pull request #638: HBASE-23046 Remove compatibility case 
from truncate command
URL: https://github.com/apache/hbase/pull/638
 
 
   




[GitHub] [hbase] Apache-HBase commented on issue #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
Apache-HBase commented on issue #613: HBASE-22932 Add rs group management 
methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#issuecomment-533030525
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | :blue_heart: |  prototool  |   0m  0s |  prototool was not available.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 11 
new or modified test files.  |
   ||| _ HBASE-22514 Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 14s |  HBASE-22514 passed  |
   | :green_heart: |  compile  |   3m  3s |  HBASE-22514 passed  |
   | :green_heart: |  checkstyle  |   2m 58s |  HBASE-22514 passed  |
   | :green_heart: |  shadedjars  |   4m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   2m  0s |  HBASE-22514 passed  |
   | :blue_heart: |  spotbugs  |   1m 30s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |  10m 45s |  HBASE-22514 passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   4m 55s |  the patch passed  |
   | :green_heart: |  compile  |   2m 58s |  the patch passed  |
   | :green_heart: |  cc  |   2m 58s |  the patch passed  |
   | :green_heart: |  javac  |   2m 58s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   0m 35s |  hbase-client: The patch 
generated 6 new + 162 unchanged - 0 fixed = 168 total (was 162)  |
   | :broken_heart: |  checkstyle  |   1m 20s |  hbase-server: The patch 
generated 22 new + 87 unchanged - 2 fixed = 109 total (was 89)  |
   | :broken_heart: |  whitespace  |   0m  0s |  The patch has 1 line(s) that 
end in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 29s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  hbaseprotoc  |   2m 56s |  the patch passed  |
   | :broken_heart: |  javadoc  |   0m 35s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | :green_heart: |  findbugs  |  11m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   0m 42s |  hbase-protocol-shaded in the patch 
passed.  |
   | :green_heart: |  unit  |   0m 27s |  hbase-protocol in the patch passed.  |
   | :green_heart: |  unit  |   1m 50s |  hbase-client in the patch passed.  |
   | :green_heart: |  unit  | 164m 36s |  hbase-server in the patch passed.  |
   | :green_heart: |  unit  |   3m 31s |  hbase-thrift in the patch passed.  |
   | :green_heart: |  asflicense  |   2m 36s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 258m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-613/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/613 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux eb438c193b1b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-613/out/precommit/personality/provided.sh
 |
   | git revision | HBASE-22514 / b91ef7c9dd |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-613/6/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-613/6/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-613/6/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-613/6/artifact/out/diff-javadoc-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-613/6/testReport/
 |
   | Max. process+thread count | 4503 (vs. ulimit of 1) |
   | modules | C: hbase-protocol-shaded 

[jira] [Commented] (HBASE-23047) ChecksumUtil.validateChecksum logs an INFO message inside a "if(LOG.isTraceEnabled())" block.

2019-09-19 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933193#comment-16933193
 ] 

Wellington Chevreuil commented on HBASE-23047:
--

[~psomogyi], I can do it, but it seems compliant with our checkstyle rules. 
Should we review checkstyle rules accordingly, then?

> ChecksumUtil.validateChecksum logs an INFO message inside a 
> "if(LOG.isTraceEnabled())" block.
> -
>
> Key: HBASE-23047
> URL: https://issues.apache.org/jira/browse/HBASE-23047
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0, 2.2.1, 2.1.6
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-23047.master.001.patch
>
>
> Noticed this while analysing another potential checksum issue. Despite doing 
> a check for TRACE level, we log an INFO message inside the if block:
> {noformat}
> if (LOG.isTraceEnabled()) {
>   LOG.info("dataLength=" + buf.capacity() + ", sizeWithHeader=" + 
> onDiskDataSizeWithHeader
>   + ", checksumType=" + ctype.getName() + ", file=" + pathName + ", 
> offset=" + offset
>   + ", headerSize=" + hdrSize + ", bytesPerChecksum=" + 
> bytesPerChecksum);
> }
> {noformat}
> Uploading a patch that logs a TRACE message and switches to parameterised 
> logging. Since there's no extra computation in the param passing, we 
> shouldn't need the extra if either.
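A minimal sketch of why the parameterised form makes the guard redundant, using a hypothetical stand-in logger rather than the real slf4j wiring: the arguments are passed, but no message string is built unless the level check passes.

```java
// Hypothetical stand-in for an slf4j-style logger (not the real API),
// showing that with parameterized logging the message string is only
// built after the level check, so a surrounding isTraceEnabled() guard
// is redundant when the arguments themselves are cheap to pass.
final class LazyTraceLogger {
    private final boolean traceEnabled;
    int messagesBuilt = 0; // how many times a message string was formatted

    LazyTraceLogger(boolean traceEnabled) { this.traceEnabled = traceEnabled; }

    void trace(String pattern, Object... args) {
        if (!traceEnabled) {
            return; // arguments were passed, but nothing was concatenated
        }
        String msg = pattern;
        for (Object a : args) {
            msg = msg.replaceFirst("\\{}", String.valueOf(a));
        }
        messagesBuilt++;
        System.out.println(msg);
    }
}

public class ParameterizedLoggingDemo {
    public static void main(String[] args) {
        LazyTraceLogger off = new LazyTraceLogger(false);
        off.trace("dataLength={}, offset={}", 1024, 0);
        System.out.println(off.messagesBuilt); // 0 -- nothing formatted

        LazyTraceLogger on = new LazyTraceLogger(true);
        on.trace("dataLength={}, offset={}", 1024, 0); // prints the message
        System.out.println(on.messagesBuilt); // 1
    }
}
```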





[GitHub] [hbase] Apache-HBase commented on issue #598: HBASE-22142 Space quota: If table inside namespace having space quota is dropped, data size usage is still considered for the drop table.

2019-09-19 Thread GitBox
Apache-HBase commented on issue #598: HBASE-22142 Space quota: If table inside 
namespace having space quota is dropped, data size usage is still considered 
for the drop table.
URL: https://github.com/apache/hbase/pull/598#issuecomment-533054499
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   7m  2s |  master passed  |
   | :green_heart: |  compile  |   1m  8s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 40s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 47s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 41s |  master passed  |
   | :blue_heart: |  spotbugs  |   5m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   5m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 39s |  the patch passed  |
   | :green_heart: |  compile  |   1m  9s |  the patch passed  |
   | :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 45s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m 42s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  19m 21s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 39s |  the patch passed  |
   | :green_heart: |  findbugs  |   5m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 163m 35s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 33s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 235m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-598/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/598 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux d00e8bba00b8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-598/out/precommit/personality/provided.sh
 |
   | git revision | master / a0e8723b73 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-598/2/testReport/
 |
   | Max. process+thread count | 4834 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-598/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Updated] (HBASE-23051) Remove unneeded Mockito.mock invocations

2019-09-19 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-23051:
--
Status: Patch Available  (was: Open)

> Remove unneeded Mockito.mock invocations
> 
>
> Key: HBASE-23051
> URL: https://issues.apache.org/jira/browse/HBASE-23051
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> ErrorProne fails the build using the new Mockito version.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project hbase-server: Compilation failure: 
> Compilation failure: 
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/HBase_Nightly_master/component/hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/MockMasterServices.java:[147,16]
>  error: [CheckReturnValue] Ignored return value of method that is annotated 
> with @CheckReturnValue
> [ERROR] (see https://errorprone.info/bugpattern/CheckReturnValue)
> [ERROR]   Did you mean to remove this line?
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/HBase_Nightly_master/component/hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java:[331,16]
>  error: [CheckReturnValue] Ignored return value of method that is annotated 
> with @CheckReturnValue
> [ERROR] (see https://errorprone.info/bugpattern/CheckReturnValue)
> [ERROR]   Did you mean to remove this line?
> [ERROR] -> [Help 1] {noformat}
> [https://builds.apache.org/job/HBase%20Nightly/job/master/1456/artifact/output-jdk8-hadoop2/patch-compile-root.txt]





[jira] [Commented] (HBASE-22932) Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933106#comment-16933106
 ] 

Xiaolin Ha commented on HBASE-22932:


In master.proto, I'll add methods as follows (the deprecated methods from 
RSGroupAdmin.proto are also shown, commented out, for clarity):

//rpc GetRSGroupInfo(GetRSGroupInfoRequest)
//returns (GetRSGroupInfoResponse);

//rpc GetRSGroupInfoOfServer(GetRSGroupInfoOfServerRequest)
//returns (GetRSGroupInfoOfServerResponse);

rpc MoveServers(MoveServersRequest)
 returns (MoveServersResponse);

rpc AddRSGroup(AddRSGroupRequest)
 returns (AddRSGroupResponse);

rpc RemoveRSGroup(RemoveRSGroupRequest)
 returns (RemoveRSGroupResponse);

rpc BalanceRSGroup(BalanceRSGroupRequest)
 returns (BalanceRSGroupResponse);

rpc ListRSGroupInfos(ListRSGroupInfosRequest)
 returns (ListRSGroupInfosResponse);

//rpc RemoveServers(RemoveServersRequest)
//returns (RemoveServersResponse);

//rpc GetRSGroupInfoOfTable(GetRSGroupInfoOfTableRequest)
//returns (GetRSGroupInfoOfTableResponse);

//rpc SetRSGroupForTables(SetRSGroupForTablesRequest)
//returns (SetRSGroupForTablesResponse);

 

and in Admin client, keeps methods added in [GitHub Pull Request 
#613|https://github.com/apache/hbase/pull/613]

[~zghaobac] what do you think of it?

 

> Add rs group management methods in Admin and AsyncAdmin
> ---
>
> Key: HBASE-22932
> URL: https://issues.apache.org/jira/browse/HBASE-22932
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Xiaolin Ha
>Priority: Major
>






[jira] [Assigned] (HBASE-23032) Upgrade to Curator 4.2.0

2019-09-19 Thread Balazs Meszaros (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros reassigned HBASE-23032:
---

Assignee: Balazs Meszaros

> Upgrade to Curator 4.2.0
> 
>
> Key: HBASE-23032
> URL: https://issues.apache.org/jira/browse/HBASE-23032
> Project: HBase
>  Issue Type: Improvement
>Reporter: Tamas Penzes
>Assignee: Balazs Meszaros
>Priority: Major
>
> Curator 4.0 is quite old, it's time to jump to 4.2.0.
> We should do it in hbase-connectors and hbase-filesystem too.
> [http://curator.apache.org/zk-compatibility.html]





[jira] [Comment Edited] (HBASE-21856) Consider Causal Replication Ordering

2019-09-19 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933196#comment-16933196
 ] 

Andrew Purtell edited comment on HBASE-21856 at 9/19/19 9:23 AM:
-

The sender is going to be providing batches in sequence to each consistent 
target. These batches may arrive out of order but the window will be small by 
definition because RPCs are in flight when the ordering can be indeterminate. 

But if we are having issues as described, introducing a back pressure signal or 
retransmit indication that stops the sender, causes it to (exponentially) back 
off, or causes it to resend a specific batch could be explored - but only if 
actually needed
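The exponential back-off idea above can be sketched as follows. This is an assumption-heavy illustration, not existing HBase replication code: it assumes per-sender state, a fixed base delay, and a cap, with the delay doubling per rejected batch and resetting once a batch is accepted.

```java
// Sketch of exponential back-off a replication sender could apply when
// the sink signals back pressure (hypothetical class, not HBase code).
public class ReplicationBackoff {
    private final long baseDelayMs;
    private final long maxDelayMs;
    private int failures = 0;

    public ReplicationBackoff(long baseDelayMs, long maxDelayMs) {
        this.baseDelayMs = baseDelayMs;
        this.maxDelayMs = maxDelayMs;
    }

    /** Called when the sink rejects a batch; returns how long to wait. */
    public long onRejected() {
        // Cap the shift so the delay cannot overflow a long.
        long delay = baseDelayMs << Math.min(failures, 16);
        failures++;
        return Math.min(delay, maxDelayMs);
    }

    /** Called when a batch is accepted; resets the back-off. */
    public void onAccepted() {
        failures = 0;
    }

    public static void main(String[] args) {
        ReplicationBackoff b = new ReplicationBackoff(100, 60_000);
        System.out.println(b.onRejected()); // 100
        System.out.println(b.onRejected()); // 200
        System.out.println(b.onRejected()); // 400
        b.onAccepted();
        System.out.println(b.onRejected()); // 100 again after reset
    }
}
```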


was (Author: apurtell):
That's correct. But if we are having issues as described if we introduce a back 
pressure signal that stops the sender, or causes it to (exponentially) back 
off, this will be fine. 

> Consider Causal Replication Ordering
> 
>
> Key: HBASE-21856
> URL: https://issues.apache.org/jira/browse/HBASE-21856
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Replication
>Reporter: Lars Hofhansl
>Priority: Major
>  Labels: Replication
>
> We've had various efforts to improve the ordering guarantees for HBase 
> replication, most notably Serial Replication.
> I think in many cases guaranteeing a Total Replication Order is not required, 
> but a simpler Causal Replication Order is sufficient.
> Specifically we would guarantee causal ordering for a single Rowkey. Any 
> changes to a Row - Puts, Deletes, etc - would be replicated in the exact 
> order in which they occurred in the source system.
> Unlike total ordering this can be accomplished with only local region server 
> control.
> I don't have a full design in mind, let's discuss here. It should be 
> sufficient to to the following:
> # RegionServers only adopt the replication queues from other RegionServers 
> for regions they (now) own. This requires log splitting for replication.
> # RegionServers ship all edits for queues adopted from other servers before 
> any of their "own" edits are shipped.
> It's probably a bit more involved, but should be much cheaper that the total 
> ordering provided by serial replication.





[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325989193
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store 
ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = 
"RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+"hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = 
"hbase.regions.recovery.store.count";
+
+  private static final int DEFAULT_REGIONS_RECOVERY_INTERVAL = 1200 * 1000; // 
Default 20 min ?
+  private static final int DEFAULT_STORE_REF_COUNT_THRESHOLD = 256;
+
+  private static final String ERROR_REOPEN_REIONS_MSG =
+"Error reopening regions with high storeRefCount. ";
+
+  private final HMaster hMaster;
+  private final int storeRefCountThreshold;
+
+  private static final PerClientRandomNonceGenerator NONCE_GENERATOR =
+PerClientRandomNonceGenerator.get();
+
+  /**
+   * Construct RegionsRecoveryChore with provided params
+   *
+   * @param stopper When {@link Stoppable#isStopped()} is true, this chore 
will cancel and cleanup
+   * @param configuration The configuration params to be used
+   * @param hMaster HMaster instance to initiate RegionTableRegions
+   */
+  RegionsRecoveryChore(final Stoppable stopper, final Configuration 
configuration,
+  final HMaster hMaster) {
+
+super(REGIONS_RECOVERY_CHORE_NAME, stopper, 
configuration.getInt(REGIONS_RECOVERY_INTERVAL,
+  DEFAULT_REGIONS_RECOVERY_INTERVAL));
+this.hMaster = hMaster;
+this.storeRefCountThreshold = 
configuration.getInt(STORE_REF_COUNT_THRESHOLD,
+  DEFAULT_STORE_REF_COUNT_THRESHOLD);
+
+  }
+
+  @Override
+  protected void chore() {
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Starting up Regions Recovery by reopening regions based on 
storeRefCount...");
+}
+try {
+  final ClusterMetrics clusterMetrics = hMaster.getClusterMetrics();
+  final Map<ServerName, ServerMetrics> serverMetricsMap =
+clusterMetrics.getLiveServerMetrics();
+  final Map<TableName, List<byte[]>> tableToReopenRegionsMap =
+getTableToRegionsByRefCount(serverMetricsMap);
+  if (MapUtils.isNotEmpty(tableToReopenRegionsMap)) {
+tableToReopenRegionsMap.forEach((tableName, regionNames) -> {
+  try {
+LOG.warn("Reopening regions due to high refCount. TableName: {} , 
noOfRegions: {}",
+  tableName, regionNames.size());
+hMaster.reopenRegions(tableName, regionNames, 
NONCE_GENERATOR.getNonceGroup(),
+  NONCE_GENERATOR.newNonce());
+  } catch (IOException e) {
+LOG.error("{} tableName: {}, regionNames: {}", 
ERROR_REOPEN_REIONS_MSG,
+  tableName, regionNames, e);
+  }
+});
+  }
+} catch (Exception e) {
+  LOG.error("Error while reopening regions based on 

[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325805374
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store 
ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = 
"RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+"hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = 
"hbase.regions.recovery.store.count";
+
+  private static final int DEFAULT_REGIONS_RECOVERY_INTERVAL = 1200 * 1000; // 
Default 20 min ?
+  private static final int DEFAULT_STORE_REF_COUNT_THRESHOLD = 256;
+
+  private static final String ERROR_REOPEN_REIONS_MSG =
+"Error reopening regions with high storeRefCount. ";
+
+  private final HMaster hMaster;
+  private final int storeRefCountThreshold;
+
+  private static final PerClientRandomNonceGenerator NONCE_GENERATOR =
+PerClientRandomNonceGenerator.get();
+
+  /**
+   * Construct RegionsRecoveryChore with provided params
+   *
+   * @param stopper When {@link Stoppable#isStopped()} is true, this chore 
will cancel and cleanup
+   * @param configuration The configuration params to be used
+   * @param hMaster HMaster instance to initiate RegionTableRegions
+   */
+  RegionsRecoveryChore(final Stoppable stopper, final Configuration 
configuration,
+  final HMaster hMaster) {
+
+super(REGIONS_RECOVERY_CHORE_NAME, stopper, 
configuration.getInt(REGIONS_RECOVERY_INTERVAL,
+  DEFAULT_REGIONS_RECOVERY_INTERVAL));
+this.hMaster = hMaster;
+this.storeRefCountThreshold = 
configuration.getInt(STORE_REF_COUNT_THRESHOLD,
+  DEFAULT_STORE_REF_COUNT_THRESHOLD);
+
+  }
+
+  @Override
+  protected void chore() {
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Starting up Regions Recovery by reopening regions based on 
storeRefCount...");
+}
+try {
+  final ClusterMetrics clusterMetrics = hMaster.getClusterMetrics();
+  final Map<ServerName, ServerMetrics> serverMetricsMap =
+clusterMetrics.getLiveServerMetrics();
+  final Map<TableName, List<byte[]>> tableToReopenRegionsMap =
+getTableToRegionsByRefCount(serverMetricsMap);
+  if (MapUtils.isNotEmpty(tableToReopenRegionsMap)) {
+tableToReopenRegionsMap.forEach((tableName, regionNames) -> {
+  try {
+LOG.warn("Reopening regions due to high refCount. TableName: {} , 
noOfRegions: {}",
+  tableName, regionNames.size());
+hMaster.reopenRegions(tableName, regionNames, 
NONCE_GENERATOR.getNonceGroup(),
+  NONCE_GENERATOR.newNonce());
+  } catch (IOException e) {
+LOG.error("{} tableName: {}, regionNames: {}", 
ERROR_REOPEN_REIONS_MSG,
+  tableName, regionNames, e);
+  }
+});
+  }
+} catch (Exception e) {
+  LOG.error("Error while reopening regions based on 

[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325804022
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store 
ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = 
"RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+"hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = 
"hbase.regions.recovery.store.count";
+
+  private static final int DEFAULT_REGIONS_RECOVERY_INTERVAL = 1200 * 1000; // 
Default 20 min ?
+  private static final int DEFAULT_STORE_REF_COUNT_THRESHOLD = 256;
+
+  private static final String ERROR_REOPEN_REIONS_MSG =
+"Error reopening regions with high storeRefCount. ";
+
+  private final HMaster hMaster;
+  private final int storeRefCountThreshold;
 
 Review comment:
   This var name too


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325769075
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetricsBuilder.java
 ##
 @@ -358,6 +360,9 @@ public String toString() {
   for (RegionMetrics r : getRegionMetrics().values()) {
 storeCount += r.getStoreCount();
 storeFileCount += r.getStoreFileCount();
+int currentStoreRefCount = r.getStoreRefCount();
+storeRefCount += currentStoreRefCount;
+maxStoreRefCount = Math.max(maxStoreRefCount, currentStoreRefCount);
 
 Review comment:
   RegionMetrics should give maxStoreFileRefCount also and that should be 
considered here.
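
The suggestion above — aggregate a maximum store-file ref count alongside the summed count — amounts to carrying a running `Math.max` while iterating the region metrics. A minimal sketch, with `RegionStat` and its fields as hypothetical stand-ins for the real `RegionMetrics` API:

```java
import java.util.Arrays;
import java.util.List;

public class RefCountAggregator {
  // Illustrative stand-in for a region's metrics; field names are assumptions.
  static final class RegionStat {
    final int storeRefCount;        // total refs across the region's stores
    final int maxStoreFileRefCount; // highest ref count on any single store file
    RegionStat(int storeRefCount, int maxStoreFileRefCount) {
      this.storeRefCount = storeRefCount;
      this.maxStoreFileRefCount = maxStoreFileRefCount;
    }
  }

  /** Returns {summed storeRefCount, max per-store-file ref count} over all regions. */
  static int[] aggregate(List<RegionStat> regions) {
    int storeRefCount = 0;
    int maxStoreFileRefCount = 0;
    for (RegionStat r : regions) {
      storeRefCount += r.storeRefCount;
      // max, not sum: one hot store file is what should trip recovery
      maxStoreFileRefCount = Math.max(maxStoreFileRefCount, r.maxStoreFileRefCount);
    }
    return new int[] { storeRefCount, maxStoreFileRefCount };
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(
        aggregate(Arrays.asList(new RegionStat(3, 2), new RegionStat(5, 7)))));
    // prints [8, 7]
  }
}
```

Summing the per-store-file maxima would inflate the signal; keeping the max preserves "worst single file" semantics, which is what the threshold compares against.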




[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325768134
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetrics.java
 ##
 @@ -154,4 +154,9 @@ default String getNameAsString() {
* @return the reference count for the stores of this region
*/
   int getStoreRefCount();
+
+  /**
+   * @return the max reference count for any store among all stores of this 
region
 
 Review comment:
   And name to be corrected accordingly in all related places.




[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325804666
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store 
ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = 
"RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+"hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = 
"hbase.regions.recovery.store.count";
+
+  private static final int DEFAULT_REGIONS_RECOVERY_INTERVAL = 1200 * 1000; // 
Default 20 min ?
+  private static final int DEFAULT_STORE_REF_COUNT_THRESHOLD = 256;
+
+  private static final String ERROR_REOPEN_REIONS_MSG =
+"Error reopening regions with high storeRefCount. ";
+
+  private final HMaster hMaster;
+  private final int storeRefCountThreshold;
+
+  private static final PerClientRandomNonceGenerator NONCE_GENERATOR =
+PerClientRandomNonceGenerator.get();
+
+  /**
+   * Construct RegionsRecoveryChore with provided params
+   *
+   * @param stopper When {@link Stoppable#isStopped()} is true, this chore 
will cancel and cleanup
+   * @param configuration The configuration params to be used
+   * @param hMaster HMaster instance to initiate RegionTableRegions
+   */
+  RegionsRecoveryChore(final Stoppable stopper, final Configuration 
configuration,
+  final HMaster hMaster) {
+
+super(REGIONS_RECOVERY_CHORE_NAME, stopper, 
configuration.getInt(REGIONS_RECOVERY_INTERVAL,
+  DEFAULT_REGIONS_RECOVERY_INTERVAL));
+this.hMaster = hMaster;
+this.storeRefCountThreshold = 
configuration.getInt(STORE_REF_COUNT_THRESHOLD,
+  DEFAULT_STORE_REF_COUNT_THRESHOLD);
+
+  }
+
+  @Override
+  protected void chore() {
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Starting up Regions Recovery by reopening regions based on 
storeRefCount...");
 
 Review comment:
   Starting up Regions Recovery chore for reopening ...




[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325803817
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store 
ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = 
"RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+"hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = 
"hbase.regions.recovery.store.count";
 
 Review comment:
   This is store file ref count for recovery. Pls change config name such a way 
to indicate this.




[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325767893
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetrics.java
 ##
 @@ -154,4 +154,9 @@ default String getNameAsString() {
* @return the reference count for the stores of this region
*/
   int getStoreRefCount();
+
+  /**
+   * @return the max reference count for any store among all stores of this 
region
 
 Review comment:
   Is this the max ref count on a Store or on a StoreFile? The latter right?  
Pls change the javadoc and name of method accordingly. That will be very clear 
name then.




[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325805172
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store 
ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = 
"RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+"hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = 
"hbase.regions.recovery.store.count";
+
+  private static final int DEFAULT_REGIONS_RECOVERY_INTERVAL = 1200 * 1000; // 
Default 20 min ?
+  private static final int DEFAULT_STORE_REF_COUNT_THRESHOLD = 256;
+
+  private static final String ERROR_REOPEN_REIONS_MSG =
+"Error reopening regions with high storeRefCount. ";
+
+  private final HMaster hMaster;
+  private final int storeRefCountThreshold;
+
+  private static final PerClientRandomNonceGenerator NONCE_GENERATOR =
+PerClientRandomNonceGenerator.get();
+
+  /**
+   * Construct RegionsRecoveryChore with provided params
+   *
+   * @param stopper When {@link Stoppable#isStopped()} is true, this chore 
will cancel and cleanup
+   * @param configuration The configuration params to be used
+   * @param hMaster HMaster instance to initiate RegionTableRegions
+   */
+  RegionsRecoveryChore(final Stoppable stopper, final Configuration 
configuration,
+  final HMaster hMaster) {
+
+super(REGIONS_RECOVERY_CHORE_NAME, stopper, 
configuration.getInt(REGIONS_RECOVERY_INTERVAL,
+  DEFAULT_REGIONS_RECOVERY_INTERVAL));
+this.hMaster = hMaster;
+this.storeRefCountThreshold = 
configuration.getInt(STORE_REF_COUNT_THRESHOLD,
+  DEFAULT_STORE_REF_COUNT_THRESHOLD);
+
+  }
+
+  @Override
+  protected void chore() {
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Starting up Regions Recovery by reopening regions based on 
storeRefCount...");
+}
+try {
+  final ClusterMetrics clusterMetrics = hMaster.getClusterMetrics();
+  final Map<ServerName, ServerMetrics> serverMetricsMap =
+clusterMetrics.getLiveServerMetrics();
+  final Map<TableName, List<byte[]>> tableToReopenRegionsMap =
+getTableToRegionsByRefCount(serverMetricsMap);
+  if (MapUtils.isNotEmpty(tableToReopenRegionsMap)) {
+tableToReopenRegionsMap.forEach((tableName, regionNames) -> {
+  try {
+LOG.warn("Reopening regions due to high refCount. TableName: {} , 
noOfRegions: {}",
+  tableName, regionNames.size());
+hMaster.reopenRegions(tableName, regionNames, 
NONCE_GENERATOR.getNonceGroup(),
+  NONCE_GENERATOR.newNonce());
+  } catch (IOException e) {
+LOG.error("{} tableName: {}, regionNames: {}", 
ERROR_REOPEN_REIONS_MSG,
+  tableName, regionNames, e);
+  }
+});
+  }
+} catch (Exception e) {
+  LOG.error("Error while reopening regions based on 

[GitHub] [hbase] anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r325769340
 
 

 ##
 File path: hbase-common/src/main/resources/hbase-default.xml
 ##
 @@ -1901,4 +1901,24 @@ possible configurations would overwhelm and obscure the 
important.
   automatically deleted until it is manually deleted
 
   
+  
+hbase.master.regions.recovery.interval
+120
+
+  Regions Recovery Chore interval in milliseconds.
+  This chore keeps running at this interval to
+  find all regions with high store ref count and
+  reopens them.
+
+  
+  
+hbase.regions.recovery.store.count
+256
+
+  Store Ref Count threshold value considered
 
 Review comment:
   Pls correct this desc accordingly




[jira] [Comment Edited] (HBASE-22932) Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933106#comment-16933106
 ] 

Xiaolin Ha edited comment on HBASE-22932 at 9/19/19 6:51 AM:
-

In master.proto, I'll add methods as follows (deprecated methods from
RSGroupAdmin.proto are also shown, commented out, for clarity):

//rpc GetRSGroupInfo(GetRSGroupInfoRequest)
 //returns (GetRSGroupInfoResponse);

//rpc GetRSGroupInfoOfServer(GetRSGroupInfoOfServerRequest)
 //returns (GetRSGroupInfoOfServerResponse);

rpc MoveServers(MoveServersRequest)
 returns (MoveServersResponse);

rpc AddRSGroup(AddRSGroupRequest)
 returns (AddRSGroupResponse);

rpc RemoveRSGroup(RemoveRSGroupRequest)
 returns (RemoveRSGroupResponse);

rpc BalanceRSGroup(BalanceRSGroupRequest)
 returns (BalanceRSGroupResponse);

rpc ListRSGroupInfos(ListRSGroupInfosRequest)
 returns (ListRSGroupInfosResponse);

//rpc RemoveServers(RemoveServersRequest)
 //returns (RemoveServersResponse);

//rpc GetRSGroupInfoOfTable(GetRSGroupInfoOfTableRequest)
 //returns (GetRSGroupInfoOfTableResponse);

//rpc SetRSGroupForTables(SetRSGroupForTablesRequest)
 //returns (SetRSGroupForTablesResponse);

 

and in Admin client, keep methods added in [GitHub Pull Request 
#613|https://github.com/apache/hbase/pull/613]

[~zghaobac] what do you think of it?

 


was (Author: xiaolin ha):
In master.proto, I'll add methods as follows(also shows deprecated methods from 
RSGroupAdmin.proto for more clearly ):

//rpc GetRSGroupInfo(GetRSGroupInfoRequest)
//returns (GetRSGroupInfoResponse);

//rpc GetRSGroupInfoOfServer(GetRSGroupInfoOfServerRequest)
//returns (GetRSGroupInfoOfServerResponse);

rpc MoveServers(MoveServersRequest)
 returns (MoveServersResponse);

rpc AddRSGroup(AddRSGroupRequest)
 returns (AddRSGroupResponse);

rpc RemoveRSGroup(RemoveRSGroupRequest)
 returns (RemoveRSGroupResponse);

rpc BalanceRSGroup(BalanceRSGroupRequest)
 returns (BalanceRSGroupResponse);

rpc ListRSGroupInfos(ListRSGroupInfosRequest)
 returns (ListRSGroupInfosResponse);

//rpc RemoveServers(RemoveServersRequest)
//returns (RemoveServersResponse);

//rpc GetRSGroupInfoOfTable(GetRSGroupInfoOfTableRequest)
//returns (GetRSGroupInfoOfTableResponse);

//rpc SetRSGroupForTables(SetRSGroupForTablesRequest)
//returns (SetRSGroupForTablesResponse);

 

and in Admin client, keeps methods added in [GitHub Pull Request 
#613|https://github.com/apache/hbase/pull/613]

[~zghaobac] what do you think of it?

 

> Add rs group management methods in Admin and AsyncAdmin
> ---
>
> Key: HBASE-22932
> URL: https://issues.apache.org/jira/browse/HBASE-22932
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Xiaolin Ha
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326013377
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
 ##
 @@ -1021,41 +1037,61 @@ void doDrain(final List<RAMQueueEntry> entries) throws 
InterruptedException {
 
   private void persistToFile() throws IOException {
 assert !cacheEnabled;
-FileOutputStream fos = null;
-ObjectOutputStream oos = null;
-try {
+try (ObjectOutputStream oos = new ObjectOutputStream(
+  new FileOutputStream(persistencePath, false))){
   if (!ioEngine.isPersistent()) {
 throw new IOException("Attempt to persist non-persistent cache 
mappings!");
   }
-  fos = new FileOutputStream(persistencePath, false);
-  oos = new ObjectOutputStream(fos);
+  byte[] checksum = ((PersistentIOEngine) 
ioEngine).calculateChecksum(algorithm);
+  if (checksum != null) {
+oos.write(ProtobufUtil.PB_MAGIC);
+oos.writeInt(checksum.length);
+oos.write(checksum);
+  }
   oos.writeLong(cacheCapacity);
   oos.writeUTF(ioEngine.getClass().getName());
   oos.writeUTF(backingMap.getClass().getName());
   oos.writeObject(deserialiserMap);
   oos.writeObject(backingMap);
-} finally {
-  if (oos != null) oos.close();
-  if (fos != null) fos.close();
 }
   }
 
   @SuppressWarnings("unchecked")
-  private void retrieveFromFile(int[] bucketSizes) throws IOException, 
BucketAllocatorException,
+  private void retrieveFromFile(int[] bucketSizes) throws IOException,
   ClassNotFoundException {
 File persistenceFile = new File(persistencePath);
 if (!persistenceFile.exists()) {
   return;
 }
 assert !cacheEnabled;
-FileInputStream fis = null;
 ObjectInputStream ois = null;
 try {
   if (!ioEngine.isPersistent())
 throw new IOException(
 "Attempt to restore non-persistent cache mappings!");
-  fis = new FileInputStream(persistencePath);
-  ois = new ObjectInputStream(fis);
+  ois = new ObjectInputStream(new FileInputStream(persistencePath));
+  int pblen = ProtobufUtil.lengthOfPBMagic();
+  byte[] pbuf = new byte[pblen];
+  int read = ois.read(pbuf);
+  if (read != pblen) {
+throw new IOException("Incorrect number of bytes read while checking 
for protobuf magic "
+  + "number. Requested=" + pblen + ", Received= " + read + ", File=" + 
persistencePath);
+  }
+  if (Bytes.equals(ProtobufUtil.PB_MAGIC, pbuf)) {
+int length = ois.readInt();
+byte[] persistentChecksum = new byte[length];
+int readLen = ois.read(persistentChecksum);
+if (readLen != length || !((PersistentIOEngine) 
ioEngine).verifyFileIntegrity(
+persistentChecksum, algorithm)) {
+  LOG.warn("Can't restore from file because of verification failed.");
 
 Review comment:
   Separate the if branch? `readLen != length` doesn't mean `Can't restore from 
file because of verification failed`
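
The separation suggested above can be sketched as two distinct checks with distinct messages: a short read means the checksum bytes themselves are incomplete, while a mismatch means the checksum was read fully but disagrees with the cache file. Method and parameter names below are illustrative, not the actual BucketCache code:

```java
public class RestoreCheck {
  /** Returns null when restore may proceed, otherwise a distinct failure reason. */
  static String restoreFailure(int readLen, int expectedLen, boolean checksumMatches) {
    if (readLen != expectedLen) {
      // Truncated persistence file: a full checksum was never read.
      return "incomplete checksum: expected " + expectedLen + " bytes, read " + readLen;
    }
    if (!checksumMatches) {
      // Checksum read fully, but the cache file contents no longer match it.
      return "file integrity verification failed";
    }
    return null;
  }
}
```

Logging the two cases separately makes it obvious from the warning alone whether the persistence file was truncated or the cached data changed underneath it.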




[GitHub] [hbase] Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326013479
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -68,15 +97,18 @@ public FileIOEngine(long capacity, String... filePaths) 
throws IOException {
   // The next setting length will throw exception,logging this message
   // is just used for the detail reason of exception,
   String msg = "Only " + StringUtils.byteDesc(totalSpace)
-  + " total space under " + filePath + ", not enough for requested 
"
-  + StringUtils.byteDesc(sizePerFile);
++ " total space under " + filePath + ", not enough for requested "
++ StringUtils.byteDesc(sizePerFile);
   LOG.warn(msg);
 }
-rafs[i].setLength(sizePerFile);
+File file = new File(filePath);
+if (file.length() != sizePerFile) {
+  rafs[i].setLength(sizePerFile);
+}
 
 Review comment:
   why this change?
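
One plausible reading of the guarded `setLength()` above: when the engine is restarted over an existing, correctly sized cache file, skipping the call avoids re-truncating a file whose persisted contents should survive. A minimal sketch of that guard, with an illustrative helper rather than `FileIOEngine` itself:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class Preallocate {
  /** Sizes the file only when needed; returns true when setLength() was called. */
  static boolean ensureLength(File file, long sizePerFile) throws IOException {
    if (file.length() == sizePerFile) {
      return false; // already the right size: leave existing cache data alone
    }
    try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
      raf.setLength(sizePerFile); // truncate or extend to the requested size
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("bucketcache", ".data");
    f.deleteOnExit();
    System.out.println(ensureLength(f, 4096)); // true: freshly created, resized
    System.out.println(ensureLength(f, 4096)); // false: already sized, skipped
  }
}
```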




[GitHub] [hbase] Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326013665
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -85,6 +117,17 @@ public FileIOEngine(long capacity, String... filePaths) 
throws IOException {
 }
   }
 
+  /**
+   * Delete the cache file and reinitialize the FileIOEngine
+   * @throws IOException the IOException
+   */
+  private void reinit() throws IOException {
+LOG.info("Delete the cache data file and Reinitialize the FileIOEngine.");
 
 Review comment:
   `Reinitialize` lower case for `R`?




[GitHub] [hbase] Apache-HBase commented on issue #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Apache-HBase commented on issue #633: HBASE-22890 Verify the file integrity in 
persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#issuecomment-533033801
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 
new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | :green_heart: |  mvninstall  |   8m  0s |  branch-1 passed  |
   | :green_heart: |  compile  |   1m 51s |  branch-1 passed with JDK 
v1.8.0_222  |
   | :broken_heart: |  compile  |   0m 22s |  hbase-server in branch-1 failed 
with JDK v1.7.0_232.  |
   | :green_heart: |  checkstyle  |   1m 34s |  branch-1 passed  |
   | :green_heart: |  shadedjars  |   3m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 41s |  branch-1 passed with JDK 
v1.8.0_222  |
   | :green_heart: |  javadoc  |   0m 45s |  branch-1 passed with JDK 
v1.7.0_232  |
   | :blue_heart: |  spotbugs  |   3m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   3m  1s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   2m  8s |  the patch passed  |
   | :green_heart: |  compile  |   1m 44s |  the patch passed with JDK 
v1.8.0_222  |
   | :green_heart: |  javac  |   1m 44s |  the patch passed  |
   | :broken_heart: |  compile  |   0m 22s |  hbase-server in the patch failed 
with JDK v1.7.0_232.  |
   | :broken_heart: |  javac  |   0m 22s |  hbase-server in the patch failed 
with JDK v1.7.0_232.  |
   | :broken_heart: |  checkstyle  |   1m 29s |  hbase-server: The patch 
generated 1 new + 48 unchanged - 4 fixed = 49 total (was 52)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   2m 41s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |   5m 11s |  Patch does not cause any 
errors with Hadoop 2.8.5 or 2.9.2.  |
   | :green_heart: |  javadoc  |   0m 30s |  the patch passed with JDK 
v1.8.0_222  |
   | :green_heart: |  javadoc  |   0m 40s |  the patch passed with JDK 
v1.7.0_232  |
   | :green_heart: |  findbugs  |   2m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 118m 26s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 34s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 160m 17s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.replication.TestReplicationChangingPeerRegionservers |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/633 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 914bf8976aeb 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-633/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 5c4d8e0 |
   | Default Java | 1.7.0_232 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_222 
/usr/lib/jvm/zulu-7-amd64:1.7.0_232 |
   | compile | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/artifact/out/branch-compile-hbase-server-jdk1.7.0_232.txt
 |
   | compile | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/artifact/out/patch-compile-hbase-server-jdk1.7.0_232.txt
 |
   | javac | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/artifact/out/patch-compile-hbase-server-jdk1.7.0_232.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/5/testReport/
 |
   | Max. process+thread count | 4236 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 

[jira] [Commented] (HBASE-23041) Should not show split parent regions in HBCK report's unknown server part

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933202#comment-16933202
 ] 

Hudson commented on HBASE-23041:


Results for branch master
[build #1457 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1457/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1457//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1457//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1457//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1457//console].


> Should not show split parent regions in HBCK report's unknown server part
> -
>
> Key: HBASE-23041
> URL: https://issues.apache.org/jira/browse/HBASE-23041
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326015028
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
 ##
 @@ -3864,23 +3886,130 @@ private void getProcedureResult(long procId, 
CompletableFuture future, int
   @Override
   public CompletableFuture<Boolean> snapshotCleanupSwitch(final boolean on,
       final boolean sync) {
-return this.newMasterCaller()
-.action((controller, stub) -> this
-.call(controller, stub,
-RequestConverter.buildSetSnapshotCleanupRequest(on, sync),
-MasterService.Interface::switchSnapshotCleanup,
-SetSnapshotCleanupResponse::getPrevSnapshotCleanup))
-.call();
+return this.newMasterCaller().action((controller, stub) -> this
+.call(controller, stub, 
RequestConverter.buildSetSnapshotCleanupRequest(on, sync),
+MasterService.Interface::switchSnapshotCleanup,
+SetSnapshotCleanupResponse::getPrevSnapshotCleanup)).call();
   }
 
   @Override
   public CompletableFuture<Boolean> isSnapshotCleanupEnabled() {
-return this.newMasterCaller()
+return this.newMasterCaller().action((controller, stub) -> this
+.call(controller, stub, 
RequestConverter.buildIsSnapshotCleanupEnabledRequest(),
+MasterService.Interface::isSnapshotCleanupEnabled,
+IsSnapshotCleanupEnabledResponse::getEnabled)).call();
+  }
+
+  @Override
+  public CompletableFuture<RSGroupInfo> getRSGroupInfo(String groupName) {
+    return this.<RSGroupInfo> newMasterCaller()
+        .action(((controller, stub) -> this.
+            <GetRSGroupInfoRequest, GetRSGroupInfoResponse, RSGroupInfo> call(controller, stub,
+                RequestConverter.buildGetRSGroupInfoRequest(groupName),
+                (s, c, req, done) -> s.getRSGroupInfo(c, req, done),
+                resp -> resp.hasRSGroupInfo() ?
+                    ProtobufUtil.toGroupInfo(resp.getRSGroupInfo()) : null)))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<Void> moveServers(Set<Address> servers, String targetGroup) {
+    return this.<Void> newMasterCaller()
+        .action((controller, stub) -> this.
+            <MoveServersRequest, MoveServersResponse, Void> call(controller, stub,
+                RequestConverter.buildMoveServersRequest(servers, targetGroup),
+                (s, c, req, done) -> s.moveServers(c, req, done), resp -> null))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<Void> addRSGroup(String groupName) {
+    return this.<Void> newMasterCaller()
+        .action(((controller, stub) -> this.
+            <AddRSGroupRequest, AddRSGroupResponse, Void> call(controller, stub,
+                AddRSGroupRequest.newBuilder().setRSGroupName(groupName).build(),
+                (s, c, req, done) -> s.addRSGroup(c, req, done), resp -> null)))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<Void> removeRSGroup(String groupName) {
+    return this.<Void> newMasterCaller()
+        .action((controller, stub) -> this.
+            <RemoveRSGroupRequest, RemoveRSGroupResponse, Void> call(controller, stub,
+                RemoveRSGroupRequest.newBuilder().setRSGroupName(groupName).build(),
+                (s, c, req, done) -> s.removeRSGroup(c, req, done), resp -> null))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<Boolean> balanceRSGroup(String groupName) {
+    return this.<Boolean> newMasterCaller()
+        .action((controller, stub) -> this.
+            <BalanceRSGroupRequest, BalanceRSGroupResponse, Boolean> call(controller, stub,
+                BalanceRSGroupRequest.newBuilder().setRSGroupName(groupName).build(),
+                (s, c, req, done) -> s.balanceRSGroup(c, req, done),
+                resp -> resp.getBalanceRan()))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<List<RSGroupInfo>> listRSGroups() {
+    return this.<List<RSGroupInfo>> newMasterCaller()
         .action((controller, stub) -> this
-            .call(controller, stub,
-                RequestConverter.buildIsSnapshotCleanupEnabledRequest(),
-                MasterService.Interface::isSnapshotCleanupEnabled,
-                IsSnapshotCleanupEnabledResponse::getEnabled))
+            .<ListRSGroupInfosRequest, ListRSGroupInfosResponse, List<RSGroupInfo>> call(controller,
+                stub, ListRSGroupInfosRequest.getDefaultInstance(),
+                (s, c, req, done) -> s.listRSGroupInfos(c, req, done),
+                resp -> resp.getRSGroupInfoList().stream()
+                    .map(r -> ProtobufUtil.toGroupInfo(r))
+                    .collect(Collectors.toList()))))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<RSGroupInfo> getRSGroupOfServer(Address hostPort) {
+    return this.<RSGroupInfo> newMasterCaller()
+        .action((controller, stub) -> this.
+            <GetRSGroupInfoOfServerRequest, GetRSGroupInfoOfServerResponse, RSGroupInfo> call(
+                controller, stub,
+                RequestConverter.buildGetRSGroupInfoOfServerRequest(hostPort),
+                (s, c, req, done) -> s.getRSGroupInfoOfServer(c, req, done),
+                resp -> resp.hasRSGroupInfo() ?
+                    ProtobufUtil.toGroupInfo(resp.getRSGroupInfo()) : null))
+        .call();
+  }
+
+  @Override
+  public CompletableFuture<Void> removeServers(Set<Address> servers) {
+ 

[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326014823
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
 ##
 @@ -1506,5 +1508,63 @@
*   The return value will be wrapped by a {@link CompletableFuture}.
*/
   CompletableFuture<Boolean> isSnapshotCleanupEnabled();
+  
+  /**
+   * Gets group info for the given group name
+   * @param groupName the group name
+   * @return group info
+   */
+  CompletableFuture<RSGroupInfo> getRSGroupInfo(String groupName);
+
+  /**
+   * Move given set of servers to the specified target RegionServer group
+   * @param servers set of servers to move
+   * @param targetGroup the group to move servers to
+   */
+  CompletableFuture<Void> moveServers(Set<Address> servers, String targetGroup);
+
+  /**
+   * Creates a new RegionServer group with the given name
+   * @param groupName the name of the group
+   */
+  CompletableFuture<Void> addRSGroup(String groupName);
+
+  /**
+   * Removes RegionServer group associated with the given name
+   * @param groupName the group name
+   */
+  CompletableFuture<Void> removeRSGroup(String groupName);
+
+  /**
+   * Balance regions in the given RegionServer group
+   * @param groupName the group name
+   * @return boolean Whether balance ran or not
+   */
+  CompletableFuture<Boolean> balanceRSGroup(String groupName);
+
+  /**
+   * Lists current set of RegionServer groups
+   */
+  CompletableFuture<List<RSGroupInfo>> listRSGroups();
+
+  /**
+   * Retrieve the RSGroupInfo a server is affiliated to
+   * @param hostPort HostPort to get RSGroupInfo for
+   */
+  CompletableFuture<RSGroupInfo> getRSGroupOfServer(Address hostPort);
+
+  /**
+   * Remove decommissioned servers from group
+   * 1. Sometimes we may find the server aborted due to some hardware failure 
and we must offline
 
 Review comment:
   OK.
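   All of the new admin methods above return a `CompletableFuture`, so callers compose on the result instead of blocking. A minimal, self-contained sketch of that calling pattern (the `listRSGroups` body below is a hypothetical stand-in that completes with a fixed group list, not the real Master RPC):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncAdminPattern {
  // Hypothetical stand-in for AsyncAdmin#listRSGroups(): the real method issues
  // an RPC to the HMaster; here we simply complete with a fixed group list.
  static CompletableFuture<List<String>> listRSGroups() {
    return CompletableFuture.supplyAsync(() -> Arrays.asList("default", "backup"));
  }

  public static void main(String[] args) {
    // Compose on the future rather than blocking an RPC thread.
    String joined = listRSGroups()
        .thenApply(groups -> String.join(",", groups))
        .join();
    System.out.println(joined);  // prints "default,backup"
  }
}
```

   The same `thenApply`/`join` shape applies to the `CompletableFuture<RSGroupInfo>` and `CompletableFuture<Void>` variants.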


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group management methods in Admin and AsyncAdmin

2019-09-19 Thread GitBox
sunhelly commented on a change in pull request #613: HBASE-22932 Add rs group 
management methods in Admin and AsyncAdmin
URL: https://github.com/apache/hbase/pull/613#discussion_r326014531
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
 ##
 @@ -1506,5 +1508,63 @@
*   The return value will be wrapped by a {@link CompletableFuture}.
*/
   CompletableFuture<Boolean> isSnapshotCleanupEnabled();
+  
+  /**
+   * Gets group info for the given group name
+   * @param groupName the group name
+   * @return group info
+   */
+  CompletableFuture<RSGroupInfo> getRSGroupInfo(String groupName);
 
 Review comment:
   Yes, I'll change it.




[GitHub] [hbase] Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326014760
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -267,6 +310,76 @@ void refreshFileConnection(int accessFileNum, IOException 
ioe) throws IOExceptio
 }
   }
 
+  /**
+   * Delete bucketcache files
+   */
+  @Override
+  public void deleteCacheDataFile() {
+if (filePaths == null) {
+  return;
+}
+for (String file : filePaths) {
+  new File(file).delete();
+}
+  }
+
+  @Override
+  public byte[] calculateChecksum(String algorithm) throws IOException{
+if (filePaths == null) {
+  return null;
+}
+StringBuilder sb = new StringBuilder();
+for (String filePath : filePaths){
+  File file = new File(filePath);
+  if (file.exists()){
+sb.append(filePath);
+sb.append(getFileSize(filePath));
+sb.append(file.lastModified());
+  } else {
+throw new IOException("Cache file: " + filePath + " does not exist.");
 
 Review comment:
   In fact, after FileIOE is inited, the file path must exist anyway. I know it's 
the code check complaining, so can we just swallow this IOE in this 
method instead of throwing it to the upper caller?




[jira] [Commented] (HBASE-22975) Add read and write QPS metrics at server level and table level

2019-09-19 Thread zbq.dean (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933176#comment-16933176
 ] 

zbq.dean commented on HBASE-22975:
--

HBASE-15518 does not contain this feature. It just contains some table-level 
metrics; it doesn't have QPS metrics. [~javaman_chen]

> Add read and write QPS metrics at server level and table level
> --
>
> Key: HBASE-22975
> URL: https://issues.apache.org/jira/browse/HBASE-22975
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.2.0, 1.4.10, master
>Reporter: zbq.dean
>Priority: Major
> Attachments: readQPS.png, writeQPS.png
>
>
> Use HBase's existing class DropwizardMeter to collect read and write QPS. The 
> collection location is the same as for the metrics readRequestsCount and 
> writeRequestsCount.





[GitHub] [hbase] meszibalu opened a new pull request #642: HBASE-23032 Upgrade to Curator 4.2.0

2019-09-19 Thread GitBox
meszibalu opened a new pull request #642: HBASE-23032 Upgrade to Curator 4.2.0
URL: https://github.com/apache/hbase/pull/642
 
 
   




[GitHub] [hbase] Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and 
write QPS metrics at server level and table level
URL: https://github.com/apache/hbase/pull/615#discussion_r32604
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerTableMetrics.java
 ##
 @@ -28,9 +29,12 @@
 public class RegionServerTableMetrics {
 
   private final MetricsTableLatencies latencies;
+  private MetricsTableQueryMeter queryMeter;
 
 Review comment:
   Can be final?




[GitHub] [hbase] Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and 
write QPS metrics at server level and table level
URL: https://github.com/apache/hbase/pull/615#discussion_r326096145
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
 ##
 @@ -6201,7 +6205,10 @@ String prepareBulkLoad(byte[] family, String srcPath, 
boolean copyFile)
 boolean isSuccessful = false;
 try {
   this.writeRequestsCount.increment();
-
+  if (rsServices != null && rsServices.getMetrics() != null) {
+rsServices.getMetrics().updateWriteQueryMeter(this.htableDescriptor.
+  getTableName());
+  }
 
 Review comment:
   The write meter update should be placed after the write execution; it is different 
from `writeRequestsCount` because execution time matters.
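   The suggested ordering can be sketched with plain counters (illustrative only; `writeQueryMeter` here is a hypothetical stand-in for the Dropwizard meter, not HBase's actual metrics API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class WriteMeterOrdering {
  // Stand-ins for the real metrics: a request counter bumped on arrival and a
  // QPS meter that, per the review, only ticks after the write has executed.
  static final AtomicLong writeRequestsCount = new AtomicLong();
  static final AtomicLong writeQueryMeter = new AtomicLong();

  static boolean bulkLoad(Runnable writeExecution) {
    writeRequestsCount.incrementAndGet();  // counted as soon as the request arrives
    try {
      writeExecution.run();                // the actual write work
      writeQueryMeter.incrementAndGet();   // meter updated only after a successful write
      return true;
    } catch (RuntimeException e) {
      return false;                        // failed writes never tick the meter
    }
  }

  public static void main(String[] args) {
    bulkLoad(() -> {});                                 // success: both move
    bulkLoad(() -> { throw new RuntimeException(); });  // failure: only the counter moves
    System.out.println(writeRequestsCount.get() + " " + writeQueryMeter.get());  // prints "2 1"
  }
}
```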




[GitHub] [hbase] Apache-HBase commented on issue #639: HBASE-23049 TableDescriptors#getAll should return the tables ordering…

2019-09-19 Thread GitBox
Apache-HBase commented on issue #639: HBASE-23049 TableDescriptors#getAll 
should return the tables ordering…
URL: https://github.com/apache/hbase/pull/639#issuecomment-533080344
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 33s |  master passed  |
   | :green_heart: |  compile  |   0m 59s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 20s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 38s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 38s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m  4s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m  7s |  the patch passed  |
   | :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 19s |  hbase-server: The patch 
generated 0 new + 58 unchanged - 1 fixed = 58 total (was 59)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 163m 46s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 221m 37s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-639/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/639 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 9939dd3a0520 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-639/out/precommit/personality/provided.sh
 |
   | git revision | master / a0e8723b73 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-639/2/testReport/
 |
   | Max. process+thread count | 4464 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-639/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] anoopsjohn commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
anoopsjohn commented on a change in pull request #633: HBASE-22890 Verify the 
file integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326120074
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -68,15 +97,18 @@ public FileIOEngine(long capacity, String... filePaths) 
throws IOException {
   // The next setting length will throw exception,logging this message
   // is just used for the detail reason of exception,
   String msg = "Only " + StringUtils.byteDesc(totalSpace)
-  + " total space under " + filePath + ", not enough for requested 
"
-  + StringUtils.byteDesc(sizePerFile);
++ " total space under " + filePath + ", not enough for requested "
++ StringUtils.byteDesc(sizePerFile);
   LOG.warn(msg);
 }
-rafs[i].setLength(sizePerFile);
+File file = new File(filePath);
+if (file.length() != sizePerFile) {
+  rafs[i].setLength(sizePerFile);
+}
 
 Review comment:
   Ya, some fat comments here would be nice.  Got why u have this check now.  I 
can think of a case though.  Say we have a file-based cache with one file whose 
size was 10 GB, and a restart of the RS is happening. The cache is persisted 
too.  Before the restart, the size is increased to 20 GB. There is no truncate 
and ideally the cache gets rebuilt; the only change is that after the restart the 
cache capacity is increased.  But now, as per the code, the length is changed here, and 
so is the last modified time, which will fail the verify phase.  Is it something 
to be considered?  Don't want much complex handling for this.  Might not 
be a common case for a persisted cache.  The worst that happens is we are not able to 
retrieve the persisted cache.  But welcoming thinking/suggestions.
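   The verify phase under discussion hinges on exactly these three inputs: path, length, and last-modified time. A minimal, self-contained sketch of that checksum idea — illustrative only, not the patch's actual `FileIOEngine` code:

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CacheFileChecksum {
  // Digest over each cache file's path, size and mtime; any resize or rewrite
  // (e.g. growing the file from 10 GB to 20 GB) changes the result, so the
  // verify phase after restart would reject the persisted cache.
  static byte[] calculateChecksum(String algorithm, String... filePaths)
      throws IOException, NoSuchAlgorithmException {
    StringBuilder sb = new StringBuilder();
    for (String filePath : filePaths) {
      File file = new File(filePath);
      if (!file.exists()) {
        throw new IOException("Cache file: " + filePath + " does not exist.");
      }
      sb.append(filePath).append(file.length()).append(file.lastModified());
    }
    return MessageDigest.getInstance(algorithm)
        .digest(sb.toString().getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws Exception {
    File tmp = File.createTempFile("bucketcache", ".data");
    tmp.deleteOnExit();
    // Unchanged file metadata yields a stable checksum; an MD5 digest is 16 bytes.
    byte[] first = calculateChecksum("MD5", tmp.getPath());
    byte[] second = calculateChecksum("MD5", tmp.getPath());
    System.out.println(java.util.Arrays.equals(first, second) + " " + first.length);  // prints "true 16"
  }
}
```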




[jira] [Updated] (HBASE-22939) SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned on.

2019-09-19 Thread Yiran Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HBASE-22939:
-
Attachment: HBASE-22939_branch-2.x.patch

> SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned 
> on.
> -
>
> Key: HBASE-22939
> URL: https://issues.apache.org/jira/browse/HBASE-22939
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22939-v0.patch, HBASE-22939-v1.patch, 
> HBASE-22939-v2.patch, HBASE-22939_branch-2.patch, HBASE-22939_branch-2.x.patch
>
>
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:665)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
>   at 
> org.apache.hadoop.hbase.quotas.policies.AbstractViolationPolicyEnforcement.getFileSize(AbstractViolationPolicyEnforcement.java:95)
>   at 
> org.apache.hadoop.hbase.quotas.policies.MissingSnapshotViolationPolicyEnforcement.computeBulkLoadSize(MissingSnapshotViolationPolicyEnforcement.java:53)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2407)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:374)
>   ... 3 more
> {code}





[jira] [Updated] (HBASE-22939) SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned on.

2019-09-19 Thread Yiran Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated HBASE-22939:
-
Attachment: HBASE-22939_branch-2.patch

> SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned 
> on.
> -
>
> Key: HBASE-22939
> URL: https://issues.apache.org/jira/browse/HBASE-22939
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22939-v0.patch, HBASE-22939-v1.patch, 
> HBASE-22939-v2.patch, HBASE-22939_branch-2.patch, HBASE-22939_branch-2.x.patch
>
>
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:665)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
>   at 
> org.apache.hadoop.hbase.quotas.policies.AbstractViolationPolicyEnforcement.getFileSize(AbstractViolationPolicyEnforcement.java:95)
>   at 
> org.apache.hadoop.hbase.quotas.policies.MissingSnapshotViolationPolicyEnforcement.computeBulkLoadSize(MissingSnapshotViolationPolicyEnforcement.java:53)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2407)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:374)
>   ... 3 more
> {code}





[jira] [Commented] (HBASE-23046) Remove compatibility case from truncate command

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933404#comment-16933404
 ] 

Hudson commented on HBASE-23046:


Results for branch master
[build #1458 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1458/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1458//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1458//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1458//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1458//console].


> Remove compatibility case from truncate command
> ---
>
> Key: HBASE-23046
> URL: https://issues.apache.org/jira/browse/HBASE-23046
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>
> The truncate and truncate_preserve commands in shell have a compatibility 
> block to handle the case when Master does not have truncate command.
> This was added in HBASE-8332 for HBase 0.99 so it is safe to remove it now.
> The current compatibility block catches DoNotRetryIOException, which can hide 
> different kinds of errors, and just drops and recreates the table.
> {code:ruby}
> begin
>   puts 'Truncating table...'
>   @admin.truncateTable(table_name, false)
> rescue => e
>   # Handle the compatibility case, where the truncate method doesn't exists 
> on the Master
>   raise e unless e.respond_to?(:cause) && !e.cause.nil?
>   rootCause = e.cause
>   if rootCause.is_a?(org.apache.hadoop.hbase.DoNotRetryIOException)
> # Handle the compatibility case, where the truncate method doesn't exists 
> on the Master
> puts 'Dropping table...'
> @admin.deleteTable(table_name)
> puts 'Creating table...'
> @admin.createTable(table_description)
>   else
> raise e
>   end
> end
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23041) Should not show split parent regions in HBCK report's unknown server part

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933263#comment-16933263
 ] 

Hudson commented on HBASE-23041:


Results for branch branch-2.1
[build #1616 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1616/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1616//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1616//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1616//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1616//console].


> Should not show split parent regions in HBCK report's unknown server part
> -
>
> Key: HBASE-23041
> URL: https://issues.apache.org/jira/browse/HBASE-23041
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>






[GitHub] [hbase] Apache-HBase commented on issue #641: HBASE-23051 Remove unneeded Mockito.mock invocations

2019-09-19 Thread GitBox
Apache-HBase commented on issue #641: HBASE-23051 Remove unneeded Mockito.mock 
invocations
URL: https://github.com/apache/hbase/pull/641#issuecomment-533096373
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m  8s |  master passed  |
   | :green_heart: |  compile  |   0m 59s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 28s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 39s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 31s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 29s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 36s |  the patch passed  |
   | :green_heart: |  compile  |   1m  0s |  the patch passed  |
   | :green_heart: |  javac  |   1m  0s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   1m 30s |  hbase-server: The patch 
generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m  1s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m 24s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 232m 27s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 295m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-641/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/641 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux ca5a83de5b3b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-641/out/precommit/personality/provided.sh
 |
   | git revision | master / a0e8723b73 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-641/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-641/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-641/1/testReport/
 |
   | Max. process+thread count | 5181 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-641/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-connectors] asf-ci commented on issue #43: HBASE-23032 Upgrade to Curator 4.2.0

2019-09-19 Thread GitBox
asf-ci commented on issue #43: HBASE-23032 Upgrade to Curator 4.2.0
URL: https://github.com/apache/hbase-connectors/pull/43#issuecomment-533096041
 
 
   
   Refer to this link for build results (access rights to CI server needed): 
   https://builds.apache.org/job/PreCommit-HBASE-CONNECTORS-Build/71/
   




[jira] [Commented] (HBASE-22927) Upgrade mockito version for Java 11 compatibility

2019-09-19 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933327#comment-16933327
 ] 

Peter Somogyi commented on HBASE-22927:
---

All, check the Hudson comments above. General check runs the build with 
-Prelease flag and probably GitHub PR build does not.

PR with a fix is linked here.

> Upgrade mockito version for Java 11 compatibility
> -
>
> Key: HBASE-22927
> URL: https://issues.apache.org/jira/browse/HBASE-22927
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sakthi
>Assignee: Rabi Kumar K C
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> Pasting the discussion from HBASE-22534 here:
> "Currently mockito-core version is at 2.1.0. According to 
> [https://github.com/mockito/mockito/blob/release/2.x/doc/release-notes/official.md],
>  looks like Java 11 compatibility was introduced in 2.19+. And 2.23.2 claims 
> to have full java 11 support after byte-buddy fix etc."





[jira] [Commented] (HBASE-22939) SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned on.

2019-09-19 Thread Yiran Wu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933350#comment-16933350
 ] 

Yiran Wu commented on HBASE-22939:
--

Add new patch for branch-2 and reopen this issue.

 [^HBASE-22939_branch-2.patch]   for branch-2
 [^HBASE-22939_branch-2.x.patch] for branch-2.1  and branch-2.2

> SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned 
> on.
> -
>
> Key: HBASE-22939
> URL: https://issues.apache.org/jira/browse/HBASE-22939
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22939-v0.patch, HBASE-22939-v1.patch, 
> HBASE-22939-v2.patch, HBASE-22939_branch-2.patch, HBASE-22939_branch-2.x.patch
>
>
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:665)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
>   at 
> org.apache.hadoop.hbase.quotas.policies.AbstractViolationPolicyEnforcement.getFileSize(AbstractViolationPolicyEnforcement.java:95)
>   at 
> org.apache.hadoop.hbase.quotas.policies.MissingSnapshotViolationPolicyEnforcement.computeBulkLoadSize(MissingSnapshotViolationPolicyEnforcement.java:53)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2407)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:374)
>   ... 3 more
> {code}
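The "Wrong FS" failure in the stack trace above comes from Hadoop comparing the bulkload path's URI against the cluster's default filesystem URI. The following is a minimal, dependency-free sketch of that scheme/authority comparison; the class and method names are illustrative, not the actual Hadoop implementation.

```java
import java.net.URI;

public class WrongFsSketch {
  /** Returns true when the path's URI belongs to the given filesystem URI. */
  static boolean sameFileSystem(URI fsUri, URI pathUri) {
    if (pathUri.getScheme() == null) {
      return true; // relative path: it inherits the default filesystem
    }
    if (!fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme())) {
      return false;
    }
    String fsAuth = fsUri.getAuthority();
    String pathAuth = pathUri.getAuthority();
    return fsAuth == null ? pathAuth == null : fsAuth.equalsIgnoreCase(pathAuth);
  }

  public static void main(String[] args) {
    URI defaultFs = URI.create("hdfs://snake");
    URI bulkLoadSrc = URI.create("hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/f");
    // Same scheme, different authority: this is the mismatch that surfaces
    // as IllegalArgumentException("Wrong FS: ...") in the stack trace.
    System.out.println(sameFileSystem(defaultFs, bulkLoadSrc)); // false
  }
}
```

Assuming this is the root cause, the likely direction of a fix is to resolve the source path against its own filesystem (e.g. `path.getFileSystem(conf)`) instead of the cluster default before calling `getFileStatus`.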





[jira] [Reopened] (HBASE-22939) SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned on.

2019-09-19 Thread Yiran Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu reopened HBASE-22939:
--

> SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned 
> on.
> -
>
> Key: HBASE-22939
> URL: https://issues.apache.org/jira/browse/HBASE-22939
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22939-v0.patch, HBASE-22939-v1.patch, 
> HBASE-22939-v2.patch, HBASE-22939_branch-2.patch, HBASE-22939_branch-2.x.patch
>
>
> {code:java}
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d,
>  expected: hdfs://snake
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:665)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
>   at 
> org.apache.hadoop.hbase.quotas.policies.AbstractViolationPolicyEnforcement.getFileSize(AbstractViolationPolicyEnforcement.java:95)
>   at 
> org.apache.hadoop.hbase.quotas.policies.MissingSnapshotViolationPolicyEnforcement.computeBulkLoadSize(MissingSnapshotViolationPolicyEnforcement.java:53)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2407)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:374)
>   ... 3 more
> {code}





[GitHub] [hbase] Apache-HBase commented on issue #631: HBASE-23035 Retain region to the last RegionServer make the failover …

2019-09-19 Thread GitBox
Apache-HBase commented on issue #631: HBASE-23035 Retain region to the last 
RegionServer make the failover …
URL: https://github.com/apache/hbase/pull/631#issuecomment-533128733
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 52s |  master passed  |
   | :green_heart: |  compile  |   1m 13s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 47s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 40s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 43s |  master passed  |
   | :blue_heart: |  spotbugs  |   5m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 59s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   7m  0s |  the patch passed  |
   | :green_heart: |  compile  |   1m 17s |  the patch passed  |
   | :green_heart: |  javac  |   1m 17s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 40s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  21m 28s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 46s |  the patch passed  |
   | :green_heart: |  findbugs  |   5m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 259m  9s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 333m 52s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSideWithCoprocessor 
|
   |   | hadoop.hbase.client.TestAdmin2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/631 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 8a94a1937e35 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-631/out/precommit/personality/provided.sh
 |
   | git revision | master / a0e8723b73 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/3/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/3/testReport/
 |
   | Max. process+thread count | 4684 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-631/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Resolved] (HBASE-22927) Upgrade mockito version for Java 11 compatibility

2019-09-19 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-22927.
---
Resolution: Fixed

Pushed addendum to branch-2.1+. Thanks [~Apache9] for reviewing.

> Upgrade mockito version for Java 11 compatibility
> -
>
> Key: HBASE-22927
> URL: https://issues.apache.org/jira/browse/HBASE-22927
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sakthi
>Assignee: Rabi Kumar K C
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> Pasting the discussion from HBASE-22534 here:
> "Currently mockito-core version is at 2.1.0. According to 
> [https://github.com/mockito/mockito/blob/release/2.x/doc/release-notes/official.md],
>  looks like Java 11 compatibility was introduced in 2.19+. And 2.23.2 claims 
> to have full java 11 support after byte-buddy fix etc."





[GitHub] [hbase] Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-19 Thread GitBox
Reidddddd commented on a change in pull request #615: HBASE-22975 Add read and 
write QPS metrics at server level and table level
URL: https://github.com/apache/hbase/pull/615#discussion_r326095727
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
 ##
 @@ -6598,6 +6605,9 @@ public boolean nextRaw(List<Cell> outResults, 
ScannerContext scannerContext)
 // scanner is closed
 throw new UnknownScannerException("Scanner was closed");
   }
+  if (rsServices != null && rsServices.getMetrics() != null) {
+
rsServices.getMetrics().updateReadQueryMeter(getRegionInfo().getTable());
+  }
 
 Review comment:
   Is the place right? The `readRequestsCount` is after outResults fetched.




[jira] [Commented] (HBASE-23041) Should not show split parent regions in HBCK report's unknown server part

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933251#comment-16933251
 ] 

Hudson commented on HBASE-23041:


Results for branch branch-2
[build #2277 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2277/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2277//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2277//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2277//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2277//console].


> Should not show split parent regions in HBCK report's unknown server part
> -
>
> Key: HBASE-23041
> URL: https://issues.apache.org/jira/browse/HBASE-23041
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>






[jira] [Resolved] (HBASE-23049) TableDescriptors#getAll should return the tables ordering by the name which contain namespace

2019-09-19 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-23049.

Fix Version/s: 2.2.2
   2.1.7
   2.3.0
   3.0.0
   Resolution: Fixed

Pushed to branch-2.1+. Thanks [~stack] for reviewing.

> TableDescriptors#getAll should return the tables ordering by the name which 
> contain namespace
> -
>
> Key: HBASE-23049
> URL: https://issues.apache.org/jira/browse/HBASE-23049
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> TableDescriptors#getAll returns a TreeMap ordered by 
> TableName#getNameAsString. But if the namespace is "default", 
> TableName#getNameAsString just returns the name, which does not contain the 
> namespace "default". Should use TableName#getNameWithNamespaceInclAsString. 
> It will affect the table order in the Tables UI and in the shell "list" result.
>  
>  
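The ordering difference described above can be reproduced with a plain TreeMap. This sketch is illustrative only: it mimics the two key choices (short name with "default:" stripped vs. fully qualified name) rather than calling the actual TableName API.

```java
import java.util.TreeMap;

public class TableOrderSketch {
  public static void main(String[] args) {
    // Keying by getNameAsString(): the "default:" prefix is stripped,
    // so a default-namespace table sorts after all namespaced tables.
    TreeMap<String, String> byShortName = new TreeMap<>();
    byShortName.put("t1", "default:t1");   // default namespace, prefix stripped
    byShortName.put("ns1:t2", "ns1:t2");
    byShortName.put("abc:t3", "abc:t3");
    System.out.println(byShortName.keySet()); // [abc:t3, ns1:t2, t1]

    // Keying by getNameWithNamespaceInclAsString(): always fully qualified,
    // so all tables interleave by namespace as the UI and "list" expect.
    TreeMap<String, String> byFullName = new TreeMap<>();
    byFullName.put("default:t1", "default:t1");
    byFullName.put("ns1:t2", "ns1:t2");
    byFullName.put("abc:t3", "abc:t3");
    System.out.println(byFullName.keySet()); // [abc:t3, default:t1, ns1:t2]
  }
}
```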





[jira] [Commented] (HBASE-23047) ChecksumUtil.validateChecksum logs an INFO message inside a "if(LOG.isTraceEnabled())" block.

2019-09-19 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933437#comment-16933437
 ] 

HBase QA commented on HBASE-23047:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
 0s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
39s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}243m 41s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}309m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/910/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-23047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980696/HBASE-23047.master.002.patch
 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 722ab51c79b3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 08b82c5c8c |
| Default Java | 1.8.0_181 |
| unit | 

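The HBASE-23047 title names a common logging pitfall: a statement logged at INFO sitting inside an `if (LOG.isTraceEnabled())` guard, so the message is suppressed at the level it claims to log at. A minimal sketch of the bug pattern and its fix, using java.util.logging in place of HBase's slf4j (class and method names here are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardMismatchSketch {
  private static final Logger LOG = Logger.getLogger("checksum");

  static void buggy(String msg) {
    // Bug pattern: the guard checks TRACE (FINEST here) but logs at INFO,
    // so the message is silently dropped unless tracing is enabled.
    if (LOG.isLoggable(Level.FINEST)) {
      LOG.info(msg);
    }
  }

  static void fixed(String msg) {
    // Fix: the logged level matches the guard level.
    if (LOG.isLoggable(Level.FINEST)) {
      LOG.finest(msg);
    }
  }
}
```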
[jira] [Reopened] (HBASE-22699) refactor isMetaClearingException

2019-09-19 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi reopened HBASE-22699:
---

Commit message incorrectly used HBASE-22700 and the change broke 
TestAsyncProcess. Let me revert it temporarily.

> refactor isMetaClearingException
> 
>
> Key: HBASE-22699
> URL: https://issues.apache.org/jira/browse/HBASE-22699
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.2
>
>
> It is not so readable





[GitHub] [hbase] virajjasani commented on a change in pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-19 Thread GitBox
virajjasani commented on a change in pull request #600: HBASE-22460 : Reopen 
regions with very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600#discussion_r326124979
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionsRecoveryChore.java
 ##
 @@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.RegionMetrics;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.PerClientRandomNonceGenerator;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+/**
+ * This chore, every time it runs, will try to recover regions with high store ref count
+ * by reopening them
+ */
+@InterfaceAudience.Private
+public class RegionsRecoveryChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RegionsRecoveryChore.class);
+
+  private static final String REGIONS_RECOVERY_CHORE_NAME = "RegionsRecoveryChore";
+
+  private static final String REGIONS_RECOVERY_INTERVAL =
+    "hbase.master.regions.recovery.interval";
+  private static final String STORE_REF_COUNT_THRESHOLD = "hbase.regions.recovery.store.count";
+
+  private static final int DEFAULT_REGIONS_RECOVERY_INTERVAL = 1200 * 1000; // Default 20 min
+  private static final int DEFAULT_STORE_REF_COUNT_THRESHOLD = 256;
+
+  private static final String ERROR_REOPEN_REGIONS_MSG =
+    "Error reopening regions with high storeRefCount. ";
+
+  private final HMaster hMaster;
+  private final int storeRefCountThreshold;
+
+  private static final PerClientRandomNonceGenerator NONCE_GENERATOR =
+    PerClientRandomNonceGenerator.get();
+
+  /**
+   * Construct RegionsRecoveryChore with provided params
+   *
+   * @param stopper When {@link Stoppable#isStopped()} is true, this chore will cancel and cleanup
+   * @param configuration The configuration params to be used
+   * @param hMaster HMaster instance to initiate RegionTableRegions
+   */
+  RegionsRecoveryChore(final Stoppable stopper, final Configuration configuration,
+      final HMaster hMaster) {
+    super(REGIONS_RECOVERY_CHORE_NAME, stopper,
+      configuration.getInt(REGIONS_RECOVERY_INTERVAL, DEFAULT_REGIONS_RECOVERY_INTERVAL));
+    this.hMaster = hMaster;
+    this.storeRefCountThreshold = configuration.getInt(STORE_REF_COUNT_THRESHOLD,
+      DEFAULT_STORE_REF_COUNT_THRESHOLD);
+  }
+
+  @Override
+  protected void chore() {
+    if (LOG.isTraceEnabled()) {
+      LOG.trace("Starting up Regions Recovery by reopening regions based on storeRefCount...");
+    }
+    try {
+      final ClusterMetrics clusterMetrics = hMaster.getClusterMetrics();
+      final Map<ServerName, ServerMetrics> serverMetricsMap =
+        clusterMetrics.getLiveServerMetrics();
+      final Map<TableName, List<byte[]>> tableToReopenRegionsMap =
+        getTableToRegionsByRefCount(serverMetricsMap);
+      if (MapUtils.isNotEmpty(tableToReopenRegionsMap)) {
+        tableToReopenRegionsMap.forEach((tableName, regionNames) -> {
+          try {
+            LOG.warn("Reopening regions due to high refCount. TableName: {} , noOfRegions: {}",
+              tableName, regionNames.size());
+            hMaster.reopenRegions(tableName, regionNames, NONCE_GENERATOR.getNonceGroup(),
+              NONCE_GENERATOR.newNonce());
+          } catch (IOException e) {
+            LOG.error("{} tableName: {}, regionNames: {}", ERROR_REOPEN_REGIONS_MSG,
+              tableName, regionNames, e);
+          }
+        });
+      }
+    } catch (Exception e) {
+      LOG.error("Error while reopening regions based on

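The per-table grouping the chore delegates to `getTableToRegionsByRefCount(...)` (referenced but not shown in the excerpt above) can be sketched with plain collections. This is an illustration only: HBase's `ServerMetrics`/`RegionMetrics` types are replaced with simple maps, and all names below are assumptions, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StoreRefCountFilterSketch {

  // Group regions whose store ref count exceeds the threshold by their table.
  // regionRefCount: region name -> store ref count
  // regionTable:    region name -> table name
  public static Map<String, List<String>> tablesToReopen(
      Map<String, Integer> regionRefCount, Map<String, String> regionTable, int threshold) {
    Map<String, List<String>> toReopen = new HashMap<>();
    for (Map.Entry<String, Integer> e : regionRefCount.entrySet()) {
      if (e.getValue() > threshold) {
        toReopen.computeIfAbsent(regionTable.get(e.getKey()), t -> new ArrayList<>())
            .add(e.getKey());
      }
    }
    return toReopen;
  }

  public static void main(String[] args) {
    Map<String, Integer> refCounts = new HashMap<>();
    refCounts.put("region-a", 300);
    refCounts.put("region-b", 10);
    Map<String, String> tables = new HashMap<>();
    tables.put("region-a", "t1");
    tables.put("region-b", "t1");
    // Only region-a crosses the default threshold of 256.
    System.out.println(tablesToReopen(refCounts, tables, 256));
  }
}
```

The chore would then iterate this map and issue a reopen per table, as in the `forEach` above.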
[jira] [Commented] (HBASE-23052) hbase-thirdparty version of GSON that works for branch-1

2019-09-19 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933441#comment-16933441
 ] 

Sean Busbey commented on HBASE-23052:
-

I'm going to put up PRs for each of these so folks can see some practical 
implications. Please don't merge any of them until they've been gone over here.

> hbase-thirdparty version of GSON that works for branch-1
> 
>
> Key: HBASE-23052
> URL: https://issues.apache.org/jira/browse/HBASE-23052
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>
> HBASE-23015 is buttoning up a needed move off of jackson 1 in branches-1. 
> We've already got the implementation work in place to move onto the 
> hbase-thirdparty relocated GSON, but we can't currently build because other 
> dependencies included in the miscellaneous module are JDK8+ only and branch-1 
> needs to work for jdk7.
> A couple of options:
> * make the entire hbase-thirdparty repo work with jdk7
> * break out gson from the clearing house miscellaneous module and make *just* 
> the new gson module jdk7 compatible
> * make a jdk7 compatible miscellaneous module and move gson over there (in 
> case we decide to move branch-1 off of other problematic libraries e.g. guava)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23047) ChecksumUtil.validateChecksum logs an INFO message inside a "if(LOG.isTraceEnabled())" block.

2019-09-19 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933286#comment-16933286
 ] 

Peter Somogyi commented on HBASE-23047:
---

+1
{quote}Should we review checkstyle rules accordingly, then?
{quote}
I think both 2 and 4 spaces are allowed. I didn't want to have mixed 2 and 4 
spaces in the first patch for the same command.

> ChecksumUtil.validateChecksum logs an INFO message inside a 
> "if(LOG.isTraceEnabled())" block.
> -
>
> Key: HBASE-23047
> URL: https://issues.apache.org/jira/browse/HBASE-23047
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0, 2.2.1, 2.1.6
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-23047.master.001.patch, 
> HBASE-23047.master.002.patch
>
>
> Noticed this while analysing another potential checksum issue. Despite doing 
> a check for TRACE level, we log an INFO message inside the if block:
> {noformat}
> if (LOG.isTraceEnabled()) {
>   LOG.info("dataLength=" + buf.capacity() + ", sizeWithHeader=" + 
> onDiskDataSizeWithHeader
>   + ", checksumType=" + ctype.getName() + ", file=" + pathName + ", 
> offset=" + offset
>   + ", headerSize=" + hdrSize + ", bytesPerChecksum=" + 
> bytesPerChecksum);
> }
> {noformat}
> Uploading a patch that logs a TRACE message and switches to parameterised 
> logging. Since there's no extra computation in the param passing, we 
> shouldn't need the extra if either.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
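The "no extra computation" point from the issue above — lazy message construction removing the need for an explicit level guard — can be sketched with stdlib `java.util.logging` standing in for SLF4J's `{}`-parameterised logging. The class, message, and counter below are illustrative assumptions, not code from the patch.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingSketch {
  static int buildCount = 0; // counts how often the message string is actually built

  static String expensiveMessage() {
    buildCount++;
    return "dataLength=... checksumType=... offset=... headerSize=...";
  }

  public static void main(String[] args) {
    Logger log = Logger.getLogger("sketch");
    log.setLevel(Level.INFO); // FINEST (trace-equivalent) is disabled

    // Supplier form: the message is NOT built because FINEST is not loggable,
    // so no isTraceEnabled()-style guard is needed around this call.
    log.log(Level.FINEST, LazyLoggingSketch::expensiveMessage);
    System.out.println(buildCount); // still 0
  }
}
```

SLF4J achieves the same effect by deferring `{}` substitution until the level check passes, which is why the patch can drop the surrounding `if`.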


[jira] [Commented] (HBASE-23041) Should not show split parent regions in HBCK report's unknown server part

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933283#comment-16933283
 ] 

Hudson commented on HBASE-23041:


Results for branch branch-2.2
[build #621 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/621/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/621//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/621//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/621//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/621//console].


> Should not show split parent regions in HBCK report's unknown server part
> -
>
> Key: HBASE-23041
> URL: https://issues.apache.org/jira/browse/HBASE-23041
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
Apache-HBase commented on issue #633: HBASE-22890 Verify the file integrity in 
persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#issuecomment-533088634
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 43s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 
new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | :green_heart: |  mvninstall  |   8m  7s |  branch-1 passed  |
   | :green_heart: |  compile  |   2m  7s |  branch-1 passed with JDK 
v1.8.0_222  |
   | :broken_heart: |  compile  |   0m 23s |  hbase-server in branch-1 failed 
with JDK v1.7.0_232.  |
   | :green_heart: |  checkstyle  |   1m 35s |  branch-1 passed  |
   | :green_heart: |  shadedjars  |   2m 51s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 36s |  branch-1 passed with JDK 
v1.8.0_222  |
   | :green_heart: |  javadoc  |   0m 40s |  branch-1 passed with JDK 
v1.7.0_232  |
   | :blue_heart: |  spotbugs  |   2m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   2m 43s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   1m 53s |  the patch passed  |
   | :green_heart: |  compile  |   1m 42s |  the patch passed with JDK 
v1.8.0_222  |
   | :green_heart: |  javac  |   1m 42s |  the patch passed  |
   | :broken_heart: |  compile  |   0m 23s |  hbase-server in the patch failed 
with JDK v1.7.0_232.  |
   | :broken_heart: |  javac  |   0m 23s |  hbase-server in the patch failed 
with JDK v1.7.0_232.  |
   | :broken_heart: |  checkstyle  |   1m 33s |  hbase-server: The patch 
generated 1 new + 48 unchanged - 4 fixed = 49 total (was 52)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   2m 46s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |   4m 59s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | :green_heart: |  javadoc  |   0m 31s |  the patch passed with JDK 
v1.8.0_222  |
   | :green_heart: |  javadoc  |   0m 40s |  the patch passed with JDK 
v1.7.0_232  |
   | :green_heart: |  findbugs  |   2m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 116m 56s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 158m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/633 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 07fdd55ccb3f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-633/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 5c4d8e0 |
   | Default Java | 1.7.0_232 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_222 
/usr/lib/jvm/zulu-7-amd64:1.7.0_232 |
   | compile | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/artifact/out/branch-compile-hbase-server-jdk1.7.0_232.txt
 |
   | compile | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/artifact/out/patch-compile-hbase-server-jdk1.7.0_232.txt
 |
   | javac | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/artifact/out/patch-compile-hbase-server-jdk1.7.0_232.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/testReport/
 |
   | Max. process+thread count | 4321 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-633/6/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Commented] (HBASE-22927) Upgrade mockito version for Java 11 compatibility

2019-09-19 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933325#comment-16933325
 ] 

Duo Zhang commented on HBASE-22927:
---

On which branch? Our pre-commit didn't find this problem?

> Upgrade mockito version for Java 11 compatibility
> -
>
> Key: HBASE-22927
> URL: https://issues.apache.org/jira/browse/HBASE-22927
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sakthi
>Assignee: Rabi Kumar K C
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> Pasting the discussion from HBASE-22534 here:
> "Currently mockito-core version is at 2.1.0. According to 
> [https://github.com/mockito/mockito/blob/release/2.x/doc/release-notes/official.md],
>  looks like Java 11 compatibility was introduced in 2.19+. And 2.23.2 claims 
> to have full java 11 support after byte-buddy fix etc."



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22700) Incorrect timeout in recommended ZooKeeper configuration

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933403#comment-16933403
 ] 

Hudson commented on HBASE-22700:


Results for branch branch-2
[build #2279 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2279/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2279//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2279//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2279//console].


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2279//console].


> Incorrect timeout in recommended ZooKeeper configuration
> 
>
> Key: HBASE-22700
> URL: https://issues.apache.org/jira/browse/HBASE-22700
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Peter Somogyi
>Assignee: maoling
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.6
>
>
> The [Recommended 
> configuration|https://hbase.apache.org/book.html#recommended_configurations.zk]
>  section for ZooKeeper states that the default zookeeper.session.timeout is 3 
> minutes, however, the [default 
> configuration|https://github.com/apache/hbase/blob/master/hbase-common/src/main/resources/hbase-default.xml#L372-L373]
>  is 90 seconds:
>  
> {code:java}
> /** Default value for ZooKeeper session timeout */
> public static final int DEFAULT_ZK_SESSION_TIMEOUT = 90 * 1000;
> {code}
>  
> This section in the documentation should be modified to reflect the default 
> configuration.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-connectors] tamaashu edited a comment on issue #43: HBASE-23032 Upgrade to Curator 4.2.0

2019-09-19 Thread GitBox
tamaashu edited a comment on issue #43: HBASE-23032 Upgrade to Curator 4.2.0
URL: https://github.com/apache/hbase-connectors/pull/43#issuecomment-533140487
 
 
   I think we have to exclude the ZooKeeper arriving with Curator and Hadoop to 
be able to use the same ZooKeeper as in HBase. See 
https://curator.apache.org/zk-compatibility.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
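The exclusion being discussed could look like the following in a Maven `pom.xml` — a hedged sketch, not taken from the PR; the artifact chosen (`curator-recipes`) and its version are assumptions for illustration:

```xml
<!-- Sketch: exclude Curator's transitive ZooKeeper so the HBase-provided
     ZooKeeper version is used instead (see zk-compatibility notes above). -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```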


[jira] [Commented] (HBASE-22699) refactor isMetaClearingException

2019-09-19 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933291#comment-16933291
 ] 

Peter Somogyi commented on HBASE-22699:
---

Reverted commits from branch-2.2, branch-2 and master.

[~Joseph295], could you take a look into the failed test in TestAsyncProcess? 
It only affects branch-2 and branch-2.2 but not master.

> refactor isMetaClearingException
> 
>
> Key: HBASE-22699
> URL: https://issues.apache.org/jira/browse/HBASE-22699
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.2
>
>
> It is not so readable



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] petersomogyi opened a new pull request #643: HBASE-22927 Upgrade Mockito version for jdk11 - ADDENDUM

2019-09-19 Thread GitBox
petersomogyi opened a new pull request #643: HBASE-22927 Upgrade Mockito 
version for jdk11 - ADDENDUM
URL: https://github.com/apache/hbase/pull/643
 
 
   Use correct version for extra-enforcer-rules


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] petersomogyi commented on issue #643: HBASE-22927 Upgrade Mockito version for jdk11 - ADDENDUM

2019-09-19 Thread GitBox
petersomogyi commented on issue #643: HBASE-22927 Upgrade Mockito version for 
jdk11 - ADDENDUM
URL: https://github.com/apache/hbase/pull/643#issuecomment-533103031
 
 
   Executed `mvn clean install -DskipTests -Prelease`, with this change the run 
was successful.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] infraio merged pull request #639: HBASE-23049 TableDescriptors#getAll should return the tables ordering…

2019-09-19 Thread GitBox
infraio merged pull request #639: HBASE-23049 TableDescriptors#getAll should 
return the tables ordering…
URL: https://github.com/apache/hbase/pull/639
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-connectors] tamaashu commented on issue #43: HBASE-23032 Upgrade to Curator 4.2.0

2019-09-19 Thread GitBox
tamaashu commented on issue #43: HBASE-23032 Upgrade to Curator 4.2.0
URL: https://github.com/apache/hbase-connectors/pull/43#issuecomment-533140487
 
 
   I think we have to exclude the ZooKeeper arriving with Curator to be able to 
use the same ZooKeeper as in HBase. See 
https://curator.apache.org/zk-compatibility.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ZhaoBQ commented on a change in pull request #633: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread GitBox
ZhaoBQ commented on a change in pull request #633: HBASE-22890 Verify the file 
integrity in persistent IOEngine
URL: https://github.com/apache/hbase/pull/633#discussion_r326198728
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
 ##
 @@ -68,15 +97,18 @@ public FileIOEngine(long capacity, String... filePaths) 
throws IOException {
   // The next setting length will throw exception,logging this message
   // is just used for the detail reason of exception,
   String msg = "Only " + StringUtils.byteDesc(totalSpace)
-  + " total space under " + filePath + ", not enough for requested "
-  + StringUtils.byteDesc(sizePerFile);
++ " total space under " + filePath + ", not enough for requested "
++ StringUtils.byteDesc(sizePerFile);
   LOG.warn(msg);
 }
-rafs[i].setLength(sizePerFile);
+File file = new File(filePath);
+if (file.length() != sizePerFile) {
+  rafs[i].setLength(sizePerFile);
+}
 
 Review comment:
   If I understand you correctly, you mean changing the bucket cache size before 
restarting the RS. Actually, the `retrieveFromFile()` method already checks the 
bucket cache size:
   `if (capacitySize != cacheCapacity)
   throw new IOException("Mismatched cache capacity:"
   + StringUtils.byteDesc(capacitySize) + ", expected: "
   + StringUtils.byteDesc(cacheCapacity));`
   So after changing the bucket cache size it will no longer retrieve from the file. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
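The change under review — only pre-sizing the persistent cache file when its current length differs from the expected size, so an already correctly sized file survives a restart untouched — can be sketched with stdlib I/O. File names, sizes, and the helper below are illustrative assumptions, not HBase's `FileIOEngine` code.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;

public class PresizeCacheFileSketch {

  // Returns true if the file had to be (re)sized, false if it was left as-is.
  public static boolean ensureLength(File file, long sizePerFile) throws IOException {
    if (file.length() == sizePerFile) {
      return false; // already the right size; keep existing cache contents
    }
    try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
      raf.setLength(sizePerFile);
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    File f = Files.createTempFile("bucketcache", ".data").toFile();
    f.deleteOnExit();
    System.out.println(ensureLength(f, 1024)); // fresh empty file: resized -> true
    System.out.println(ensureLength(f, 1024)); // second call: no-op -> false
  }
}
```

Skipping `setLength()` on a matching file is what makes the persisted cache contents (and any integrity check over them) survive the restart.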


[jira] [Work started] (HBASE-23052) hbase-thirdparty version of GSON that works for branch-1

2019-09-19 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23052 started by Sean Busbey.
---
> hbase-thirdparty version of GSON that works for branch-1
> 
>
> Key: HBASE-23052
> URL: https://issues.apache.org/jira/browse/HBASE-23052
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>
> HBASE-23015 is buttoning up a needed move off of jackson 1 in branches-1. 
> We've already got the implementation work in place to move onto the 
> hbase-thirdparty relocated GSON, but we can't currently build because other 
> dependencies included in the miscellaneous module are JDK8+ only and branch-1 
> needs to work for jdk7.
> A couple of options:
> * make the entire hbase-thirdparty repo work with jdk7
> * break out gson from the clearing house miscellaneous module and make *just* 
> the new gson module jdk7 compatible
> * make a jdk7 compatible miscellaneous module and move gson over there (in 
> case we decide to move branch-1 off of other problematic libraries e.g. guava)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23052) hbase-thirdparty version of GSON that works for branch-1

2019-09-19 Thread Sean Busbey (Jira)
Sean Busbey created HBASE-23052:
---

 Summary: hbase-thirdparty version of GSON that works for branch-1
 Key: HBASE-23052
 URL: https://issues.apache.org/jira/browse/HBASE-23052
 Project: HBase
  Issue Type: Improvement
  Components: dependencies
Reporter: Sean Busbey
Assignee: Sean Busbey


HBASE-23015 is buttoning up a needed move off of jackson 1 in branches-1. We've 
already got the implementation work in place to move onto the hbase-thirdparty 
relocated GSON, but we can't currently build because other dependencies 
included in the miscellaneous module are JDK8+ only and branch-1 needs to work 
for jdk7.

A couple of options:

* make the entire hbase-thirdparty repo work with jdk7
* break out gson from the clearing house miscellaneous module and make *just* 
the new gson module jdk7 compatible
* make a jdk7 compatible miscellaneous module and move gson over there (in case 
we decide to move branch-1 off of other problematic libraries e.g. guava)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22700) Incorrect timeout in recommended ZooKeeper configuration

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933634#comment-16933634
 ] 

Hudson commented on HBASE-22700:


Results for branch branch-2.2
[build #622 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/622/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/622//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/622//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/622//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/622//console].


> Incorrect timeout in recommended ZooKeeper configuration
> 
>
> Key: HBASE-22700
> URL: https://issues.apache.org/jira/browse/HBASE-22700
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Peter Somogyi
>Assignee: maoling
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.6
>
>
> The [Recommended 
> configuration|https://hbase.apache.org/book.html#recommended_configurations.zk]
>  section for ZooKeeper states that the default zookeeper.session.timeout is 3 
> minutes, however, the [default 
> configuration|https://github.com/apache/hbase/blob/master/hbase-common/src/main/resources/hbase-default.xml#L372-L373]
>  is 90 seconds:
>  
> {code:java}
> /** Default value for ZooKeeper session timeout */
> public static final int DEFAULT_ZK_SESSION_TIMEOUT = 90 * 1000;
> {code}
>  
> This section in the documentation should be modified to reflect the default 
> configuration.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on a change in pull request #361: HBase-22027: Split non-MR related parts of TokenUtil off into a Clien…

2019-09-19 Thread GitBox
saintstack commented on a change in pull request #361: HBase-22027: Split 
non-MR related parts of TokenUtil off into a Clien…
URL: https://github.com/apache/hbase/pull/361#discussion_r326243198
 
 

 ##
 File path: 
hbase-client/src/test/java/org/apache/hadoop/hbase/security/token/TestClientTokenUtil.java
 ##
 @@ -43,19 +42,19 @@
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 
 @Category(SmallTests.class)
-public class TestTokenUtil {
+public class TestClientTokenUtil {
 
   @ClassRule
   public static final HBaseClassTestRule CLASS_RULE =
-HBaseClassTestRule.forClass(TestTokenUtil.class);
+  HBaseClassTestRule.forClass(TestClientTokenUtil.class);
 
 Review comment:
   nit spacing


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23044) CatalogJanitor#cleanMergeQualifier may clean wrong parent regions

2019-09-19 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933520#comment-16933520
 ] 

stack commented on HBASE-23044:
---

bq. Yes, may cause data loss. Introduced by HBASE-22777, only the latest 
release was affected.

I should roll new releases?

> CatalogJanitor#cleanMergeQualifier may clean wrong parent regions
> -
>
> Key: HBASE-23044
> URL: https://issues.apache.org/jira/browse/HBASE-23044
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.6, 2.2.1, 2.1.6
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> 2019-09-17,19:42:40,539 INFO [PEWorker-1] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223589, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[81b6fc3c560a00692bc7c3cd266a626a], 
> [472500358997b0dc8f0002ec86593dcf] in 2.6470sec
> 2019-09-17,19:59:54,179 INFO [PEWorker-6] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223651, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[9c52f24e0a9cc9b4959c1ebdfea29d64], 
> [a623f298870df5581bcfae7f83311b33] in 1.0340sec
> The child is same region {color:red}647600d28633bb2fe06b40682bab0593{color} 
> but the parent regions are different.
> MergeTableRegionProcedure#prepareMergeRegion will try to cleanMergeQualifier 
> for the regions to merge.
> {code:java}
> for (RegionInfo ri: this.regionsToMerge) {
>   if (!catalogJanitor.cleanMergeQualifier(ri)) {
> String msg = "Skip merging " + RegionInfo.getShortNameToLog(regionsToMerge) +
> ", because parent " + RegionInfo.getShortNameToLog(ri) + " has a merge qualifier";
> LOG.warn(msg);
> throw new MergeRegionException(msg);
>   }
> {code}
> If region A and B merge to C, and region D and E merge to F, then when merging C 
> and F, it will try to cleanMergeQualifier for C and F. 
> catalogJanitor.cleanMergeQualifier for region C succeeds but 
> catalogJanitor.cleanMergeQualifier for region F fails as there are 
> references in region F.
> When merging C and F again, it will try to cleanMergeQualifier for C and F 
> again. But MetaTableAccessor.getMergeRegions will get the wrong parents now. It 
> uses a scan with a filter to get the result. But region C's MergeQualifier was 
> already deleted before. Then the scan will return a wrong result, maybe another 
> region.
> {code:java}
> public boolean cleanMergeQualifier(final RegionInfo region) throws IOException {
> // Get merge regions if it is a merged region and already has merge qualifier
> List<RegionInfo> parents = MetaTableAccessor.getMergeRegions(this.services.getConnection(),
> region.getRegionName());
> if (parents == null || parents.isEmpty()) {
>   // It doesn't have merge qualifier, no need to clean
>   return true;
> }
> return cleanMergeRegion(region, parents);
>   }
> public static List<RegionInfo> getMergeRegions(Connection connection, byte[] regionName)
>   throws IOException {
> return getMergeRegions(getMergeRegionsRaw(connection, regionName));
>   }
> private static Cell[] getMergeRegionsRaw(Connection connection, byte[] regionName)
>   throws IOException {
> Scan scan = new Scan().withStartRow(regionName).
> setOneRowLimit().
> readVersions(1).
> addFamily(HConstants.CATALOG_FAMILY).
> setFilter(new QualifierFilter(CompareOperator.EQUAL,
>   new RegexStringComparator(HConstants.MERGE_QUALIFIER_PREFIX_STR+ 
> ".*")));
> try (Table m = getMetaHTable(connection); ResultScanner scanner = 
> m.getScanner(scan)) {
>   // Should be only one result in this scanner if any.
>   Result result = scanner.next();
>   if (result == null) {
> return null;
>   }
>   // Should be safe to just return all Cells found since we had filter in 
> place.
>   // All values should be RegionInfos or something wrong.
>   return result.rawCells();
> }
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23044) CatalogJanitor#cleanMergeQualifier may clean wrong parent regions

2019-09-19 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933531#comment-16933531
 ] 

stack commented on HBASE-23044:
---

[~zghao] How did you figure this one? It looks painful to figure. Nice fix.

> CatalogJanitor#cleanMergeQualifier may clean wrong parent regions
> -
>
> Key: HBASE-23044
> URL: https://issues.apache.org/jira/browse/HBASE-23044
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.6, 2.2.1, 2.1.6
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> 2019-09-17,19:42:40,539 INFO [PEWorker-1] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223589, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[81b6fc3c560a00692bc7c3cd266a626a], 
> [472500358997b0dc8f0002ec86593dcf] in 2.6470sec
> 2019-09-17,19:59:54,179 INFO [PEWorker-6] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223651, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[9c52f24e0a9cc9b4959c1ebdfea29d64], 
> [a623f298870df5581bcfae7f83311b33] in 1.0340sec
> The child is the same region {color:red}647600d28633bb2fe06b40682bab0593{color},
> but the parent regions are different.
> MergeTableRegionsProcedure#prepareMergeRegion calls cleanMergeQualifier
> for each of the regions to merge.
> {code:java}
> for (RegionInfo ri : this.regionsToMerge) {
>   if (!catalogJanitor.cleanMergeQualifier(ri)) {
>     String msg = "Skip merging " + RegionInfo.getShortNameToLog(regionsToMerge) +
>         ", because parent " + RegionInfo.getShortNameToLog(ri) + " has a merge qualifier";
>     LOG.warn(msg);
>     throw new MergeRegionException(msg);
>   }
> }
> {code}
> If regions A and B merge to C, and regions D and E merge to F, then merging C
> and F first tries to cleanMergeQualifier for both C and F.
> catalogJanitor.cleanMergeQualifier succeeds for region C but fails for region F,
> because there are still references in region F.
> When C and F are merged again, cleanMergeQualifier runs for C and F once more,
> but MetaTableAccessor.getMergeRegions now returns the wrong parents. It uses a
> one-row-limit scan with a qualifier filter rather than an exact-row lookup, and
> since region C's merge qualifiers were already deleted, the scan skips past C's
> row and returns the next matching row, which may belong to another region.
> {code:java}
> public boolean cleanMergeQualifier(final RegionInfo region) throws IOException {
>   // Get merge regions if it is a merged region and already has merge qualifier
>   List<RegionInfo> parents = MetaTableAccessor.getMergeRegions(this.services.getConnection(),
>       region.getRegionName());
>   if (parents == null || parents.isEmpty()) {
>     // It doesn't have merge qualifier, no need to clean
>     return true;
>   }
>   return cleanMergeRegion(region, parents);
> }
>
> public static List<RegionInfo> getMergeRegions(Connection connection, byte[] regionName)
>     throws IOException {
>   return getMergeRegions(getMergeRegionsRaw(connection, regionName));
> }
>
> private static Cell[] getMergeRegionsRaw(Connection connection, byte[] regionName)
>     throws IOException {
>   Scan scan = new Scan().withStartRow(regionName).
>       setOneRowLimit().
>       readVersions(1).
>       addFamily(HConstants.CATALOG_FAMILY).
>       setFilter(new QualifierFilter(CompareOperator.EQUAL,
>         new RegexStringComparator(HConstants.MERGE_QUALIFIER_PREFIX_STR + ".*")));
>   try (Table m = getMetaHTable(connection); ResultScanner scanner = m.getScanner(scan)) {
>     // Should be only one result in this scanner if any.
>     Result result = scanner.next();
>     if (result == null) {
>       return null;
>     }
>     // Should be safe to just return all Cells found since we had filter in place.
>     // All values should be RegionInfos or something wrong.
>     return result.rawCells();
>   }
> }
> {code}
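The pitfall described above is easy to reproduce outside HBase. The sketch below is plain Java, not the HBase client API; the class, method, and region names are invented for illustration. It models meta as a sorted map and a filtered one-row-limit scan with an inclusive start row: once "region C"'s merge qualifiers are deleted, the identical lookup silently falls through to the next region's row.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Toy model of the HBASE-23044 lookup: a one-row-limit scan with an inclusive
// start row is NOT pinned to that row -- it returns the first row at or after
// the start row that survives the qualifier filter.
public class MergeQualifierScanPitfall {

  // First row key >= startRow whose qualifiers still contain one with the
  // given prefix; null if none. Mirrors the filtered setOneRowLimit() scan.
  static String scanOneRow(NavigableMap<String, Set<String>> meta,
      String startRow, String qualifierPrefix) {
    for (Map.Entry<String, Set<String>> e : meta.tailMap(startRow, true).entrySet()) {
      for (String q : e.getValue()) {
        if (q.startsWith(qualifierPrefix)) {
          return e.getKey();  // the row the scan actually lands on
        }
      }
    }
    return null;
  }

  public static void main(String[] args) {
    NavigableMap<String, Set<String>> meta = new TreeMap<>();
    meta.put("regionC", new TreeSet<>(Set.of("merge0000", "merge0001")));
    meta.put("regionF", new TreeSet<>(Set.of("merge0000", "merge0001")));

    // Before cleaning, the scan starting at regionC lands on regionC itself.
    System.out.println(scanOneRow(meta, "regionC", "merge")); // regionC

    // CatalogJanitor deletes regionC's merge qualifiers; regionF's remain
    // because regionF still has references.
    meta.get("regionC").clear();

    // The same lookup now silently lands on the NEXT matching row.
    System.out.println(scanOneRow(meta, "regionC", "merge")); // regionF

    // A safe lookup must verify the returned row is the requested region
    // (or use an exact-row read instead of a scan).
    String row = scanOneRow(meta, "regionC", "merge");
    if (row != null && !row.equals("regionC")) {
      System.out.println("wrong parents: row belongs to " + row);
    }
  }
}
```

The guard at the end is the essential point: either the caller checks that the returned row key equals the region it asked about, or the lookup is rewritten as an exact-row read so a missing row comes back empty instead of sliding to a neighbor.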



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   >