[GitHub] [hbase] virajjasani commented on issue #348: HBASE-22643 : Delete region without archiving only if regiondir is pr…

2019-06-30 Thread GitBox
virajjasani commented on issue #348: HBASE-22643 : Delete region without 
archiving only if regiondir is pr…
URL: https://github.com/apache/hbase/pull/348#issuecomment-507137438
 
 
   The test failure in the build is unrelated to this change.
   All tests related to this patch passed: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-348/1/testReport/org.apache.hadoop.hbase.backup/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22640) Random init hstore lastFlushTime

2019-06-30 Thread Bing Xiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875938#comment-16875938
 ] 

Bing Xiao commented on HBASE-22640:
---

[~stack] Our internal version has not been released yet; we will release it this 
week and see how the patch performs.

> Random init  hstore lastFlushTime
> -
>
> Key: HBASE-22640
> URL: https://issues.apache.org/jira/browse/HBASE-22640
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 3.0.0, 2.2.1
>
> Attachments: HBASE-22640-master-v1.patch
>
>
> When a region is opened, the current time is used as each HStore's last flush 
> time. If not much data is written, the memstores are not flushed by size, so 
> after flushCheckInterval they all become eligible to flush at the same moment 
> and flush together, causing concentrated IO and compaction and high request 
> latency. So, randomly initialize lastFlushTime.
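A minimal, self-contained sketch of the randomization idea described above; the names
(lastFlushTime, flushCheckInterval) follow the issue description and the example interval is
only an assumption, this is not the attached HBASE-22640 patch:

{code}
import java.util.concurrent.ThreadLocalRandom;

public class FlushTimeJitter {

  /**
   * Returns an initial "last flush time" spread uniformly over the preceding
   * flushCheckInterval milliseconds, so that stores opened at the same moment
   * do not all become eligible for the periodic flush in the same cycle.
   */
  static long randomInitialLastFlushTime(long nowMs, long flushCheckIntervalMs) {
    if (flushCheckIntervalMs <= 0) {
      return nowMs; // periodic flushing disabled; keep the plain "now" behavior
    }
    long jitter = ThreadLocalRandom.current().nextLong(flushCheckIntervalMs);
    return nowMs - jitter;
  }

  public static void main(String[] args) {
    // Assumed example interval: one hour.
    long flushCheckIntervalMs = 3_600_000L;
    System.out.println(randomInitialLastFlushTime(System.currentTimeMillis(), flushCheckIntervalMs));
  }
}
{code}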



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22403) Balance in RSGroup should consider throttling and a failure affects the whole

2019-06-30 Thread Xiaolin Ha (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875917#comment-16875917
 ] 

Xiaolin Ha commented on HBASE-22403:


Ping [~zghaobac], please help to commit this.

> Balance in RSGroup should consider throttling and a failure affects the whole
> -
>
> Key: HBASE-22403
> URL: https://issues.apache.org/jira/browse/HBASE-22403
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.2.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-22403.branch-1.001.patch, 
> HBASE-22403.branch-2.2.001.patch, HBASE-22403.branch-2.2.002.patch, 
> HBASE-22403.master.001.patch, HBASE-22403.master.002.patch, 
> HBASE-22403.master.003.patch, HBASE-22403.master.004.patch
>
>
> balanceRSGroup(groupName) executes region move plans concurrently, which 
> affects the availability of the relevant tables. And a single failing plan 
> causes the whole balance run to abort.
> As mentioned in the master balance issues HBASE-17178 and HBASE-21260.
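A minimal sketch of the throttling-and-continue idea described above (RegionPlan and its
execute call are stand-ins, not the RSGroup admin API, and this is not one of the attached
patches): execute the plans with a bounded number of in-flight moves and collect per-plan
failures instead of aborting the whole run.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThrottledBalance {

  /** Stand-in for a single region move produced by the balancer. */
  interface RegionPlan {
    String regionName();
    void execute() throws Exception;
  }

  /**
   * Executes the move plans with at most maxConcurrentMoves in flight.
   * A failing plan is recorded and skipped instead of aborting the whole run;
   * the failed plans are returned so the caller can report or retry them.
   */
  static List<RegionPlan> balance(List<RegionPlan> plans, int maxConcurrentMoves)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(maxConcurrentMoves);
    List<RegionPlan> failed = new CopyOnWriteArrayList<>();
    List<Future<?>> inFlight = new ArrayList<>();
    for (RegionPlan plan : plans) {
      inFlight.add(pool.submit(() -> {
        try {
          plan.execute();
        } catch (Exception e) {
          failed.add(plan); // remember the failure, keep balancing the rest
        }
      }));
    }
    pool.shutdown();
    for (Future<?> f : inFlight) {
      try {
        f.get();
      } catch (ExecutionException ignored) {
        // per-plan failures are already captured above
      }
    }
    return new ArrayList<>(failed);
  }
}
{code}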



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] maoling commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-06-30 Thread GitBox
maoling commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r298863765
 
 

 ##########
 File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##########
 @@ -1859,31 +1873,33 @@ private static void getReplicationZnodesDump(ZKWatcher zkw, StringBuilder sb)
     // do a ls -r on this znode
     sb.append("\n").append(replicationZnode).append(": ");
     List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-    Collections.sort(children);
-    for (String child : children) {
-      String znode = ZNodePaths.joinZNode(replicationZnode, child);
-      if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-        appendPeersZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-        appendRSZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-        appendHFileRefsZnodes(zkw, znode, sb);
+    if (children != null) {
+      Collections.sort(children);
+      for (String child : children) {
+        String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+        if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+          appendPeersZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+          appendRSZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+          appendHFileRefsZnodes(zkw, zNode, sb);
+        }
       }
     }
   }
 
   private static void appendHFileRefsZnodes(ZKWatcher zkw, String hfileRefsZnode,
       StringBuilder sb) throws KeeperException {
     sb.append("\n").append(hfileRefsZnode).append(": ");
-    for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) {
-      String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZnode);
-      sb.append("\n").append(znodeToProcess).append(": ");
-      List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, znodeToProcess);
-      int size = peerHFileRefsZnodes.size();
-      for (int i = 0; i < size; i++) {
-        sb.append(peerHFileRefsZnodes.get(i));
-        if (i != size - 1) {
-          sb.append(", ");
+    final List<String> hFileRefChildrenNoWatchList =
+        ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+    if (hFileRefChildrenNoWatchList != null) {
+      for (String peerIdZNode : hFileRefChildrenNoWatchList) {

 Review comment:
   - To avoid nesting too deeply, consider an early return:
   ```
   if (hFileRefChildrenNoWatchList == null) {
     return;
   }
   ```
   - Otherwise, LGTM.
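   For context, this is roughly how the method reads with that guard applied; a sketch that
   assumes the surrounding ZKUtil class and its imports, with the inner loop mirroring the
   pre-patch body shown in the diff above (not necessarily the final committed version):
   ```java
   private static void appendHFileRefsZnodes(ZKWatcher zkw, String hfileRefsZnode,
       StringBuilder sb) throws KeeperException {
     sb.append("\n").append(hfileRefsZnode).append(": ");
     final List<String> hFileRefChildrenNoWatchList =
         ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
     if (hFileRefChildrenNoWatchList == null) {
       return; // no hfile-ref children to dump; skip the loop entirely
     }
     for (String peerIdZNode : hFileRefChildrenNoWatchList) {
       String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZNode);
       sb.append("\n").append(znodeToProcess).append(": ");
       List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, znodeToProcess);
       int size = peerHFileRefsZnodes.size();
       for (int i = 0; i < size; i++) {
         sb.append(peerHFileRefsZnodes.get(i));
         if (i != size - 1) {
           sb.append(", ");
         }
       }
     }
   }
   ```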
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22631) assign failed may make gced parent region appear again !!!

2019-06-30 Thread yuhuiyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuhuiyang updated HBASE-22631:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> assign failed may make gced parent region appear again !!!
> --
>
> Key: HBASE-22631
> URL: https://issues.apache.org/jira/browse/HBASE-22631
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.1.1
>Reporter: yuhuiyang
>Priority: Major
> Attachments: HBASE-22631-branch-2.1.01.patch, assignProcedure.txt
>
>
> When I assign a region A, the process is as follows:
> step 1: A is assigned to rs1, and rs1 fails to open it.
> step 2: AssignProcedure handles the failure (handleFailure).
> step 3: A is assigned to rs2, and rs2 succeeds in opening it.
> That is the normal flow. However, if rs1 is restarted after region A has been 
> split and the GCRegionProcedure has succeeded, region A appears again!
> The reason is that region A is not removed from the serverMap correctly when 
> AssignProcedure handles the failure: regionNode.offline() sets the 
> regionNode's regionLocation to null and its state to OFFLINE, so the 
> subsequent env.getAssignmentManager().undoRegionAsOpening(regionNode) does 
> nothing. When the rs1 restart event later triggers a ServerCrashProcedure, it 
> reads the regions from the serverMap, finds region A, assigns it again, and 
> the HDFS directory is recreated.
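To make the ordering problem easier to follow, here is a hedged, self-contained sketch with
stand-in types (this is not HBase's AssignmentManager code): because offline() clears the
location first, the subsequent undo step finds nothing to remove and the stale serverMap
entry survives.

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class StaleServerMapDemo {

  // Stand-in for the AssignmentManager's per-server bookkeeping.
  static final Map<String, Set<String>> serverMap = new HashMap<>();

  static class RegionNode {
    final String region;
    String regionLocation; // server currently opening/hosting the region
    RegionNode(String region, String server) {
      this.region = region;
      this.regionLocation = server;
    }
    void offline() {
      regionLocation = null; // clears the location, as described in the report
    }
  }

  /** Mirrors the undo step: removal is keyed by regionLocation. */
  static void undoRegionAsOpening(RegionNode node) {
    if (node.regionLocation == null) {
      return; // location already cleared, nothing removed; stale entry survives
    }
    serverMap.get(node.regionLocation).remove(node.region);
  }

  public static void main(String[] args) {
    RegionNode a = new RegionNode("A", "rs1");
    serverMap.computeIfAbsent("rs1", s -> new HashSet<>()).add("A");

    // Failure handling in the problematic order: offline first, undo second.
    a.offline();
    undoRegionAsOpening(a);

    // rs1 still appears to host A; a later ServerCrashProcedure for rs1 would reassign it.
    System.out.println("regions recorded on rs1 = " + serverMap.get("rs1")); // prints [A]
  }
}
{code}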



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #348: HBASE-22643 : Delete region without archiving only if regiondir is pr…

2019-06-30 Thread GitBox
Apache-HBase commented on issue #348: HBASE-22643 : Delete region without 
archiving only if regiondir is pr…
URL: https://github.com/apache/hbase/pull/348#issuecomment-507076754
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 | Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 239 | master passed |
   | +1 | compile | 52 | master passed |
   | +1 | checkstyle | 70 | master passed |
   | +1 | shadedjars | 265 | branch has no errors when building our shaded downstream artifacts. |
   | +1 | findbugs | 209 | master passed |
   | +1 | javadoc | 34 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 242 | the patch passed |
   | +1 | compile | 52 | the patch passed |
   | +1 | javac | 52 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 267 | patch has no errors when building our shaded downstream artifacts. |
   | +1 | hadoopcheck | 733 | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. |
   | +1 | findbugs | 215 | the patch passed |
   | +1 | javadoc | 32 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 15029 | hbase-server in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 17900 | |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-348/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/348 |
   | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux f10f5b1d768d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 0c8dc5d97e |
   | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-348/1/artifact/out/patch-unit-hbase-server.txt |
   | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-348/1/testReport/ |
   | Max. process+thread count | 4692 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-348/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #315: HBASE-22594 Clean up for backup examples

2019-06-30 Thread GitBox
Apache-HBase commented on issue #315: HBASE-22594 Clean up for backup examples
URL: https://github.com/apache/hbase/pull/315#issuecomment-507066235
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 | Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ master Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 323 | master passed |
   | +1 | compile | 82 | master passed |
   | +1 | checkstyle | 168 | master passed |
   | +1 | shadedjars | 358 | branch has no errors when building our shaded downstream artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: hbase-checkstyle |
   | +1 | findbugs | 294 | master passed |
   | +1 | javadoc | 57 | master passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 327 | the patch passed |
   | +1 | compile | 81 | the patch passed |
   | +1 | javac | 81 | the patch passed |
   | +1 | checkstyle | 157 | root: The patch generated 0 new + 0 unchanged - 26 fixed = 0 total (was 26) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedjars | 365 | patch has no errors when building our shaded downstream artifacts. |
   | +1 | hadoopcheck | 983 | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: hbase-checkstyle |
   | +1 | findbugs | 296 | the patch passed |
   | +1 | javadoc | 56 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 12 | hbase-checkstyle in the patch passed. |
   | -1 | unit | 16796 | hbase-server in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 20939 | |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hbase.master.procedure.TestSCPWithReplicas |
   |   | hadoop.hbase.master.TestMasterShutdown |
   |   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/7/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/315 |
   | Optional Tests | dupname asflicense checkstyle javac javadoc unit xml findbugs shadedjars hadoopcheck hbaseanti compile |
   | uname | Linux 38c7eddde7cc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 0c8dc5d97e |
   | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/7/artifact/out/patch-unit-hbase-server.txt |
   | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/7/testReport/ |
   | Max. process+thread count | 4915 (vs. ulimit of 1) |
   | modules | C: hbase-checkstyle hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] virajjasani commented on issue #345: HBASE-22638 : Zookeeper Utility enhancements

2019-06-30 Thread GitBox
virajjasani commented on issue #345: HBASE-22638 : Zookeeper Utility 
enhancements
URL: https://github.com/apache/hbase/pull/345#issuecomment-507057945
 
 
   Please review @HorizonNet 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] virajjasani commented on issue #333: [HBASE-22606] : BucketCache additional tests

2019-06-30 Thread GitBox
virajjasani commented on issue #333: [HBASE-22606] : BucketCache additional 
tests
URL: https://github.com/apache/hbase/pull/333#issuecomment-507057814
 
 
   Please review @saintstack @wchevreuil 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] virajjasani opened a new pull request #348: HBASE-22643 : Delete region without archiving only if regiondir is pr…

2019-06-30 Thread GitBox
virajjasani opened a new pull request #348: HBASE-22643 : Delete region without 
archiving only if regiondir is pr…
URL: https://github.com/apache/hbase/pull/348
 
 
   …esent


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-22643) Delete region without archiving only if regiondir is present

2019-06-30 Thread Viraj Jasani (JIRA)
Viraj Jasani created HBASE-22643:


 Summary: Delete region without archiving only if regiondir is 
present
 Key: HBASE-22643
 URL: https://issues.apache.org/jira/browse/HBASE-22643
 Project: HBase
  Issue Type: Improvement
  Components: HFile
Affects Versions: 3.0.0, 2.3.0, 2.3.1, 1.3.6, 1.4.11
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Add a condition so that the region is deleted without archiving only if the regionDir is present.
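A minimal sketch of the guard described above, using Hadoop's FileSystem API with an assumed
helper name (the actual change to the HBase code may differ):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class RegionDirCleaner {

  /**
   * Deletes the region directory without archiving, but only if it actually
   * exists; returns whether anything was deleted, so a missing directory is
   * not treated as a failed delete.
   */
  static boolean deleteRegionWithoutArchiving(FileSystem fs, Path regionDir)
      throws IOException {
    if (!fs.exists(regionDir)) {
      return false; // nothing on disk; skip the recursive delete call
    }
    return fs.delete(regionDir, true);
  }
}
{code}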



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-06-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875787#comment-16875787
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #163 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/163/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/163//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/163//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/163//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal, we have the following:
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
>     long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum,
>     boolean updateMetrics) throws IOException {
>   // ...
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with BBPool (offheap).
>   byte[] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>       onDiskSizeWithHeader - preReadHeaderSize, true, offset + preReadHeaderSize, pread);
>   if (headerBuf != null) {
>     // ...
>   }
>   // ...
> }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[] and then copy that on-heap byte[] to the off-heap bucket cache 
> asynchronously. In my 100% get performance test I also observed fairly 
> frequent young GCs; the largest memory footprint in the young generation 
> should be these on-heap block byte[] buffers.
> In fact, we can read the HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young GC. We did not implement this before because 
> the older HDFS client had no ByteBuffer read interface, but 2.7+ supports it, 
> so we can fix this now, I think.
> I will provide a patch and some performance comparison for this.
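A minimal sketch of the direct-ByteBuffer read this describes, assuming the underlying stream
supports Hadoop's ByteBufferReadable (HDFS 2.7+; the call throws UnsupportedOperationException
otherwise). It is illustrative only, not the attached patch:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferBlockReader {

  /**
   * Reads len bytes starting at offset into the supplied buffer (which may be
   * a direct/off-heap buffer), avoiding the intermediate on-heap byte[] that
   * the current readBlockDataInternal allocates.
   */
  static void readBlock(FSDataInputStream in, long offset, ByteBuffer buf, int len)
      throws IOException {
    buf.clear();
    buf.limit(len);
    in.seek(offset);
    while (buf.hasRemaining()) {
      int n = in.read(buf); // ByteBufferReadable path: no on-heap copy
      if (n < 0) {
        throw new IOException("Premature EOF while reading block at offset " + offset);
      }
    }
  }
}
{code}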



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)