[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
Apache-HBase commented on pull request #1781: URL: https://github.com/apache/hbase/pull/1781#issuecomment-634445710 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 54s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +0 :ok: | mvndep | 2m 25s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 8m 51s | branch-1 passed | | +1 :green_heart: | compile | 1m 29s | branch-1 passed with JDK v1.8.0_252 | | +1 :green_heart: | compile | 1m 36s | branch-1 passed with JDK v1.7.0_262 | | +1 :green_heart: | checkstyle | 2m 19s | branch-1 passed | | +1 :green_heart: | shadedjars | 4m 8s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 22s | branch-1 passed with JDK v1.8.0_252 | | +1 :green_heart: | javadoc | 1m 36s | branch-1 passed with JDK v1.7.0_262 | | +0 :ok: | spotbugs | 3m 46s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 5m 27s | branch-1 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 36s | the patch passed | | +1 :green_heart: | compile | 1m 26s | the patch passed with JDK v1.8.0_252 | | +1 :green_heart: | javac | 1m 26s | the patch passed | | +1 :green_heart: | compile | 1m 36s | the patch passed with JDK v1.7.0_262 | | +1 :green_heart: | javac | 1m 36s | the patch passed | | +1 :green_heart: | checkstyle | 2m 38s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 3m 54s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 6m 52s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 1m 12s | the patch passed with JDK v1.8.0_252 | | +1 :green_heart: | javadoc | 1m 29s | the patch passed with JDK v1.7.0_262 | | +1 :green_heart: | findbugs | 5m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 33s | hbase-hadoop-compat in the patch passed. | | +1 :green_heart: | unit | 0m 46s | hbase-hadoop2-compat in the patch passed. | | +1 :green_heart: | unit | 136m 25s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 1m 19s | The patch does not generate ASF License warnings. 
| | | | 202m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1781 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux fb91fea02627 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh | | git revision | branch-1 / 9f12ef0 | | Default Java | 1.7.0_262 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/2/testReport/ | | Max. process+thread count | 4749 (vs. ulimit of 1) | | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/2/console | | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1764: HBASE-24420 Avoid Meaningless Retry Attempts in Unrecoverable Failure
Apache-HBase commented on pull request #1764: URL: https://github.com/apache/hbase/pull/1764#issuecomment-634442764 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 7s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 48s | master passed | | +1 :green_heart: | compile | 0m 57s | master passed | | +1 :green_heart: | shadedjars | 6m 0s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | +1 :green_heart: | compile | 0m 58s | the patch passed | | +1 :green_heart: | javac | 0m 58s | the patch passed | | +1 :green_heart: | shadedjars | 6m 2s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 35s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 197m 2s | hbase-server in the patch passed. | | | | 222m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1764 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 1a20b026a643 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a9205f8f4d | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/testReport/ | | Max. process+thread count | 2857 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1774: HBASE-24389 Introduce a new master rpc service to locate meta region through root region
Apache-HBase commented on pull request #1774: URL: https://github.com/apache/hbase/pull/1774#issuecomment-634431891 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +0 :ok: | prototool | 0m 2s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 40s | master passed | | +1 :green_heart: | checkstyle | 2m 5s | master passed | | +1 :green_heart: | spotbugs | 6m 47s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 26s | the patch passed | | -0 :warning: | checkstyle | 0m 32s | hbase-client: The patch generated 3 new + 226 unchanged - 1 fixed = 229 total (was 227) | | -0 :warning: | checkstyle | 1m 9s | hbase-server: The patch generated 4 new + 264 unchanged - 1 fixed = 268 total (was 265) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 40s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | hbaseprotoc | 2m 45s | the patch passed | | +1 :green_heart: | spotbugs | 9m 33s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 50s | The patch does not generate ASF License warnings. 
| | | | 54m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1774 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle cc hbaseprotoc prototool | | uname | Linux c9b86f8c4807 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 476cb16232 | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-protocol-shaded hbase-client hbase-zookeeper hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/4/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] anoopsjohn commented on a change in pull request #1783: HBASE-24436 The store file open and close thread pool should be share…
anoopsjohn commented on a change in pull request #1783: URL: https://github.com/apache/hbase/pull/1783#discussion_r430859985 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java ## @@ -695,6 +695,7 @@ void sawNoSuchFamily() { private RegionCoprocessorHost coprocessorHost; private TableDescriptor htableDescriptor = null; + private ThreadPoolExecutor storeFileOpenAndCloseThreadPool; Review comment: If I understand the JIRA correctly, the case you are trying to solve is this: one region with, say, 2 stores, where Store1 has many more files than Store2. Say the configured number of threads for the open pool is 10. Today it creates 2 pools, one per store, with 5 threads each. Store2 finishes quickly, but Store1 takes much longer. With a single shared pool of 10 threads, the overall time for opening both stores would be lower. Is my understanding correct?
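The scenario anoopsjohn describes can be sketched numerically. The following toy makespan model (hypothetical file counts and unit open costs, not HBase code) shows why one shared pool finishes sooner than two half-sized per-store pools:

```java
// Toy model of the HBASE-24436 discussion: opening N store files with
// unit-cost tasks on a fixed number of threads. File counts are made up.
public class PoolSharingModel {
    /** Makespan (time units) to run `tasks` unit-cost tasks on `threads` threads. */
    static int makespan(int tasks, int threads) {
        return (tasks + threads - 1) / threads; // ceiling division
    }

    public static void main(String[] args) {
        int totalThreads = 10;
        int store1Files = 18, store2Files = 2;

        // Per-store pools: the 10 threads are split 5/5, so the region open
        // waits on the slower store.
        int perStore = Math.max(makespan(store1Files, totalThreads / 2),
                                makespan(store2Files, totalThreads / 2));

        // Shared pool: all 20 files queue into one 10-thread pool.
        int shared = makespan(store1Files + store2Files, totalThreads);

        System.out.println("per-store pools: " + perStore + ", shared pool: " + shared);
        // per-store pools: 4, shared pool: 2
    }
}
```

With these (invented) numbers the shared pool halves the open time, matching the intuition in the comment: idle threads from the fast store help drain the slow store's queue.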
[jira] [Commented] (HBASE-24379) CatalogJanitor misreports region holes when there are actually overlaps.
[ https://issues.apache.org/jira/browse/HBASE-24379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117394#comment-17117394 ] Hudson commented on HBASE-24379: Results for branch branch-2.3 [build #106 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > CatalogJanitor misreports region holes when there are actually overlaps. > - > > Key: HBASE-24379 > URL: https://issues.apache.org/jira/browse/HBASE-24379 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > We found a case where there is actually an overlap, but a region hole is > reported. > r1: [aa, bb), r2: [cc, dd), r3: [a, cc) > > In this case, there are only overlaps from "a" to "d". However, hole (r1, r2) > is reported. -- This message was sent by Atlassian Jira (v8.3.4#803005)
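The misreport in this issue can be illustrated with a toy reconstruction (this is a sketch, not the actual CatalogJanitor code): comparing only adjacent regions sorted by start key flags a "hole" between r1 [aa, bb) and r2 [cc, dd), while tracking the highest end key seen so far shows the keyspace has no hole there, because r3 [a, cc) covers the gap.

```java
import java.util.Arrays;
import java.util.Comparator;

// Toy illustration of the HBASE-24379 misreport. Each region is
// {startKey, endKey}; both checks expect input sorted by start key.
public class RegionCheck {
    // Naive check: looks only at the immediately preceding region.
    static boolean adjacentHole(String[][] regions) {
        for (int i = 1; i < regions.length; i++) {
            if (regions[i - 1][1].compareTo(regions[i][0]) < 0) {
                return true; // previous region ends before the next starts
            }
        }
        return false;
    }

    // Coverage check: a hole exists only if no region seen so far
    // reaches the current region's start key.
    static boolean coverageHole(String[][] regions) {
        String maxEnd = regions[0][1];
        for (int i = 1; i < regions.length; i++) {
            if (maxEnd.compareTo(regions[i][0]) < 0) {
                return true;
            }
            if (regions[i][1].compareTo(maxEnd) > 0) {
                maxEnd = regions[i][1];
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // The issue's example: r3 [a, cc), r1 [aa, bb), r2 [cc, dd).
        String[][] regions = {{"a", "cc"}, {"aa", "bb"}, {"cc", "dd"}};
        Arrays.sort(regions, Comparator.comparing(r -> r[0]));
        System.out.println(adjacentHole(regions)); // true: the misreported hole
        System.out.println(coverageHole(regions)); // false: r3 covers bb..cc
    }
}
```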
[jira] [Commented] (HBASE-24416) RegionNormalizer splitting region should not be limited by hbase.normalizer.min.region.count
[ https://issues.apache.org/jira/browse/HBASE-24416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117393#comment-17117393 ] Hudson commented on HBASE-24416: Results for branch branch-2.3 [build #106 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionNormalizer splitting region should not be limited by > hbase.normalizer.min.region.count > --- > > Key: HBASE-24416 > URL: https://issues.apache.org/jira/browse/HBASE-24416 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1 >Reporter: Sun Xin >Assignee: Sun Xin >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6 > > > In method computePlanForTable of SimpleRegionNormalizer: > we will skip splitting a region if the number of regions in the table is less > than hbase.normalizer.min.region.count, even if there is a huge region in the > table. > {code:java} > ... 
> return null; > } > ... > // get region split plan > if (splitEnabled) { > List<NormalizationPlan> splitPlans = getSplitNormalizationPlan(table); > if (splitPlans != null) { > plans.addAll(splitPlans); > } > } > {code} >
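The flow the reporter quotes can be condensed into a sketch like this (hypothetical sizes and split heuristic, not the real SimpleRegionNormalizer source): the min-region-count guard returns before split planning ever runs, so a small table keeps its oversized region.

```java
import java.util.ArrayList;
import java.util.List;

// Condensed sketch of the computePlanForTable flow quoted above.
// The early return fires before any split plan is considered.
public class NormalizerSketch {
    static List<String> computePlans(long[] regionSizesMb, int minRegionCount,
                                     boolean splitEnabled) {
        List<String> plans = new ArrayList<>();
        if (regionSizesMb.length < minRegionCount) {
            return plans; // early return: the split logic below is unreachable
        }
        long total = 0;
        for (long s : regionSizesMb) {
            total += s;
        }
        long avg = total / regionSizesMb.length;
        if (splitEnabled) {
            for (int i = 0; i < regionSizesMb.length; i++) {
                if (regionSizesMb[i] > 2 * avg) { // "huge region" heuristic
                    plans.add("SPLIT region " + i);
                }
            }
        }
        return plans;
    }

    public static void main(String[] args) {
        // A 100 GB region in a 2-region table: with a min-region-count of 3,
        // no split plan is ever produced.
        System.out.println(computePlans(new long[]{102400, 128}, 3, true)); // []
    }
}
```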
[jira] [Commented] (HBASE-24437) Flaky test, TestLocalRegionOnTwoFileSystems#testFlushAndCompact
[ https://issues.apache.org/jira/browse/HBASE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117395#comment-17117395 ] Hudson commented on HBASE-24437: Results for branch branch-2.3 [build #106 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/106/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Flaky test, TestLocalRegionOnTwoFileSystems#testFlushAndCompact > --- > > Key: HBASE-24437 > URL: https://issues.apache.org/jira/browse/HBASE-24437 > Project: HBase > Issue Type: Bug > Components: meta, test >Reporter: Huaxiang Sun >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > Attachments: > 0001-HBASE-24437-Flaky-test-TestLocalRegionOnTwoFileSyste.patch > > > {code:java} > precommit checks / yetus jdk8 Hadoop2 checks / > org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems.testFlushAndCompactFailing > for the past 1 build (Since #1 )Took 17 sec.Error MessageWaiting timed out > after [15,000] msecStacktracejava.lang.AssertionError: Waiting timed out > after [15,000] msec > at > org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems.testFlushAndCompact(TestLocalRegionOnTwoFileSystems.java:178) > Standard OutputFormatting using clusterid: testClusterID > Standard Error2020-05-26 00:26:29,624 INFO [main] > hbase.HBaseClassTestRule(94): Test class > org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems timeout: > 13 mins > 2020-05-26 00:26:30,158 DEBUG [main] hbase.HBaseTestingUtility(348): Setting > hbase.rootdir to > /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1776/yetus-jdk8-hadoop2-check/src/hbase-server/target/test-data/f6a36278-321e-2b82-017a-bbe71410a0cf > 2020-05-26 00:26:30,231 INFO [Time-limited test] > hbase.HBaseTestingUtility(1114): Starting up minicluster with option: > StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, > rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, > createRootDir=false, createWALDir=false} > 2020-05-26 00:26:30,232 INFO [Time-limited test] > hbase.HBaseZKTestingUtility(83): Created new mini-cluster data directory: > 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1776/yetus-jdk8-hadoop2-check/src/hbase-server/target/test-data/f6a36278-321e-2b82-017a-bbe71410a0cf/cluster_ddea42b7-f6f1-92fe-0685-674774d0fce2, > deleteOnExit=true > 2020-05-26 00:26:30,233 INFO [Time-limited test] > hbase.HBaseTestingUtility(1128): STARTING DFS {code}
[GitHub] [hbase] Apache-HBase commented on pull request #1764: HBASE-24420 Avoid Meaningless Retry Attempts in Unrecoverable Failure
Apache-HBase commented on pull request #1764: URL: https://github.com/apache/hbase/pull/1764#issuecomment-634423961 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 26s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 18s | master passed | | +1 :green_heart: | compile | 1m 3s | master passed | | +1 :green_heart: | shadedjars | 5m 43s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 42s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 1m 4s | the patch passed | | +1 :green_heart: | javac | 1m 4s | the patch passed | | +1 :green_heart: | shadedjars | 5m 41s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 39s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 128m 57s | hbase-server in the patch passed. 
| | | | 155m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1764 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux fe5dea073174 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a9205f8f4d | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/testReport/ | | Max. process+thread count | 4366 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations
Apache-HBase commented on pull request #1786: URL: https://github.com/apache/hbase/pull/1786#issuecomment-634422331 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 42s | master passed | | +1 :green_heart: | compile | 1m 8s | master passed | | +1 :green_heart: | shadedjars | 6m 21s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 42s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 24s | the patch passed | | +1 :green_heart: | compile | 1m 9s | the patch passed | | +1 :green_heart: | javac | 1m 9s | the patch passed | | +1 :green_heart: | shadedjars | 6m 20s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 192m 0s | hbase-server in the patch passed. 
| | | | 219m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1786 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9505a86dd57e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a9205f8f4d | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/1/testReport/ | | Max. process+thread count | 3266 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1786/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-connectors] saintstack commented on a change in pull request #68: HBASE-20999 Move hbase-REST to new hbase-connectors repository
saintstack commented on a change in pull request #68: URL: https://github.com/apache/hbase-connectors/pull/68#discussion_r430850242 ## File path: hbase-connectors-protocol-shaded/pom.xml ## @@ -0,0 +1,275 @@ + Review comment: License is missing.
[jira] [Updated] (HBASE-20999) Move hbase-REST to new hbase-connectors repository
[ https://issues.apache.org/jira/browse/HBASE-20999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-20999: -- Fix Version/s: 3.0.0-alpha-1 > Move hbase-REST to new hbase-connectors repository > -- > > Key: HBASE-20999 > URL: https://issues.apache.org/jira/browse/HBASE-20999 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, REST >Reporter: Michael Stack >Assignee: zhuqi >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > Move hbase-rest to the new hbase-connectors repository. See parent issue for > context and locale.
[jira] [Updated] (HBASE-20999) Move hbase-REST to new hbase-connectors repository
[ https://issues.apache.org/jira/browse/HBASE-20999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-20999: -- Fix Version/s: 2.4.0 > Move hbase-REST to new hbase-connectors repository > -- > > Key: HBASE-20999 > URL: https://issues.apache.org/jira/browse/HBASE-20999 > Project: HBase > Issue Type: Sub-task > Components: hbase-connectors, REST >Reporter: Michael Stack >Assignee: zhuqi >Priority: Major > Fix For: 2.4.0 > > > Move hbase-rest to the new hbase-connectors repository. See parent issue for > context and locale.
[GitHub] [hbase-connectors] CodingBen opened a new pull request #68: HBASE-20999 Move hbase-REST to new hbase-connectors repository
CodingBen opened a new pull request #68: URL: https://github.com/apache/hbase-connectors/pull/68
[GitHub] [hbase] Apache9 merged pull request #1787: HBASE-24437 Addendum just start mini dfs cluster, and make the log ro…
Apache9 merged pull request #1787: URL: https://github.com/apache/hbase/pull/1787
[GitHub] [hbase] Apache9 merged pull request #1746: HBASE-24388 Store the locations of meta regions in master local store
Apache9 merged pull request #1746: URL: https://github.com/apache/hbase/pull/1746
[GitHub] [hbase] saintstack commented on a change in pull request #1746: HBASE-24388 Store the locations of meta regions in master local store
saintstack commented on a change in pull request #1746: URL: https://github.com/apache/hbase/pull/1746#discussion_r430831105 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java ## @@ -1403,6 +1403,21 @@ private static void deleteFromMetaTable(final Connection connection, final List< } } + public static Delete removeRegionReplica(byte[] metaRow, int replicaIndexToDeleteFrom, +int numReplicasToRemove) { +int absoluteIndex = replicaIndexToDeleteFrom + numReplicasToRemove; +long now = EnvironmentEdgeManager.currentTime(); +Delete deleteReplicaLocations = new Delete(metaRow); +for (int i = replicaIndexToDeleteFrom; i < absoluteIndex; i++) { + deleteReplicaLocations.addColumns(getCatalogFamily(), getServerColumn(i), now); + deleteReplicaLocations.addColumns(getCatalogFamily(), getSeqNumColumn(i), now); + deleteReplicaLocations.addColumns(getCatalogFamily(), getStartCodeColumn(i), now); + deleteReplicaLocations.addColumns(getCatalogFamily(), getServerNameColumn(i), now); + deleteReplicaLocations.addColumns(getCatalogFamily(), getRegionStateColumn(i), now); Review comment: Man. Region Replicas are messy in hbase:meta. Not your fault. For later. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -866,8 +874,50 @@ protected void initializeZKBasedSystemTrackers() // Will be overriden in test to inject customized AssignmentManager @VisibleForTesting - protected AssignmentManager createAssignmentManager(MasterServices master) { -return new AssignmentManager(master); + protected AssignmentManager createAssignmentManager(MasterServices master, +LocalStore localStore) { +return new AssignmentManager(master, localStore); Review comment: Good. Was going to suggest AM ask the Master for its localStore but that probably TMI for the AM to know of. This is better separation of concerns. Good. 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/store/LocalStore.java ## @@ -87,6 +90,10 @@ public static final byte[] PROC_FAMILY = Bytes.toBytes("proc"); private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME) + .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY) + .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true) + .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL) + .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build()) Review comment: Yeah, the ROW_INDEX_V1 is a good change. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java ## @@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException { // Start the Assignment Thread startAssignmentThread(); -// load meta region state -ZKWatcher zkw = master.getZooKeeper(); Review comment: Radical. No ZKW in AM. Good.
[GitHub] [hbase] clarax commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
clarax commented on a change in pull request #1781: URL: https://github.com/apache/hbase/pull/1781#discussion_r430772485 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java ## @@ -592,9 +593,9 @@ public void markRegionsRecovering(ServerName server, Set userRegion * @return whether log is replaying */ public boolean isLogReplaying() { -if (server.getCoordinatedStateManager() == null) return false; -return ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) -.getSplitLogManagerCoordination().isReplaying(); +CoordinatedStateManager m = server.getCoordinatedStateManager(); +if (m == null) return false; Review comment: Checkstyle conflicts need to be fixed by adding braces. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java ## @@ -1780,4 +1782,47 @@ public static void checkShortCircuitReadBufferSize(final Configuration conf) { int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize); conf.setIfUnset(dfsKey, Integer.toString(hbaseSize)); } + + /** + * @param c + * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be found or not on hdfs. + * @throws IOException Review comment: can be removed ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java ## @@ -1780,4 +1782,47 @@ public static void checkShortCircuitReadBufferSize(final Configuration conf) { int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize); conf.setIfUnset(dfsKey, Integer.toString(hbaseSize)); } + + /** + * @param c Review comment: can be removed. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java ## @@ -1780,4 +1782,47 @@ public static void checkShortCircuitReadBufferSize(final Configuration conf) { int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize); conf.setIfUnset(dfsKey, Integer.toString(hbaseSize)); } + + /** + * @param c + * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be found or not on hdfs. 
+ * @throws IOException + */ + public static DFSHedgedReadMetrics getDFSHedgedReadMetrics(final Configuration c) + throws IOException { +if (!isHDFS(c)) return null; Review comment: Checkstyle conflict. Need braces. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java ## @@ -40,7 +41,9 @@ import org.apache.hadoop.hbase.io.hfile.CacheStats; import org.apache.hadoop.hbase.wal.WALProvider; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; +import org.apache.hadoop.hdfs.DFSHedgedReadMetrics; Review comment: This is from 2.4. https://issues.apache.org/jira/browse/HBASE-7509 Should be good. ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java ## @@ -435,4 +440,146 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception { } } + /** + * Ugly test that ensures we can get at the hedged read counters in dfsclient. + * Does a bit of preading with hedged reads enabled using code taken from hdfs TestPread. + * @throws Exception Review comment: Not necessary. ## File path: hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java ## @@ -504,6 +504,11 @@ public void getMetrics(MetricsCollector metricsCollector, boolean all) { rsWrap.getCompactedCellsSize()) .addCounter(Interns.info(MAJOR_COMPACTED_CELLS_SIZE, MAJOR_COMPACTED_CELLS_SIZE_DESC), rsWrap.getMajorCompactedCellsSize()) + + .addCounter(Interns.info(HEDGED_READS, HEDGED_READS_DESC), rsWrap.getHedgedReadOps()) + .addCounter(Interns.info(HEDGED_READ_WINS, HEDGED_READ_WINS_DESC), + rsWrap.getHedgedReadWins()) Review comment: Validated from HBASE15550. LGTM. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
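The review comments above make two mechanical points: checkstyle wants braces even around single-statement `if`s, and the hedged-read accessors must tolerate a missing `DFSHedgedReadMetrics` instance (the FileSystem may not be HDFS). A minimal self-contained sketch of both, where `StubMetrics` is a hypothetical stand-in for the real HDFS class, not HBase code:

```java
// Sketch of the null-guarded accessor pattern requested in the review,
// written in the braced style checkstyle enforces. StubMetrics is a
// hypothetical stand-in for org.apache.hadoop.hdfs.DFSHedgedReadMetrics.
public class HedgedMetricsHolder {
  static final class StubMetrics {
    long hedgedReadOps;
    long getHedgedReadOps() { return hedgedReadOps; }
  }

  // In the real code this is fetched via FSUtils and is null when the
  // underlying FileSystem is not HDFS.
  private final StubMetrics metrics;

  HedgedMetricsHolder(StubMetrics metrics) {
    this.metrics = metrics;
  }

  public long getHedgedReadOps() {
    // Braces even for a single statement, per checkstyle.
    if (metrics == null) {
      return 0;
    }
    return metrics.getHedgedReadOps();
  }

  public static void main(String[] args) {
    HedgedMetricsHolder none = new HedgedMetricsHolder(null);
    StubMetrics m = new StubMetrics();
    m.hedgedReadOps = 7;
    HedgedMetricsHolder some = new HedgedMetricsHolder(m);
    System.out.println(none.getHedgedReadOps()); // 0 when not on HDFS
    System.out.println(some.getHedgedReadOps()); // 7
  }
}
```

Returning 0 instead of null-propagating keeps the metrics sink simple: a region server not backed by HDFS just reports a flat zero counter.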
[GitHub] [hbase] bharathv commented on a change in pull request #1755: HBASE-24069 Provide an ExponentialBackOffPolicy sleep between failed …
bharathv commented on a change in pull request #1755: URL: https://github.com/apache/hbase/pull/1755#discussion_r430767349 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -1972,6 +1975,13 @@ private void unassign(final HRegionInfo region, final RegionState state, final int versionOfClosingNode, final ServerName dest, final boolean transitionInZK, final ServerName src) { +String encodedName = region.getEncodedName(); +AtomicInteger failedCloseCount = failedCloseTracker.get(encodedName); +if (failedCloseCount == null) { + failedCloseCount = new AtomicInteger(); + failedCloseTracker.put(encodedName, failedCloseCount); Review comment: Aren't all the codepaths reaching this point, expected to take an exclusive lock on the region.encodedName()? If so, wondering if we should worry about the non-thread-safe access for this map. I checked all the callers, all except one path in forceRegionStateToOffline() follow this pattern, we should probably fix that. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -1997,80 +2007,76 @@ private void unassign(final HRegionInfo region, } try { // Send CLOSE RPC -if (serverManager.sendRegionClose(server, region, - versionOfClosingNode, dest, transitionInZK)) { - LOG.debug("Sent CLOSE to " + server + " for region " + -region.getRegionNameAsString()); +if (serverManager.sendRegionClose(server, region, versionOfClosingNode, dest, + transitionInZK)) { + LOG.debug("Sent CLOSE to " + server + " for region " + region.getRegionNameAsString()); if (useZKForAssignment && !transitionInZK && state != null) { // Retry to make sure the region is // closed so as to avoid double assignment. -unassign(region, state, versionOfClosingNode, - dest, transitionInZK, src); +unassign(region, state, versionOfClosingNode, dest, transitionInZK, src); } return; } // This never happens. Currently regionserver close always return true. 
// Todo; this can now happen (0.96) if there is an exception in a coprocessor -LOG.warn("Server " + server + " region CLOSE RPC returned false for " + - region.getRegionNameAsString()); +LOG.warn("Server " + server + " region CLOSE RPC returned false for " ++ region.getRegionNameAsString()); } catch (Throwable t) { long sleepTime = 0; Configuration conf = this.server.getConfiguration(); if (t instanceof RemoteException) { - t = ((RemoteException)t).unwrapRemoteException(); + t = ((RemoteException) t).unwrapRemoteException(); } boolean logRetries = true; -if (t instanceof RegionServerAbortedException -|| t instanceof RegionServerStoppedException +if (t instanceof RegionServerAbortedException || t instanceof RegionServerStoppedException || t instanceof ServerNotRunningYetException) { // RS is aborting or stopping, we cannot offline the region since the region may need - // to do WAL recovery. Until we see the RS expiration, we should retry. + // to do WAL recovery. Until we see the RS expiration, we should retry. sleepTime = 1L + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); } else if (t instanceof NotServingRegionException) { - LOG.debug("Offline " + region.getRegionNameAsString() -+ ", it's not any more on " + server, t); + LOG.debug( +"Offline " + region.getRegionNameAsString() + ", it's not any more on " + server, t); if (transitionInZK) { deleteClosingOrClosedNode(region, server); } if (state != null) { regionOffline(region); } return; -} else if ((t instanceof FailedServerException) || (state != null && -t instanceof RegionAlreadyInTransitionException)) { - if (t instanceof FailedServerException) { -sleepTime = 1L + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, +} else if ((t instanceof FailedServerException) Review comment: Is there any change in functionality of this section of diff? I think the answer is no and its mostly indents, but I wanted to double check..can you please confirm? 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -2079,16 +2085,29 @@ private void unassign(final HRegionInfo region, } if (logRetries) { - LOG.info("Server " + server + " returned " + t + " for " -+ region.getRegionNameAsString() + ",
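The backoff the PR title refers to (HBASE-24069, an exponential sleep between failed close attempts) can be sketched independently of the AssignmentManager internals. Everything below is illustrative: the class name, the base/cap constants, and the use of `computeIfAbsent` (which also sidesteps the non-atomic get/put pair flagged in the first comment) are assumptions, not the actual patch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CloseBackoff {
  // Per-region failed-close counters, keyed by encoded region name,
  // mirroring the failedCloseTracker map discussed above.
  private final ConcurrentMap<String, AtomicInteger> failedCloseTracker =
      new ConcurrentHashMap<>();

  // Illustrative base and cap; real values would come from Configuration.
  private static final long BASE_MILLIS = 1000L;
  private static final long CAP_MILLIS = 60_000L;

  /** Record a failed close and return how long to sleep before retrying. */
  public long backoffMillis(String encodedRegionName) {
    // computeIfAbsent is atomic, unlike a separate get/put pair.
    AtomicInteger failures =
        failedCloseTracker.computeIfAbsent(encodedRegionName, k -> new AtomicInteger());
    int n = failures.incrementAndGet();
    // Exponential growth: base * 2^(n-1), capped so retries never stall forever.
    long sleep = BASE_MILLIS << Math.min(n - 1, 16);
    return Math.min(sleep, CAP_MILLIS);
  }

  /** Reset the counter once the region closes cleanly. */
  public void onSuccessfulClose(String encodedRegionName) {
    failedCloseTracker.remove(encodedRegionName);
  }

  public static void main(String[] args) {
    CloseBackoff b = new CloseBackoff();
    System.out.println(b.backoffMillis("r1")); // 1000
    System.out.println(b.backoffMillis("r1")); // 2000
    System.out.println(b.backoffMillis("r1")); // 4000
    for (int i = 0; i < 10; i++) {
      b.backoffMillis("r1");
    }
    System.out.println(b.backoffMillis("r1")); // capped at 60000
  }
}
```

If every code path into `unassign` already holds the region's exclusive lock, the `AtomicInteger` and `ConcurrentMap` are belt-and-braces; the reviewer's point is that one caller may not follow that pattern, so the atomic variant is the safer default.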
[GitHub] [hbase] VicoWu commented on a change in pull request #1764: HBASE-24420 Avoid Meaningless Retry Attempts in Unrecoverable Failure
VicoWu commented on a change in pull request #1764: URL: https://github.com/apache/hbase/pull/1764#discussion_r430826279 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java ## @@ -879,13 +881,21 @@ public void bulkHFile(ColumnFamilyDescriptorBuilder builder, FileStatus hfileSta } int maxRetries = getConf().getInt(HConstants.BULKLOAD_MAX_RETRIES_NUMBER, 10); - maxRetries = Math.max(maxRetries, startEndKeys.size() + 1); + + /** + * For the first attempt, we make maxRetries with the configured maximum retry number + * As long as we find that region number changed, we setup maxRetries to region number + * But if we find that the region is not changed, then the maxRetries should be still + * be configured BULKLOAD_MAX_RETRIES_NUMBER to avoid meaningless retry attempts + */ + if(count != 0 && previousRegionNum != startEndKeys.size() ) Review comment: @anoopsjohn Yes, I agree with you; my earlier solution is not generic. After reviewing the whole calling path, from 1) my business code, to 2) the HBase client retry logic, and finally 3) the server-side processing logic, I think things are clear now and the fix becomes easier. Firstly, on the RegionServer side, the critical unrecoverable exception is swallowed, which I think is incorrect: [SecureBulkLoadManager.java#297](https://github.com/apache/hbase/blob/a9205f8f4d98ee672c0c6aa9cafa5ef2afc6aab5/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java#L297) eats all exceptions thrown from `region.bulkLoadHFiles`, which is clearly wrong. I checked the commit history of this `catch` clause and it is a long-standing change. Whether or not exceptions are thrown affects the client-side retry logic: only when an exception is thrown can the client get the error information (refer to [BulkLoadHFilesTool.java#396](https://github.com/apache/hbase/blob/a9205f8f4d98ee672c0c6aa9cafa5ef2afc6aab5/hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L397)) and set up its retry policy. Secondly, the exception in my incident is not handled correctly: [SecureBulkLoadManager.java#L395](https://github.com/apache/hbase/blob/a9205f8f4d98ee672c0c6aa9cafa5ef2afc6aab5/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java#L395). The exception is `java.lang.IllegalArgumentException: Wrong FS: hdfs://warehousestore/tmp/mdm_user_segments_2798fd21-0961-43ff-8bd2-dcf0180c6918/h/d7a380a0cf1f4adbb459e9f0d0cf66c`, so I think all HDFS-related failures in the method should be caught and rethrown as IOException, instead of only the rename() failure. Once the above 2 issues are handled correctly, the client side will get the error information at [BulkLoadHFilesTool.java#396](https://github.com/apache/hbase/blob/a9205f8f4d98ee672c0c6aa9cafa5ef2afc6aab5/hbase-server/src/main/java/org/apache/hadoop/hbase/tool/BulkLoadHFilesTool.java#L397), and from the code we can see that the user can control the retry logic via the configurations `hbase.client.retries.number` and `hbase.bulkload.retries.retryOnIOException`, instead of falling into a retry storm.
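The retry-cap logic in the quoted diff can be sketched as a small standalone policy. `BulkLoadRetryPolicy` and its method names are hypothetical; the real code keeps this state inline in BulkLoadHFilesTool:

```java
// Sketch of the proposed cap: allow extra attempts only while the region
// count keeps changing (e.g. splits in flight); once the layout is stable,
// fall back to the configured cap so an unrecoverable failure cannot
// retry indefinitely. BulkLoadRetryPolicy is an illustrative name.
public class BulkLoadRetryPolicy {
  // Default mirrors HConstants.BULKLOAD_MAX_RETRIES_NUMBER's default of 10.
  private final int configuredMaxRetries;
  private int previousRegionNum = -1;

  BulkLoadRetryPolicy(int configuredMaxRetries) {
    this.configuredMaxRetries = configuredMaxRetries;
  }

  int maxRetries(int attempt, int currentRegionNum) {
    int max = configuredMaxRetries;
    if (attempt != 0 && previousRegionNum != currentRegionNum) {
      // Region layout changed since the last attempt: retrying may succeed,
      // so allow up to one attempt per region.
      max = Math.max(max, currentRegionNum + 1);
    }
    previousRegionNum = currentRegionNum;
    return max;
  }

  public static void main(String[] args) {
    BulkLoadRetryPolicy p = new BulkLoadRetryPolicy(10);
    System.out.println(p.maxRetries(0, 50));  // first attempt: configured cap
    System.out.println(p.maxRetries(1, 100)); // regions changed: widened cap
    System.out.println(p.maxRetries(2, 100)); // stable: back to configured cap
  }
}
```

The old `Math.max(maxRetries, startEndKeys.size() + 1)` unconditionally widened the cap, which is what turned a `Wrong FS` failure into a near-endless retry loop on tables with many regions.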
[GitHub] [hbase] Apache-HBase commented on pull request #1777: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
Apache-HBase commented on pull request #1777: URL: https://github.com/apache/hbase/pull/1777#issuecomment-633749711
[GitHub] [hbase] bharathv merged pull request #1779: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
bharathv merged pull request #1779: URL: https://github.com/apache/hbase/pull/1779
[GitHub] [hbase] Apache-HBase commented on pull request #1779: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
Apache-HBase commented on pull request #1779: URL: https://github.com/apache/hbase/pull/1779#issuecomment-633847000
[GitHub] [hbase] pengmq1 commented on a change in pull request #1730: HBASE-24289 Heterogeneous Storage for Date Tiered Compaction
pengmq1 commented on a change in pull request #1730: URL: https://github.com/apache/hbase/pull/1730#discussion_r430830542 ## File path: dev-support/design-docs/HBASE-24289-Heterogeneous Storage for Date Tiered Compaction.md ## @@ -0,0 +1,122 @@ + + +# Heterogeneous Storage for Date Tiered Compaction + +## Objective + +Support DateTieredCompaction([HBASE-15181](https://issues.apache.org/jira/browse/HBASE-15181)) + for cold and hot data separation: support different storage policies for different time periods + of data to get better performance. For example, we can keep the last 1 month of data on SSD + and data older than 1 month on HDD. + ++ Date Tiered Compaction (DTCP) is based on date tiering (date-aware); we hope to support + the separation of cold and hot data via heterogeneous storage, setting different storage + policies (in HDFS) for data in different time windows. ++ DTCP designs different windows, and we can classify the windows according to + the timestamps of the windows. For example: HOT window, WARM window, COLD window. ++ DTCP divides storefiles into different windows, and performs minor Compaction within + a time window. The storefile generated by Compaction will use the storage strategy of + this window. For example, if a window is a HOT window, the storefile generated by compaction + can be stored on the SSD. The WAL and the entire CF already support storage policies + (HBASE-12848, HBASE-14061); our goal is to achieve cold and hot separation within one CF or + a region, using different storage policies. + +## Definition of hot and cold data + +Usually the data of the last 3 days can be defined as `HOT data`, i.e. hot age = 3 days. + If the timestamp of the data is > (timestamp now - hot age), we consider the data hot. + Warm age and cold age can be defined in the same way. Each piece of data belongs to only one type.
+ ``` + if timestamp > (now - hot age), HOT data + else if timestamp > (now - warm age), WARM data + else if timestamp > (now - cold age), COLD data + else default, COLD data +``` + +## Time window +A given time now is the time when the compaction occurs. Each window and the size of + the window are automatically calculated by DTCP, and the window boundary is rounded according + to the base size. +Assuming that the base window size is 1 hour, and each tier has 3 windows, the current time is + between 12:00 and 13:00. We have defined three types of window (`HOT, WARM, COLD`). The type of + window is determined by the timestamp at the beginning of the window and the timestamp now. +As shown in figure 1 below, the type of each window can be determined by the age range + (hot / warm / cold) into which (now - window.startTimestamp) falls. Cold age need not be set; + it defaults to Long.MAX, meaning that a window with a very early timestamp belongs to the + cold window. +![figure 1](https://raw.githubusercontent.com/pengmq1/images/master/F1-HDTCP.png "figure 1") + +## Example configuration + +| Configuration Key | value | Note | +|:---|:---:|:---| +|hbase.hstore.compaction.date.tiered.storage.policy.enable|true|whether to use a storage policy for the window.
Default is false| +|hbase.hstore.compaction.date.tiered.hot.window.age.millis|360|hot data age +|hbase.hstore.compaction.date.tiered.hot.window.storage.policy|ALL_SSD|hot data storage policy, Corresponding HDFS storage policy +|hbase.hstore.compaction.date.tiered.warm.window.age.millis|2060|| +|hbase.hstore.compaction.date.tiered.warm.window.storage.policy|ONE_SSD|| +|hbase.hstore.compaction.date.tiered.cold.window.age.millis|Long.MAX|| +|hbase.hstore.compaction.date.tiered.cold.window.storage.policy|HOT|| + +The original date tiered compaction related configuration has the same meaning and maintains Review comment: If `hbase.hstore.compaction.date.tiered.storage.policy.enable` is true, this will override CF config storage policy, and `hbase.hstore.block.storage.policy` does not work. Because storefile must belong to **one window** and will use window storage policy ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DateTieredMultiFileWriter.java ## @@ -38,23 +38,34 @@ private final boolean needEmptyFile; + private final Map lowerBoundariesPolicies; + /** + * @param lowerBoundariesPolicies each window to storage policy map. * @param needEmptyFile whether need to create an empty store file if we haven't written out * anything. */ - public DateTieredMultiFileWriter(List lowerBoundaries, boolean needEmptyFile) { + public DateTieredMultiFileWriter(List lowerBoundaries, + Map lowerBoundariesPolicies, boolean needEmptyFile) { for (Long lowerBoundary : lowerBoundaries) { lowerBoundary2Writer.put(lowerBoundary, null); } this.needEmptyFile = needEmptyFile; +this.lowerBoundariesPolicies = lowerBoundariesPolicies; } @Override public void
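The quoted hot/warm/cold pseudocode translates directly into Java. The ages below are illustrative stand-ins for the `hbase.hstore.compaction.date.tiered.*.window.age.millis` settings, not the design doc's actual values:

```java
// Sketch of the age-based window classification from the design doc.
public class WindowType {
  enum Tier { HOT, WARM, COLD }

  // Illustrative ages; the real ones come from configuration.
  static final long HOT_AGE = 3L * 24 * 60 * 60 * 1000;   // 3 days
  static final long WARM_AGE = 30L * 24 * 60 * 60 * 1000; // 30 days
  static final long COLD_AGE = Long.MAX_VALUE;            // cold age may be left unset

  /** Classify a window by the age of its start timestamp, per the pseudocode. */
  static Tier classify(long now, long windowStartTimestamp) {
    long age = now - windowStartTimestamp;
    if (age <= HOT_AGE) {
      return Tier.HOT;
    } else if (age <= WARM_AGE) {
      return Tier.WARM;
    } else if (age <= COLD_AGE) {
      return Tier.COLD;
    }
    return Tier.COLD; // default
  }

  public static void main(String[] args) {
    long now = 1_000L * 24 * 60 * 60 * 1000; // arbitrary "now"
    System.out.println(classify(now, now - 1 * 24 * 60 * 60 * 1000L));   // HOT
    System.out.println(classify(now, now - 10 * 24 * 60 * 60 * 1000L));  // WARM
    System.out.println(classify(now, now - 100 * 24 * 60 * 60 * 1000L)); // COLD
  }
}
```

The compaction output for a window would then be written under the storage policy mapped to the returned tier (e.g. ALL_SSD for HOT), which is how one column family ends up spanning SSD and HDD.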
[GitHub] [hbase] javierluca commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
javierluca commented on pull request #1781: URL: https://github.com/apache/hbase/pull/1781#issuecomment-633963904 I see. It seems the line number was not added in the original commit: https://github.com/apache/hbase/commit/71ed7033675149956de855b6782e1e22fc908dc8
[GitHub] [hbase] huaxiangsun commented on a change in pull request #769: HBASE-23202 ExportSnapshot (import) will fail if copying files to root directory takes longer than cleaner TTL
huaxiangsun commented on a change in pull request #769: URL: https://github.com/apache/hbase/pull/769#discussion_r430663274 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java ## @@ -251,6 +261,31 @@ private void refreshCache() throws IOException { this.snapshots.putAll(newSnapshots); } + @VisibleForTesting + List getSnapshotsInProgress() throws IOException { +List snapshotInProgress = Lists.newArrayList(); +// only add those files to the cache, but not to the known snapshots +Path snapshotTmpDir = new Path(snapshotDir, SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME); +FileStatus[] running = FSUtils.listStatus(fs, snapshotTmpDir); +if (running != null) { + for (FileStatus run : running) { +try { + snapshotInProgress.addAll(fileInspector.filesUnderSnapshot(run.getPath())); +} catch (CorruptedSnapshotException e) { + // See HBASE-16464 + if (e.getCause() instanceof FileNotFoundException) { +// If the snapshot is corrupt, we will delete it +fs.delete(run.getPath(), true); +LOG.warn("delete the " + run.getPath() + " due to exception:", e.getCause()); Review comment: Yeah, if it reads into the middle of copying manifest files, it is ok to remove this snapshot as copying HFiles has not started yet. So there is no impact for the logic in snapshotCleaner. 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java ## @@ -251,6 +261,31 @@ private void refreshCache() throws IOException { this.snapshots.putAll(newSnapshots); } + @VisibleForTesting + List getSnapshotsInProgress() throws IOException { +List snapshotInProgress = Lists.newArrayList(); +// only add those files to the cache, but not to the known snapshots +Path snapshotTmpDir = new Path(snapshotDir, SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME); +FileStatus[] running = FSUtils.listStatus(fs, snapshotTmpDir); +if (running != null) { + for (FileStatus run : running) { +try { + snapshotInProgress.addAll(fileInspector.filesUnderSnapshot(run.getPath())); +} catch (CorruptedSnapshotException e) { + // See HBASE-16464 + if (e.getCause() instanceof FileNotFoundException) { +// If the snapshot is corrupt, we will delete it +fs.delete(run.getPath(), true); +LOG.warn("delete the " + run.getPath() + " due to exception:", e.getCause()); Review comment: The logic of getUnreferencedFiles() is that for an HFile which is not in cache, it will refreshCache to get the latest snapshot hfiles. If one hfile from this exortSnapshot job is in the list, this means that manifest files have been copied over, so refreshCache() will get the latest snapshot file list. 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java ## @@ -251,6 +261,31 @@ private void refreshCache() throws IOException { this.snapshots.putAll(newSnapshots); } + @VisibleForTesting + List getSnapshotsInProgress() throws IOException { +List snapshotInProgress = Lists.newArrayList(); +// only add those files to the cache, but not to the known snapshots +Path snapshotTmpDir = new Path(snapshotDir, SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME); +FileStatus[] running = FSUtils.listStatus(fs, snapshotTmpDir); +if (running != null) { + for (FileStatus run : running) { +try { + snapshotInProgress.addAll(fileInspector.filesUnderSnapshot(run.getPath())); +} catch (CorruptedSnapshotException e) { + // See HBASE-16464 + if (e.getCause() instanceof FileNotFoundException) { +// If the snapshot is corrupt, we will delete it +fs.delete(run.getPath(), true); +LOG.warn("delete the " + run.getPath() + " due to exception:", e.getCause()); Review comment: @busbey @z-york Unless you see something missing, I think this one is good to go, thanks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1771: HBASE-24425 Run hbck_chore_run and catalogjanitor_run on draw of 'HBC…
Apache-HBase commented on pull request #1771: URL: https://github.com/apache/hbase/pull/1771#issuecomment-634229560
[GitHub] [hbase] bsglz commented on pull request #1767: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
bsglz commented on pull request #1767: URL: https://github.com/apache/hbase/pull/1767#issuecomment-633820229
[GitHub] [hbase] Reidddddd commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
Reidddddd commented on a change in pull request #1781: URL: https://github.com/apache/hbase/pull/1781#discussion_r430829094 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java ## @@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() { return FSDataInputStreamWrapper.getZeroCopyBytesRead(); } + @Override + public long getHedgedReadOps() { +return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadOps(); Review comment: nit, space between 'null?', '0:' ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java ## @@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() { return FSDataInputStreamWrapper.getZeroCopyBytesRead(); } + @Override + public long getHedgedReadOps() { +return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadOps(); + } + + @Override + public long getHedgedReadWins() { +return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadWins(); Review comment: ditto ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java ## @@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() { return FSDataInputStreamWrapper.getZeroCopyBytesRead(); } + @Override + public long getHedgedReadOps() { +return this.dfsHedgedReadMetrics == null? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps(); Review comment: space between 'null?', still missing.
[GitHub] [hbase] apurtell commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
apurtell commented on a change in pull request #1781: URL: https://github.com/apache/hbase/pull/1781#discussion_r430741232 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java ## @@ -40,7 +41,9 @@ import org.apache.hadoop.hbase.io.hfile.CacheStats; import org.apache.hadoop.hbase.wal.WALProvider; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; +import org.apache.hadoop.hdfs.DFSHedgedReadMetrics; Review comment: In what Hadoop version were these introduced? If at least 2.7, it's definitely fine. If 2.8, probably ok. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java ## @@ -592,9 +593,9 @@ public void markRegionsRecovering(ServerName server, Set userRegion * @return whether log is replaying */ public boolean isLogReplaying() { -if (server.getCoordinatedStateManager() == null) return false; -return ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) -.getSplitLogManagerCoordination().isReplaying(); +CoordinatedStateManager m = server.getCoordinatedStateManager(); Review comment: Unrelated changes. Remove
[GitHub] [hbase] ndimiduk opened a new pull request #1786: HBASE-24418 Consolidate Normalizer implementations
ndimiduk opened a new pull request #1786: URL: https://github.com/apache/hbase/pull/1786 Simplify our Normalizer story to have just a single, configurable implementation. * fold the features of `MergeNormalizer` into `SimpleRegionNormalizer`, removing the intermediate abstract class. * configuration keys for merge-only features now share a common structure. * add configuration to selectively disable normalizer split/merge operations. * `RegionNormalizer` now extends `Configurable` instead of creating a new instance of `HBaseConfiguration` or snooping one off of other fields. * avoid the extra RPCs by using `MasterServices` instead of `MasterRpcServices`. * boost test coverage of all the various flags and feature combinations.
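The `Configurable` bullet above describes injecting configuration rather than having the normalizer construct its own. A minimal sketch of that pattern, with `Config` as a stand-in for Hadoop's `Configuration` and hypothetical key names:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Configurable pattern the PR adopts: the normalizer receives
// its Configuration from the creator instead of newing one up internally.
public class ConfigurableNormalizer {
  // Stand-in for org.apache.hadoop.conf.Configuration.
  static final class Config {
    private final Map<String, String> kv = new HashMap<>();
    void set(String k, String v) { kv.put(k, v); }
    boolean getBoolean(String k, boolean dflt) {
      String v = kv.get(k);
      return v == null ? dflt : Boolean.parseBoolean(v);
    }
  }

  interface Configurable {
    void setConf(Config conf);
    Config getConf();
  }

  // Hypothetical keys, echoing "configuration keys ... share a common structure".
  static final String SPLIT_ENABLED_KEY = "normalizer.split.enabled";
  static final String MERGE_ENABLED_KEY = "normalizer.merge.enabled";

  static final class SimpleNormalizer implements Configurable {
    private Config conf;
    @Override public void setConf(Config conf) { this.conf = conf; }
    @Override public Config getConf() { return conf; }
    boolean splitEnabled() { return conf.getBoolean(SPLIT_ENABLED_KEY, true); }
    boolean mergeEnabled() { return conf.getBoolean(MERGE_ENABLED_KEY, true); }
  }

  public static void main(String[] args) {
    Config conf = new Config();
    conf.set(MERGE_ENABLED_KEY, "false"); // selectively disable merges
    SimpleNormalizer n = new SimpleNormalizer();
    n.setConf(conf); // injected, not constructed inside the normalizer
    System.out.println(n.splitEnabled()); // true
    System.out.println(n.mergeEnabled()); // false
  }
}
```

Injection keeps tests cheap (swap in a canned `Config`) and avoids the normalizer silently reading a different configuration than the master it serves.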
[GitHub] [hbase] mattf-apache commented on pull request #1620: HBASE-23339 Release scripts should use forwarded gpg-agent
mattf-apache commented on pull request #1620: URL: https://github.com/apache/hbase/pull/1620#issuecomment-634270546 Looking...
[GitHub] [hbase] Apache-HBase commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations
Apache-HBase commented on pull request #1786: URL: https://github.com/apache/hbase/pull/1786#issuecomment-634369412
[GitHub] [hbase] infraio commented on a change in pull request #1730: HBASE-24289 Heterogeneous Storage for Date Tiered Compaction
infraio commented on a change in pull request #1730: URL: https://github.com/apache/hbase/pull/1730#discussion_r430245575 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java ## @@ -547,6 +553,22 @@ public StoreFileWriter build() throws IOException { CommonFSUtils.setStoragePolicy(this.fs, dir, policyName); if (filePath == null) { +// The stored file and related blocks will used the directory based StoragePolicy. +// Because HDFS DistributedFileSystem does not support create files with storage policy +// before version 3.3.0 (See HDFS-13209). Use child dir here is to make stored files +// satisfy the specific storage policy when writing. So as to avoid later data movement. +// We don't want to change whole temp dir to 'fileStoragePolicy'. +if (fileStoragePolicy != null && !fileStoragePolicy.isEmpty()) { + dir = new Path(dir, HConstants.STORAGE_POLICY_PREFIX + fileStoragePolicy); + if (!fs.exists(dir)) { +HRegionFileSystem.mkdirs(fs, conf, dir); + } + CommonFSUtils.setStoragePolicy(this.fs, dir, fileStoragePolicy); + if (LOG.isDebugEnabled()) { +LOG.debug( Review comment: Only log once when region created? If so, can use info log. ## File path: dev-support/design-docs/HBASE-24289-Heterogeneous Storage for Date Tiered Compaction.md ## @@ -0,0 +1,122 @@ + + +# Heterogeneous Storage for Date Tiered Compaction + +## Objective + +Support DateTieredCompaction([HBASE-15181](https://issues.apache.org/jira/browse/HBASE-15181)) + for cold and hot data separation, support different storage policies for different time periods + of data to get better performance, for example, we can configure the data of last 1 month in SSD, + and 1 month ago data was in HDD. + ++ Date Tiered Compaction (DTCP) is based on date tiering (date-aware), we hope to support + the separation of cold and hot data, heterogeneous storage. Set different storage + policies (in HDFS) for data in different time windows.
++ DTCP defines different windows, and we can classify the windows according to + their timestamps, for example: HOT window, WARM window, COLD window. ++ DTCP divides storefiles into different windows and performs minor compaction within + a time window. The storefile generated by a compaction uses the storage policy of + that window. For example, if a window is a HOT window, the storefile generated by compaction + can be stored on SSD. WAL-level and whole-CF storage policies are already supported + (HBASE-12848, HBASE-14061); our goal is to achieve cold/hot separation within one CF or + one region, using different storage policies. + +## Definition of hot and cold data + +Usually the data of the last 3 days can be defined as `HOT data`: hot age = 3 days. + If the timestamp of the data is > (now - hot age), we consider the data hot. + Warm age and cold age are defined in the same way. Each datum belongs to exactly one type. + ``` + if timestamp > (now - hot age), HOT data + else if timestamp > (now - warm age), WARM data + else if timestamp > (now - cold age), COLD data + else default, COLD data +``` + +## Time window +"Now" is the time at which the compaction occurs. Each window and its size are + calculated automatically by DTCP, and the window boundary is rounded according + to the base size. +Assume the base window size is 1 hour, each tier has 3 windows, and the current time is + between 12:00 and 13:00. We define three types of window (`HOT, WARM, COLD`). The type of + a window is determined by the timestamp at the beginning of the window and the timestamp now. +As shown in figure 1 below, the type of each window is determined by the age range + (hot / warm / cold) into which (now - window.startTimestamp) falls. Cold age need not be set; + it defaults to Long.MAX_VALUE, meaning that a window with a very early start timestamp belongs + to the cold window.
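The age-based classification above can be sketched in Java as follows. This is an illustrative sketch only; the `WindowType` enum and `classify` method names are hypothetical, not taken from the HBASE-24289 patch:

```java
// Illustrative sketch of the age-based window classification described above.
// Names (WindowClassifier, WindowType, classify) are hypothetical.
public class WindowClassifier {
  public enum WindowType { HOT, WARM, COLD }

  /**
   * A window's type is decided by where (now - window.startTimestamp) falls
   * among the configured ages. Cold age defaults to Long.MAX_VALUE, so any
   * window with a very early start timestamp falls through to COLD.
   */
  public static WindowType classify(long now, long windowStartTs,
      long hotAgeMillis, long warmAgeMillis) {
    long age = now - windowStartTs;
    if (age < hotAgeMillis) {        // timestamp > (now - hot age)
      return WindowType.HOT;
    } else if (age < warmAgeMillis) { // timestamp > (now - warm age)
      return WindowType.WARM;
    }
    return WindowType.COLD;
  }
}
```

With the sample ages from the example configuration (hot = 360 ms, warm = 2060 ms), a window starting 100 ms ago is HOT, one starting 1000 ms ago is WARM, and anything older than 2060 ms is COLD.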
+![figure 1](https://raw.githubusercontent.com/pengmq1/images/master/F1-HDTCP.png "figure 1") + +## Example configuration + +| Configuration Key | value | Note | +|:---|:---:|:---| +|hbase.hstore.compaction.date.tiered.storage.policy.enable|true|whether to use a storage policy per window; default is false| +|hbase.hstore.compaction.date.tiered.hot.window.age.millis|360|hot data age| +|hbase.hstore.compaction.date.tiered.hot.window.storage.policy|ALL_SSD|hot data storage policy, corresponding to an HDFS storage policy| +|hbase.hstore.compaction.date.tiered.warm.window.age.millis|2060|| +|hbase.hstore.compaction.date.tiered.warm.window.storage.policy|ONE_SSD|| +|hbase.hstore.compaction.date.tiered.cold.window.age.millis|Long.MAX|| +|hbase.hstore.compaction.date.tiered.cold.window.storage.policy|HOT|| + +The original date tiered compaction related
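The sample settings from the table above can be assembled as a plain key/value map. This is only a sketch: HBase actually reads these keys through Hadoop's Configuration, which is omitted here to keep the example self-contained (the class and method names are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DtcpStoragePolicyExample {
  /**
   * Sample DTCP storage-policy settings mirroring the example configuration table.
   * Cold window age is left unset here; it defaults to Long.MAX_VALUE.
   */
  public static Map<String, String> sampleConfig() {
    Map<String, String> conf = new LinkedHashMap<>();
    conf.put("hbase.hstore.compaction.date.tiered.storage.policy.enable", "true");
    conf.put("hbase.hstore.compaction.date.tiered.hot.window.age.millis", "360");
    conf.put("hbase.hstore.compaction.date.tiered.hot.window.storage.policy", "ALL_SSD");
    conf.put("hbase.hstore.compaction.date.tiered.warm.window.age.millis", "2060");
    conf.put("hbase.hstore.compaction.date.tiered.warm.window.storage.policy", "ONE_SSD");
    conf.put("hbase.hstore.compaction.date.tiered.cold.window.storage.policy", "HOT");
    return conf;
  }
}
```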
[GitHub] [hbase] Apache-HBase commented on pull request #1785: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
Apache-HBase commented on pull request #1785: URL: https://github.com/apache/hbase/pull/1785#issuecomment-634301027 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 10s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ branch-2.2 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 21s | branch-2.2 passed | | +1 :green_heart: | compile | 0m 59s | branch-2.2 passed | | +1 :green_heart: | checkstyle | 1m 22s | branch-2.2 passed | | +1 :green_heart: | shadedjars | 4m 4s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | branch-2.2 passed | | +0 :ok: | spotbugs | 3m 25s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 23s | branch-2.2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 46s | the patch passed | | +1 :green_heart: | compile | 0m 56s | the patch passed | | +1 :green_heart: | javac | 0m 56s | the patch passed | | +1 :green_heart: | checkstyle | 1m 18s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 4m 5s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 26m 21s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | | +1 :green_heart: | findbugs | 3m 35s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 166m 22s | hbase-server in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 36s | The patch does not generate ASF License warnings. | | | | 234m 21s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1785/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1785 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 48d5d207 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1785/out/precommit/personality/provided.sh | | git revision | branch-2.2 / abe5a05bd1 | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1785/1/testReport/ | | Max. process+thread count | 4701 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1785/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1776: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
Apache-HBase commented on pull request #1776: URL: https://github.com/apache/hbase/pull/1776#issuecomment-633750078
[GitHub] [hbase] bharathv merged pull request #1778: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
bharathv merged pull request #1778: URL: https://github.com/apache/hbase/pull/1778
[GitHub] [hbase-native-client] phrocker commented on a change in pull request #2: HBASE-24400: Fixup cmake infrastructure to allow dependencies to be built locally
phrocker commented on a change in pull request #2: URL: https://github.com/apache/hbase-native-client/pull/2#discussion_r430331425 ## File path: cmake/DownloadFolly.cmake ## @@ -0,0 +1,39 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +## Download facebook's folly library. +## SOURCE_DIR is typically the cmake source directory +## BINARY_DIR is the build directory, typically 'build' + +function(download_folly SOURCE_DIR BINARY_DIR) + + + ExternalProject_Add( + facebook-folly-proj + GIT_REPOSITORY "https://github.com/facebook/folly.git" + GIT_TAG "v2020.05.18.00" Review comment: Yep. I'll take a look. I updated folly ( which resulted in a lot of other changes ) because of this `https://github.com/facebook/folly/blob/v2017.09.04.00/CMakeLists.txt#L24`. My preference was to use CMake if and when possible but it definitely becomes difficult as we aim to control the dependency tree. ## File path: cmake/DownloadFolly.cmake ## @@ -0,0 +1,39 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership.
The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +## Download facebook's folly library. +## SOURCE_DIR is typically the cmake source directory +## BINARY_DIR is the build directory, typically 'build' + +function(download_folly SOURCE_DIR BINARY_DIR) + + + ExternalProject_Add( + facebook-folly-proj + GIT_REPOSITORY "https://github.com/facebook/folly.git" + GIT_TAG "v2020.05.18.00" + SOURCE_DIR "${BINARY_DIR}/dependencies/facebook-folly-proj-src" + CMAKE_ARGS ${PASSTHROUGH_CMAKE_ARGS} Review comment: Yeah, version is an issue I'd like to address with cmake getting a known tag. I manually created a version.h for the time being. I started working on a solution for this, but then the size of the PR would grow even more. My hope is to leverage an internal java project that runs a mini cluster and relies on maven to build a jar we can run for integration tests. Right now there is a relative path expecting classpath information to be valid. libfmt issues are ones I think I need to address. In the case of gtest did you use libgtest-dev? My hope is to control these ( like I started doing with Sodium ). The ultimate issue I would hope to avoid is not being in control of the dependency tree.
I think having a build mode for libhbaseclient.so that allows for static linking ( with perhaps GLIB as the only dependency ) will give better control over not only the dependency tree, but also allow the client to be used across a variety of systems without regard to the dependencies on those machines. This would also make Python bindings a breeze eventually; they could be distributed via pypi with relative ease. Finally, RE the copy_version [1]: agreed, it relies on a relative path that probably won't exist for most people. A more sustainable solution would be to have a variable specifying an hbase release target, and we generate the version file from the tag. I think for the non-draft PR I would probably make the dependency builds (BUILD_FB_DEPENDENCIES, BUILD_ZOOKEEPER) off by default. That would help to shrink the PR some, as the need to download all dependencies and solve every problem is minimized. [1] https://github.com/apache/hbase-native-client/blob/master/bin/copy-version.sh
[GitHub] [hbase] infraio merged pull request #1780: HBASE-24433 Add 2.2.5 to download page
infraio merged pull request #1780: URL: https://github.com/apache/hbase/pull/1780
[GitHub] [hbase] saintstack commented on pull request #1771: HBASE-24425 Run hbck_chore_run and catalogjanitor_run on draw of 'HBC…
saintstack commented on pull request #1771: URL: https://github.com/apache/hbase/pull/1771#issuecomment-634226756 Addressed @virajjasani's suggestions. @HorizonNet I pushed against branch-2 since that is what I know and it's stable; will forward port when all is good. Thanks for the reviews.
[GitHub] [hbase] ddupg opened a new pull request #1782: HBASE-24431 RSGroupInfo add configuration map to store something extra
ddupg opened a new pull request #1782: URL: https://github.com/apache/hbase/pull/1782 JIRA: https://issues.apache.org/jira/browse/HBASE-24431
[GitHub] [hbase] huaxiangsun opened a new pull request #1776: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
huaxiangsun opened a new pull request #1776: URL: https://github.com/apache/hbase/pull/1776 …ually overlaps. (#1741) Signed-off-by: stack
[GitHub] [hbase] Apache-HBase commented on pull request #1748: HBASE-22700 (addendum): Clarify ZK session timeout doc
Apache-HBase commented on pull request #1748: URL: https://github.com/apache/hbase/pull/1748#issuecomment-634355041
[GitHub] [hbase] infraio commented on a change in pull request #1746: HBASE-24388 Store the locations of meta regions in master local store
infraio commented on a change in pull request #1746: URL: https://github.com/apache/hbase/pull/1746#discussion_r430349196 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java ## @@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException { // Start the Assignment Thread startAssignmentThread(); -// load meta region state -ZKWatcher zkw = master.getZooKeeper(); -// it could be null in some tests -if (zkw != null) { - RegionState regionState = MetaTableLocator.getMetaRegionState(zkw); - RegionStateNode regionNode = - regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO); - regionNode.lock(); - try { -regionNode.setRegionLocation(regionState.getServerName()); -regionNode.setState(regionState.getState()); -if (regionNode.getProcedure() != null) { - regionNode.getProcedure().stateLoaded(this, regionNode); +// load meta region states. +// notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new +// replicas, or remove excess replicas. +try (RegionScanner scanner = + localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) { + List cells = new ArrayList<>(); + boolean moreRows; + do { +moreRows = scanner.next(cells); +if (cells.isEmpty()) { + continue; } -setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN); - } finally { -regionNode.unlock(); +Result result = Result.create(cells); +cells.clear(); +RegionStateStore + .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> { +RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo); +regionNode.lock(); +try { + regionNode.setState(state); + regionNode.setLastHost(lastHost); Review comment: Seems the old code didn't setLastHost and setOpenSeqNum? Why add this? 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java ## @@ -43,73 +49,103 @@ private final HMaster master; - public MasterMetaBootstrap(HMaster master) { + private final LocalStore localStore; + + public MasterMetaBootstrap(HMaster master, LocalStore localStore) { this.master = master; +this.localStore = localStore; } /** * For assigning hbase:meta replicas only. - * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same - * server as the hbase:meta region. */ - void assignMetaReplicas() - throws IOException, InterruptedException, KeeperException { + void assignMetaReplicas() throws IOException, InterruptedException, KeeperException { int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM, - HConstants.DEFAULT_META_REPLICA_NUM); -if (numReplicas <= 1) { - // No replicaas to assign. Return. - return; -} -final AssignmentManager assignmentManager = master.getAssignmentManager(); -if (!assignmentManager.isMetaLoaded()) { - throw new IllegalStateException("hbase:meta must be initialized first before we can " + - "assign out its replicas"); -} -ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper()); -for (int i = 1; i < numReplicas; i++) { - // Get current meta state for replica from zk. 
- RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i); - RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica( - RegionInfoBuilder.FIRST_META_REGIONINFO, i); - LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState); - if (metaServername.equals(metaState.getServerName())) { -metaState = null; -LOG.info(hri.getRegionNameAsString() + - " old location is same as current hbase:meta location; setting location as null..."); + HConstants.DEFAULT_META_REPLICA_NUM); +// only try to assign meta replicas when there are more than 1 replicas +if (numReplicas > 1) { + final AssignmentManager am = master.getAssignmentManager(); + if (!am.isMetaLoaded()) { +throw new IllegalStateException( + "hbase:meta must be initialized first before we can " + "assign out its replicas"); } - // These assigns run inline. All is blocked till they complete. Only interrupt is shutting - // down hosting server which calls AM#stop. - if (metaState != null && metaState.getServerName() != null) { -// Try to retain old assignment. -assignmentManager.assign(hri, metaState.getServerName()); - } else { -assignmentManager.assign(hri); +
[GitHub] [hbase] huaxiangsun merged pull request #1777: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
huaxiangsun merged pull request #1777: URL: https://github.com/apache/hbase/pull/1777
[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Store the locations of meta regions in master local store
Apache-HBase commented on pull request #1746: URL: https://github.com/apache/hbase/pull/1746#issuecomment-633781860
[GitHub] [hbase] huaxiangsun merged pull request #1768: Backport: HBASE-24369 Provide more information about merged child regions in Hb…
huaxiangsun merged pull request #1768: URL: https://github.com/apache/hbase/pull/1768
[GitHub] [hbase] HorizonNet commented on a change in pull request #1780: HBASE-24433 Add 2.2.5 to download page
HorizonNet commented on a change in pull request #1780: URL: https://github.com/apache/hbase/pull/1780#discussion_r430222936 ## File path: src/site/xdoc/downloads.xml ## @@ -45,24 +45,24 @@ under the License. -2.2.4 +2.2.5 -2020/03/11 +2020/05/21 -https://downloads.apache.org/hbase/2.2.4/api_compare_2.2.4RC0_to_2.2.3.html">2.2.4 vs 2.2.3 +https://apache.org/hbase/2.2.5/api_compare_2.2.5RC0_to_2.2.4.html">2.2.5 vs 2.2.4 Review comment: This link doesn't seem to exist. ## File path: src/site/xdoc/downloads.xml ## @@ -45,24 +45,24 @@ under the License. -2.2.4 +2.2.5 -2020/03/11 +2020/05/21 -https://downloads.apache.org/hbase/2.2.4/api_compare_2.2.4RC0_to_2.2.3.html">2.2.4 vs 2.2.3 +https://apache.org/hbase/2.2.5/api_compare_2.2.5RC0_to_2.2.4.html">2.2.5 vs 2.2.4 Review comment: Ok, let me check it later again. ## File path: src/site/xdoc/downloads.xml ## @@ -45,24 +45,24 @@ under the License. -2.2.4 +2.2.5 -2020/03/11 +2020/05/21 -https://downloads.apache.org/hbase/2.2.4/api_compare_2.2.4RC0_to_2.2.3.html">2.2.4 vs 2.2.3 +https://apache.org/hbase/2.2.5/api_compare_2.2.5RC0_to_2.2.4.html">2.2.5 vs 2.2.4 Review comment: I just checked it again. I think https://downloads.apache.org/hbase/2.2.5/api_compare_2.2.5RC0_to_2.2.4.html is the correct link.
[GitHub] [hbase] Apache-HBase commented on pull request #1775: fix building cpp-example DemoClient
Apache-HBase commented on pull request #1775: URL: https://github.com/apache/hbase/pull/1775#issuecomment-633683883
[GitHub] [hbase] Apache-HBase commented on pull request #1782: HBASE-24431 RSGroupInfo add configuration map to store something extra
Apache-HBase commented on pull request #1782: URL: https://github.com/apache/hbase/pull/1782#issuecomment-634017992
[GitHub] [hbase] bharathv merged pull request #1767: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
bharathv merged pull request #1767: URL: https://github.com/apache/hbase/pull/1767
[GitHub] [hbase] Apache-HBase commented on pull request #1711: HBASE-24371 Add more details when print CompactionConfiguration info
Apache-HBase commented on pull request #1711: URL: https://github.com/apache/hbase/pull/1711#issuecomment-634090420
[GitHub] [hbase] javierluca commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
javierluca commented on a change in pull request #1781: URL: https://github.com/apache/hbase/pull/1781#discussion_r430833826 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java ## @@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() { return FSDataInputStreamWrapper.getZeroCopyBytesRead(); } + @Override + public long getHedgedReadOps() { +return this.dfsHedgedReadMetrics == null? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps(); Review comment: sorry about that, fixing
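For context, the line under review guards against a null hedged-read metrics object (hedged-read metrics only exist when the underlying filesystem is HDFS). A minimal stand-in sketch of that null guard, using a hypothetical stub in place of Hadoop's DFSHedgedReadMetrics:

```java
public class HedgedReadMetricsWrapper {
  /** Hypothetical stub standing in for Hadoop's DFSHedgedReadMetrics. */
  public static class HedgedReadMetrics {
    public long hedgedReadOps;
    public long getHedgedReadOps() { return hedgedReadOps; }
  }

  private final HedgedReadMetrics dfsHedgedReadMetrics;

  public HedgedReadMetricsWrapper(HedgedReadMetrics metrics) {
    this.dfsHedgedReadMetrics = metrics;
  }

  /** Report 0 when metrics are unavailable, rather than throwing an NPE. */
  public long getHedgedReadOps() {
    return this.dfsHedgedReadMetrics == null ? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps();
  }
}
```

Note the spacing around `?` here; the missing space in the original line is the kind of nit the checkstyle run flags.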
[GitHub] [hbase] javierluca edited a comment on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
javierluca edited a comment on pull request #1781: URL: https://github.com/apache/hbase/pull/1781#issuecomment-633963904
[GitHub] [hbase] Apache9 commented on pull request #1787: HBASE-24437 Addendum just start mini dfs cluster, and make the log ro…
Apache9 commented on pull request #1787: URL: https://github.com/apache/hbase/pull/1787#issuecomment-634385918 @saintstack FYI. Anyway, requestRollAll is used in some critical places, such as in sync replication, for keeping data consistent between two clusters, so if it is not stable enough, it will be a problem...
[GitHub] [hbase] apurtell commented on a change in pull request #1755: HBASE-24069 Provide an ExponentialBackOffPolicy sleep between failed …
apurtell commented on a change in pull request #1755: URL: https://github.com/apache/hbase/pull/1755#discussion_r430708920 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -1972,6 +1975,13 @@ private void unassign(final HRegionInfo region, final RegionState state, final int versionOfClosingNode, final ServerName dest, final boolean transitionInZK, final ServerName src) { +String encodedName = region.getEncodedName(); +AtomicInteger failedCloseCount = failedCloseTracker.get(encodedName); +if (failedCloseCount == null) { + failedCloseCount = new AtomicInteger(); + failedCloseTracker.put(encodedName, failedCloseCount); Review comment: We can race between test and put, potentially undercounting (by ref overwrite). Probably should test if the return of putIfAbsent is not null, and use/update that. If the code that updates `failedOpenTracker` also has this racy pattern it should be fixed too. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -1972,6 +1975,13 @@ private void unassign(final HRegionInfo region, final RegionState state, final int versionOfClosingNode, final ServerName dest, final boolean transitionInZK, final ServerName src) { +String encodedName = region.getEncodedName(); +AtomicInteger failedCloseCount = failedCloseTracker.get(encodedName); +if (failedCloseCount == null) { + failedCloseCount = new AtomicInteger(); + failedCloseTracker.put(encodedName, failedCloseCount); Review comment: We can race between test and put, potentially undercounting (by ref overwrite). Test if the return of putIfAbsent is not null, and use/update that. It will either be the new atomic inserted via putIfAbsent or an existing atomic. _Then_ increment. If the code that updates `failedOpenTracker` also has this racy pattern it should be fixed too. 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -1972,6 +1975,13 @@ private void unassign(final HRegionInfo region, final RegionState state, final int versionOfClosingNode, final ServerName dest, final boolean transitionInZK, final ServerName src) { +String encodedName = region.getEncodedName(); +AtomicInteger failedCloseCount = failedCloseTracker.get(encodedName); +if (failedCloseCount == null) { + failedCloseCount = new AtomicInteger(); + failedCloseTracker.put(encodedName, failedCloseCount); Review comment: We can race between test and put, potentially undercounting (by ref overwrite). Test if the return of putIfAbsent is not null. If not null, use that instead of the atomic instance you just created. _Then_ increment. If the code that updates `failedOpenTracker` also has this racy pattern it should be fixed too.
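The race apurtell describes, and the putIfAbsent fix he suggests, look roughly like this. This is a sketch, not the actual patch: the ConcurrentMap field type and the method name are assumptions made here for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class FailedCloseCounter {
  // Assumed type for the tracker; the real field lives in AssignmentManager.
  private final ConcurrentMap<String, AtomicInteger> failedCloseTracker =
      new ConcurrentHashMap<>();

  /**
   * Race-free increment: putIfAbsent either installs our fresh counter or
   * returns the counter another thread installed first. Either way we
   * increment the single winning instance, so no counts are lost to a
   * get-then-put reference overwrite.
   */
  public int incrementFailedClose(String encodedName) {
    AtomicInteger count = failedCloseTracker.get(encodedName);
    if (count == null) {
      AtomicInteger fresh = new AtomicInteger();
      count = failedCloseTracker.putIfAbsent(encodedName, fresh);
      if (count == null) {
        count = fresh; // we won the race; our instance is in the map
      }
    }
    return count.incrementAndGet();
  }
}
```

On Java 8+ the same effect is available more compactly via `computeIfAbsent(encodedName, k -> new AtomicInteger())`, but branch-1 code of this era typically used the putIfAbsent idiom shown here.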
[GitHub] [hbase] infraio commented on pull request #1770: HBASE-24416 RegionNormalizer spliting region should not be limited by…
infraio commented on pull request #1770: URL: https://github.com/apache/hbase/pull/1770#issuecomment-633788968 https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1770/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt generates a new checkstyle problem?
[GitHub] [hbase] saintstack commented on a change in pull request #1771: HBASE-24425 Run hbck_chore_run and catalogjanitor_run on draw of 'HBC…
saintstack commented on a change in pull request #1771: URL: https://github.com/apache/hbase/pull/1771#discussion_r430649459 ## File path: hbase-server/src/main/resources/hbase-webapps/master/hbck.jsp ## @@ -37,9 +37,17 @@ <%@ page import="org.apache.hadoop.hbase.util.Pair" %> <%@ page import="org.apache.hadoop.hbase.master.CatalogJanitor" %> <%@ page import="org.apache.hadoop.hbase.master.CatalogJanitor.Report" %> +<%@ page import="org.apache.hadoop.hbase.util.Threads" %> <% + final String cacheParameterValue = request.getParameter("cache"); + boolean cache = Boolean.valueOf(cacheParameterValue); final HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER); pageContext.setAttribute("pageTitle", "HBase Master HBCK Report: " + master.getServerName()); + if (!cache) { +// Run the two reporters inline w/ drawing of the page. If exception, will show in page draw. +master.getMasterRpcServices().runHbckChore(null, null); +master.getMasterRpcServices().runCatalogScan(null, null); Review comment: Yeah. Was thinking about it. Though the rare SE not the end of the world. Added something at your prompting it looks like this when generated... ``` if (!Boolean.parseBoolean(cacheParameterValue)) { // Run the two reporters inline w/ drawing of the page. If exception, will show in page draw. try { master.getMasterRpcServices().runHbckChore(null, null); } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException se) { out.write("Failed generating a new hbck_chore report; using cache; try again or run hbck_chore_run in the shell: " + se.getMessage() + "\n"); } try { master.getMasterRpcServices().runCatalogScan(null, null); } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException se) { out.write("Failed generating a new catalogjanitor report; using cache; try again or run catalogjanitor_run in the shell: " + se.getMessage() + "\n"); } } ``` This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
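For readers skimming the hbck.jsp diff above, the cache-parameter handling boils down to a null-safe boolean parse: a missing or non-`true` `cache` query parameter forces the two reports to be regenerated inline. A minimal standalone sketch of that decision (class and method names are illustrative, not the actual hbck.jsp code):

```java
// Hypothetical sketch, not the actual hbck.jsp source: how an optional
// "cache" request parameter decides between serving the cached report and
// regenerating it. Boolean.parseBoolean is null-safe: a missing parameter
// yields false, so regenerating is the default behavior.
public class HbckReportCacheDecision {
    static boolean useCache(String cacheParameterValue) {
        // null, or anything other than "true" (case-insensitive), -> false
        return Boolean.parseBoolean(cacheParameterValue);
    }

    public static void main(String[] args) {
        System.out.println(useCache(null));   // parameter absent -> false
        System.out.println(useCache("true")); // -> true
        System.out.println(useCache("TRUE")); // case-insensitive -> true
        System.out.println(useCache("yes"));  // not "true" -> false
    }
}
```

This is also why the generated JSP wraps each reporter call in its own try/catch: a failure in one report falls back to the cached copy without blocking the other.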
[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
Apache-HBase commented on pull request #1781: URL: https://github.com/apache/hbase/pull/1781#issuecomment-634106442

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 7m 15s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ branch-1 Compile Tests _ |
| +0 :ok: | mvndep | 42m 26s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 52m 53s | branch-1 passed |
| +1 :green_heart: | compile | 1m 10s | branch-1 passed with JDK v1.8.0_252 |
| +1 :green_heart: | compile | 1m 19s | branch-1 passed with JDK v1.7.0_262 |
| +1 :green_heart: | checkstyle | 2m 11s | branch-1 passed |
| +1 :green_heart: | shadedjars | 3m 3s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 8s | branch-1 passed with JDK v1.8.0_252 |
| +1 :green_heart: | javadoc | 1m 17s | branch-1 passed with JDK v1.7.0_262 |
| +0 :ok: | spotbugs | 2m 36s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 4m 0s | branch-1 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 53s | the patch passed |
| +1 :green_heart: | compile | 1m 10s | the patch passed with JDK v1.8.0_252 |
| +1 :green_heart: | javac | 1m 10s | the patch passed |
| +1 :green_heart: | compile | 1m 18s | the patch passed with JDK v1.7.0_262 |
| +1 :green_heart: | javac | 1m 18s | the patch passed |
| -1 :x: | checkstyle | 1m 32s | hbase-server: The patch generated 5 new + 92 unchanged - 1 fixed = 97 total (was 93) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedjars | 2m 49s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | hadoopcheck | 4m 37s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. |
| +1 :green_heart: | javadoc | 0m 59s | the patch passed with JDK v1.8.0_252 |
| +1 :green_heart: | javadoc | 1m 16s | the patch passed with JDK v1.7.0_262 |
| +1 :green_heart: | findbugs | 4m 24s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 29s | hbase-hadoop-compat in the patch passed. |
| +1 :green_heart: | unit | 0m 38s | hbase-hadoop2-compat in the patch passed. |
| +1 :green_heart: | unit | 127m 19s | hbase-server in the patch passed. |
| +1 :green_heart: | asflicense | 1m 40s | The patch does not generate ASF License warnings. |
| | | | 271m 54s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1781 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux a431e2868dee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh |
| git revision | branch-1 / 3235b56 |
| Default Java | 1.7.0_262 |
| Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
| checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/artifact/out/diff-checkstyle-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/testReport/ |
| Max. process+thread count | 4055 (vs. ulimit of 1) |
| modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/console |
| versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] ndimiduk commented on pull request #1786: HBASE-24418 Consolidate Normalizer implementations
ndimiduk commented on pull request #1786: URL: https://github.com/apache/hbase/pull/1786#issuecomment-634358652 I intend to backport this at least to branch-2.3. I think branch-2.2 needs some other patches before this would apply. PTAL, @saintstack @Apache9 @infraio @huaxiangsun @joshelser @mnpoonia @ddupg.
[GitHub] [hbase] huaxiangsun merged pull request #1769: Backport: HBASE-24369 Provide more information about merged child regions in Hb…
huaxiangsun merged pull request #1769: URL: https://github.com/apache/hbase/pull/1769
[GitHub] [hbase] Apache-HBase commented on pull request #1764: HBASE-24420 Avoid Meaningless Retry Attempts in Unrecoverable Failure
Apache-HBase commented on pull request #1764: URL: https://github.com/apache/hbase/pull/1764#issuecomment-634393134

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 4m 1s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 45s | master passed |
| +1 :green_heart: | checkstyle | 1m 6s | master passed |
| +1 :green_heart: | spotbugs | 1m 59s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 19s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 5s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 4s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 8s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | | 35m 58s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1764 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux fba8d258ae6e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / a9205f8f4d |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1764/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] saintstack merged pull request #1771: HBASE-24425 Run hbck_chore_run and catalogjanitor_run on draw of 'HBC…
saintstack merged pull request #1771: URL: https://github.com/apache/hbase/pull/1771
[GitHub] [hbase] ramkrish86 commented on a change in pull request #1552: HBASE-24205 Create metric to know the number of reads that happens fr…
ramkrish86 commented on a change in pull request #1552: URL: https://github.com/apache/hbase/pull/1552#discussion_r430396089

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java ##

```
@@ -2884,4 +2901,41 @@ public int getMaxCompactedStoreFileRefCount() {
       ? maxCompactedStoreFileRefCount.getAsInt() : 0;
   }
+
+  @Override
+  public long getReadRequestsFromStoreCount() {
+    return getRequestsFromStore.sum();
+  }
+
+  @Override
+  public long getGetRequestsCountFromMemstore() {
+    return getRequestsFromMemstore.sum();
+  }
+
+  @Override
+  public long getGetRequestsCountFromFile() {
+    return getRequestsFromFile.sum();
+  }
+
+  void incrGetRequestsFromStore() {
+    getRequestsFromStore.increment();
```

Review comment: The one directly under HStore is used by the Region-level and Table-level aggregators that deal with HStore; it gets printed periodically. The other one is at the MetricsStore level, which is the real-time one: for every request it is exposed at the JMX MBean level.
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableWrapperAggregateImpl.java ##

```
@@ -70,25 +69,36 @@ public void run() {
           localMetricsTableMap.put(tbl, mt);
         }
         if (r.getStores() != null) {
+          long memstoreReadCount = 0l;
+          long fileReadCount = 0l;
+          String familyName = null;
           for (Store store : r.getStores()) {
+            familyName = store.getColumnFamilyName();
+
             mt.storeFileCount += store.getStorefilesCount();
-            mt.memstoreSize += (store.getMemStoreSize().getDataSize() +
-              store.getMemStoreSize().getHeapSize() + store.getMemStoreSize().getOffHeapSize());
+            mt.memstoreSize += (store.getMemStoreSize().getDataSize()
+              + store.getMemStoreSize().getHeapSize() + store.getMemStoreSize().getOffHeapSize());
             mt.storeFileSize += store.getStorefilesSize();
             mt.referenceFileCount += store.getNumReferenceFiles();
-            mt.maxStoreFileAge = Math.max(mt.maxStoreFileAge, store.getMaxStoreFileAge().getAsLong());
-            mt.minStoreFileAge = Math.min(mt.minStoreFileAge, store.getMinStoreFileAge().getAsLong());
-            mt.totalStoreFileAge = (long)store.getAvgStoreFileAge().getAsDouble() *
-              store.getStorefilesCount();
+            mt.maxStoreFileAge =
+              Math.max(mt.maxStoreFileAge, store.getMaxStoreFileAge().getAsLong());
+            mt.minStoreFileAge =
+              Math.min(mt.minStoreFileAge, store.getMinStoreFileAge().getAsLong());
+            mt.totalStoreFileAge =
+              (long) store.getAvgStoreFileAge().getAsDouble() * store.getStorefilesCount();
             mt.storeCount += 1;
+            memstoreReadCount += store.getGetRequestsCountFromMemstore();
+            fileReadCount += store.getGetRequestsCountFromFile();
+            mt.storeMemstoreGetCount.putIfAbsent(familyName, memstoreReadCount);
+            mt.storeFileGetCount.putIfAbsent(familyName, fileReadCount);
           }
+
           mt.regionCount += 1;
           mt.readRequestCount += r.getReadRequestsCount();
-          mt.filteredReadRequestCount += getFilteredReadRequestCount(tbl.getNameAsString());
+          mt.filteredReadRequestCount += r.getFilteredReadRequestsCount();
```

Review comment: This was wrong, and it is a simple change, so I thought it was better to make it here. If you prefer, I can make the change in a separate JIRA.
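The counters read via `.sum()` and bumped via `.increment()` in the HStore hunk above follow the `java.util.concurrent.atomic.LongAdder` pattern: writes stay cheap on the hot read path, and the sum is computed only when metrics are published. A minimal sketch of that pattern, with illustrative names rather than HBase's actual fields:

```java
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch of per-store read counters (illustrative, not HBase's
// actual implementation). LongAdder spreads contended increments across
// internal cells, so concurrent readers pay far less than with AtomicLong;
// sum() folds the cells together when the metric is reported.
class StoreReadCounters {
    private final LongAdder getRequestsFromMemstore = new LongAdder();
    private final LongAdder getRequestsFromFile = new LongAdder();

    void recordMemstoreHit() { getRequestsFromMemstore.increment(); }
    void recordFileHit()     { getRequestsFromFile.increment(); }

    long memstoreReadCount() { return getRequestsFromMemstore.sum(); }
    long fileReadCount()     { return getRequestsFromFile.sum(); }
}
```

Note that `sum()` is not a point-in-time snapshot under concurrent updates, which is acceptable for monitoring counters like these.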
[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Store the locations of meta regions in master local store
Apache9 commented on a change in pull request #1746: URL: https://github.com/apache/hbase/pull/1746#discussion_r430369088 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java ## @@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException { // Start the Assignment Thread startAssignmentThread(); -// load meta region state -ZKWatcher zkw = master.getZooKeeper(); -// it could be null in some tests -if (zkw != null) { - RegionState regionState = MetaTableLocator.getMetaRegionState(zkw); - RegionStateNode regionNode = - regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO); - regionNode.lock(); - try { -regionNode.setRegionLocation(regionState.getServerName()); -regionNode.setState(regionState.getState()); -if (regionNode.getProcedure() != null) { - regionNode.getProcedure().stateLoaded(this, regionNode); +// load meta region states. +// notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new +// replicas, or remove excess replicas. 
+try (RegionScanner scanner = + localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) { + List cells = new ArrayList<>(); + boolean moreRows; + do { +moreRows = scanner.next(cells); +if (cells.isEmpty()) { + continue; } -setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN); - } finally { -regionNode.unlock(); +Result result = Result.create(cells); +cells.clear(); +RegionStateStore + .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> { +RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo); +regionNode.lock(); +try { + regionNode.setState(state); + regionNode.setLastHost(lastHost); + regionNode.setRegionLocation(regionLocation); + regionNode.setOpenSeqNum(openSeqNum); + if (regionNode.getProcedure() != null) { +regionNode.getProcedure().stateLoaded(this, regionNode); + } + if (RegionReplicaUtil.isDefaultReplica(regionInfo)) { +setMetaAssigned(regionInfo, state == State.OPEN); + } +} finally { + regionNode.unlock(); +} +if (regionInfo.isFirst()) { + // for compatibility, mirror the meta region state to zookeeper Review comment: For communication with old clients, they will load the meta location from zookeeper. 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java ## @@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException { // Start the Assignment Thread startAssignmentThread(); -// load meta region state -ZKWatcher zkw = master.getZooKeeper(); -// it could be null in some tests -if (zkw != null) { - RegionState regionState = MetaTableLocator.getMetaRegionState(zkw); - RegionStateNode regionNode = - regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO); - regionNode.lock(); - try { -regionNode.setRegionLocation(regionState.getServerName()); -regionNode.setState(regionState.getState()); -if (regionNode.getProcedure() != null) { - regionNode.getProcedure().stateLoaded(this, regionNode); +// load meta region states. +// notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new +// replicas, or remove excess replicas. +try (RegionScanner scanner = + localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) { + List cells = new ArrayList<>(); + boolean moreRows; + do { +moreRows = scanner.next(cells); +if (cells.isEmpty()) { + continue; } -setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN); - } finally { -regionNode.unlock(); +Result result = Result.create(cells); +cells.clear(); +RegionStateStore + .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> { +RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo); +regionNode.lock(); +try { + regionNode.setState(state); + regionNode.setLastHost(lastHost); Review comment: It's just because we do not have these fields in the protobuf message which is stored on zookeeper... ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java ## @@ -43,73 +49,103 @@ private
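The load loop in the AssignmentManager hunk above drains the scanner batch-by-batch: call `next(cells)`, skip empty batches, turn each non-empty batch into a `Result`, clear the buffer, and stop when `next` reports no more rows. A generic sketch of that loop shape (the `BatchScanner` interface is a stand-in for illustration, not an HBase type):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the do/while scan-drain pattern used above.
interface BatchScanner<T> {
    boolean next(List<T> out); // fills out; returns true if more rows remain
}

class ScanDrain {
    static <T> int drain(BatchScanner<T> scanner) {
        List<T> batch = new ArrayList<>();
        int rows = 0;
        boolean moreRows;
        do {
            moreRows = scanner.next(batch);
            if (batch.isEmpty()) {
                continue; // nothing materialized; re-check moreRows
            }
            rows++;        // process one "Result" per non-empty batch
            batch.clear(); // reuse the buffer, as the original loop does
        } while (moreRows);
        return rows;
    }
}
```

The `continue` matters: an empty batch does not mean the scan is finished, so the loop must fall through to the `moreRows` check rather than process an empty `Result`.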
[GitHub] [hbase] Apache9 opened a new pull request #1787: HBASE-24437 Addendum just start mini dfs cluster, and make the log ro…
Apache9 opened a new pull request #1787: URL: https://github.com/apache/hbase/pull/1787 …ll more robust
[GitHub] [hbase] infraio merged pull request #1770: HBASE-24416 RegionNormalizer spliting region should not be limited by…
infraio merged pull request #1770: URL: https://github.com/apache/hbase/pull/1770
[GitHub] [hbase] huaxiangsun opened a new pull request #1777: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
huaxiangsun opened a new pull request #1777: URL: https://github.com/apache/hbase/pull/1777 …ually over laps. (#1741) Signed-off-by: stack
[GitHub] [hbase] Apache-HBase commented on pull request #1783: HBASE-24436 The store file open and close thread pool should be share…
Apache-HBase commented on pull request #1783: URL: https://github.com/apache/hbase/pull/1783#issuecomment-634165067
[GitHub] [hbase] huaxiangsun opened a new pull request #1785: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
huaxiangsun opened a new pull request #1785: URL: https://github.com/apache/hbase/pull/1785 …ually over laps. (#1741) Signed-off-by: stack
[GitHub] [hbase] apurtell commented on a change in pull request #1748: HBASE-22700 (addendum): Clarify ZK session timeout doc
apurtell commented on a change in pull request #1748: URL: https://github.com/apache/hbase/pull/1748#discussion_r430742337 ## File path: src/main/asciidoc/_chapters/schema_design.adoc ## @@ -1142,7 +1142,11 @@ Disable Nagle’s algorithm. Delayed ACKs can add up to ~200ms to RPC round trip Detect regionserver failure as fast as reasonable. Set the following parameters: * In `hbase-site.xml`, set `zookeeper.session.timeout` to 30 seconds or less to bound failure detection (20-30 seconds is a good start). -- Notice: the `sessionTimeout` of zookeeper is limited between 2 times and 20 times the `tickTime`(the basic time unit in milliseconds used by ZooKeeper.the default value is 2000ms.It is used to do heartbeats and the minimum session timeout will be twice the tickTime). +- Note: Zookeeper clients negotiate a session timeout with the server during client init. Server enforces this timeout to be in the +range [`minSessionTimeout`, `maxSessionTimeout`] and both these timeouts are configurable in Zookeeper service configuration. Review comment: What are these? Milliseconds? We have to assume it otherwise, so why not mention it?
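On the units question in the review comment above: ZooKeeper's `tickTime`, `minSessionTimeout`, and `maxSessionTimeout` are all expressed in milliseconds, and when the min/max are not set explicitly they default to 2× and 20× `tickTime` (with the default `tickTime` of 2000 ms, that is a [4 s, 40 s] window). A sketch of the negotiation clamp the doc describes (illustrative only, not ZooKeeper's actual code):

```java
// Illustrative sketch of how a ZooKeeper server bounds a client's requested
// session timeout. All values are in milliseconds. With no explicit
// configuration, min = 2 * tickTime and max = 20 * tickTime.
class SessionTimeoutNegotiation {
    static int negotiate(int requestedMs, int tickTimeMs) {
        int min = 2 * tickTimeMs;   // default minSessionTimeout
        int max = 20 * tickTimeMs;  // default maxSessionTimeout
        return Math.max(min, Math.min(max, requestedMs));
    }
}
```

With the defaults, the 30-second `zookeeper.session.timeout` recommended in the doc fits inside the window and is honored as requested.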
[GitHub] [hbase] Apache9 commented on pull request #1774: HBASE-24389 Introduce a new master rpc service to locate meta region through root region
Apache9 commented on pull request #1774: URL: https://github.com/apache/hbase/pull/1774#issuecomment-633619575 Based on #1746. This still has some problems that need to be fixed in this issue, and some that should be handled in follow-on issues. The main problem is that we had a TEST_SKIP_REPORTING_TRANSITION in the past, but now meta assignment must go through master, so we need to investigate the related tests (TestRegionServerNoMaster and some region replica tests) more to find a solution.
[GitHub] [hbase] huaxiangsun commented on pull request #1776: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
huaxiangsun commented on pull request #1776: URL: https://github.com/apache/hbase/pull/1776#issuecomment-634162602 The test failure is in the precommit "yetus jdk8 Hadoop2 checks": org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems.testFlushAndCompact, failing for the past 1 build (since #1), took 17 sec, with:

```
java.lang.AssertionError: Waiting timed out after [15,000] msec
  at org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems.testFlushAndCompact(TestLocalRegionOnTwoFileSystems.java:178)
```

This is a flaky test; I created HBASE-24437 to track it.
[GitHub] [hbase] bharathv opened a new pull request #1778: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
bharathv opened a new pull request #1778: URL: https://github.com/apache/hbase/pull/1778 …l get lock too Signed-off-by: Bharath Vissapragada
[GitHub] [hbase] Apache-HBase commented on pull request #1787: HBASE-24437 Addendum just start mini dfs cluster, and make the log ro…
Apache-HBase commented on pull request #1787: URL: https://github.com/apache/hbase/pull/1787#issuecomment-634397535

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 47s | master passed |
| +1 :green_heart: | checkstyle | 1m 5s | master passed |
| +1 :green_heart: | spotbugs | 2m 1s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 19s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 4s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 8s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 8s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | | 32m 21s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1787/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/1787 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux 4e5e48b8cb82 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / a9205f8f4d |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1787/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #1774: HBASE-24389 Introduce a new master rpc service to locate meta region through root region
Apache-HBase commented on pull request #1774: URL: https://github.com/apache/hbase/pull/1774#issuecomment-633630473
[GitHub] [hbase] bharathv opened a new pull request #1779: HBASE-24423 No need to get lock in canSplit because hasReferences wil…
bharathv opened a new pull request #1779: URL: https://github.com/apache/hbase/pull/1779 …l get lock too Signed-off-by: Bharath Vissapragada
[GitHub] [hbase] Joseph295 opened a new pull request #1783: HBASE-24436 The store file open and close thread pool should be share…
Joseph295 opened a new pull request #1783: URL: https://github.com/apache/hbase/pull/1783 …d at the region level
[GitHub] [hbase] javierluca opened a new pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
javierluca opened a new pull request #1781: URL: https://github.com/apache/hbase/pull/1781 https://issues.apache.org/jira/browse/HBASE-24435 Conflicts: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java Cherry-picked https://github.com/apache/hbase/commit/71ed7033675149956de855b6782e1e22fc908dc8 with just a few adjustments to adapt it to current branch-1.
[jira] [Resolved] (HBASE-24388) Store the locations of meta regions in master local store
[ https://issues.apache.org/jira/browse/HBASE-24388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang resolved HBASE-24388.
Hadoop Flags: Reviewed
Resolution: Fixed

Merged to branch HBASE-11288.spliitable-meta. Thanks [~stack] and [~zghao] for reviewing.

> Store the locations of meta regions in master local store
> Key: HBASE-24388
> URL: https://issues.apache.org/jira/browse/HBASE-24388
> Project: HBase
> Issue Type: Sub-task
> Components: meta, Region Assignment
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: HBASE-11288

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] bharathv commented on a change in pull request #1775: fix building cpp-example DemoClient
bharathv commented on a change in pull request #1775: URL: https://github.com/apache/hbase/pull/1775#discussion_r430167887

## File path: hbase-examples/src/main/cpp/DemoClient.cpp ##

```
@@ -79,15 +78,15 @@ main(int argc, char** argv)
     return -1;
   }
   bool isFramed = false;
-  boost::shared_ptr<TSocket> socket(new TSocket(argv[1], boost::lexical_cast<int>(argv[2])));
-  boost::shared_ptr<TTransport> transport;
+  std::shared_ptr<TSocket> socket(new TSocket(argv[1], std::stoi(argv[2])));
```

Review comment: nit: switch to make_shared while you are here?

## File path: hbase-examples/src/main/cpp/DemoClient.cpp ##

```
@@ -79,15 +78,15 @@ main(int argc, char** argv)
     return -1;
   }
   bool isFramed = false;
-  boost::shared_ptr<TSocket> socket(new TSocket(argv[1], boost::lexical_cast<int>(argv[2])));
-  boost::shared_ptr<TTransport> transport;
+  std::shared_ptr<TSocket> socket(std::make_shared<TSocket>(argv[1], std::stoi(argv[2])));
```

Review comment: I think you mean socket = make_shared. With your code, I think it calls into the move constructor.

(Note: the template arguments in the hunks above were stripped during extraction; they are reconstructed here from the Thrift types in DemoClient.cpp.)
[GitHub] [hbase] Apache-HBase commented on pull request #1784: HBASE-24428 : Update compaction priority for recently split daughter …
Apache-HBase commented on pull request #1784: URL: https://github.com/apache/hbase/pull/1784#issuecomment-634180775
[GitHub] [hbase] alexandermyasnikov commented on a change in pull request #1775: fix building cpp-example DemoClient
alexandermyasnikov commented on a change in pull request #1775: URL: https://github.com/apache/hbase/pull/1775#discussion_r430204749

## File path: hbase-examples/src/main/cpp/DemoClient.cpp ##

```
@@ -79,15 +78,15 @@ main(int argc, char** argv)
     return -1;
   }
   bool isFramed = false;
-  boost::shared_ptr<TSocket> socket(new TSocket(argv[1], boost::lexical_cast<int>(argv[2])));
-  boost::shared_ptr<TTransport> transport;
+  std::shared_ptr<TSocket> socket(new TSocket(argv[1], std::stoi(argv[2])));
```

Review comment: ok
[GitHub] [hbase] sguggilam commented on a change in pull request #1755: HBASE-24069 Provide an ExponentialBackOffPolicy sleep between failed …
sguggilam commented on a change in pull request #1755: URL: https://github.com/apache/hbase/pull/1755#discussion_r430780670 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -1997,80 +2007,76 @@ private void unassign(final HRegionInfo region, } try { // Send CLOSE RPC -if (serverManager.sendRegionClose(server, region, - versionOfClosingNode, dest, transitionInZK)) { - LOG.debug("Sent CLOSE to " + server + " for region " + -region.getRegionNameAsString()); +if (serverManager.sendRegionClose(server, region, versionOfClosingNode, dest, + transitionInZK)) { + LOG.debug("Sent CLOSE to " + server + " for region " + region.getRegionNameAsString()); if (useZKForAssignment && !transitionInZK && state != null) { // Retry to make sure the region is // closed so as to avoid double assignment. -unassign(region, state, versionOfClosingNode, - dest, transitionInZK, src); +unassign(region, state, versionOfClosingNode, dest, transitionInZK, src); } return; } // This never happens. Currently regionserver close always return true. // Todo; this can now happen (0.96) if there is an exception in a coprocessor -LOG.warn("Server " + server + " region CLOSE RPC returned false for " + - region.getRegionNameAsString()); +LOG.warn("Server " + server + " region CLOSE RPC returned false for " ++ region.getRegionNameAsString()); } catch (Throwable t) { long sleepTime = 0; Configuration conf = this.server.getConfiguration(); if (t instanceof RemoteException) { - t = ((RemoteException)t).unwrapRemoteException(); + t = ((RemoteException) t).unwrapRemoteException(); } boolean logRetries = true; -if (t instanceof RegionServerAbortedException -|| t instanceof RegionServerStoppedException +if (t instanceof RegionServerAbortedException || t instanceof RegionServerStoppedException || t instanceof ServerNotRunningYetException) { // RS is aborting or stopping, we cannot offline the region since the region may need - // to do WAL recovery. 
Until we see the RS expiration, we should retry. + // to do WAL recovery. Until we see the RS expiration, we should retry. sleepTime = 1L + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); } else if (t instanceof NotServingRegionException) { - LOG.debug("Offline " + region.getRegionNameAsString() -+ ", it's not any more on " + server, t); + LOG.debug( +"Offline " + region.getRegionNameAsString() + ", it's not any more on " + server, t); if (transitionInZK) { deleteClosingOrClosedNode(region, server); } if (state != null) { regionOffline(region); } return; -} else if ((t instanceof FailedServerException) || (state != null && -t instanceof RegionAlreadyInTransitionException)) { - if (t instanceof FailedServerException) { -sleepTime = 1L + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, +} else if ((t instanceof FailedServerException) Review comment: Yes, there is no change in this section ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -2079,16 +2085,29 @@ private void unassign(final HRegionInfo region, } if (logRetries) { - LOG.info("Server " + server + " returned " + t + " for " -+ region.getRegionNameAsString() + ", try=" + i -+ " of " + this.maximumAttempts, t); + LOG.info("Server " + server + " returned " + t + " for " + region.getRegionNameAsString() + + ", try=" + i + " of " + this.maximumAttempts, +t); // Presume retry or server will expire. } } } -// Run out of attempts -if (state != null) { - regionStates.updateRegionState(region, State.FAILED_CLOSE); + +long sleepTime = backoffPolicy.getBackoffTime(retryConfig, Review comment: The idea is to use the exponential backoff configs such as "hbase.assignment.retry.sleep.initial" for backoff between retries, since the retry attempts can be exhausted pretty fast in cases where the server is loaded/busy and cannot really even acknowledge the region close request from the master.
We need to use them to schedule the retry at a later point in a different thread asynchronously. The existing sleepTime is not really meant for this use case and does not read any exponential backoff configs. ## File path:
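The review above describes capping retries with an exponentially growing sleep (an initial sleep that grows per attempt up to a maximum). A minimal sketch of that calculation follows; the class name, method signature, and default values here are illustrative, not HBase's actual `BackoffPolicy` API:

```java
// Illustrative sketch of an exponential backoff calculation, similar in
// spirit to an ExponentialBackOffPolicy between failed unassign retries.
// All names and defaults here are made up for the example.
public class ExponentialBackoff {
    /**
     * @param initialSleepMs base sleep before the first retry
     * @param multiplier     growth factor applied per attempt
     * @param maxSleepMs     cap so a retry never waits longer than this
     * @param attempt        zero-based retry attempt number
     * @return how long to sleep before this retry, in milliseconds
     */
    public static long getBackoffTime(long initialSleepMs, double multiplier,
                                      long maxSleepMs, int attempt) {
        double sleep = initialSleepMs * Math.pow(multiplier, attempt);
        return Math.min((long) sleep, maxSleepMs);
    }

    public static void main(String[] args) {
        // With initial=100ms and multiplier=2: 100, 200, 400, ... capped at 60s.
        for (int i = 0; i < 12; i++) {
            System.out.println("attempt " + i + " -> "
                + getBackoffTime(100, 2.0, 60000, i) + " ms");
        }
    }
}
```

Scheduling the computed sleep on a separate thread (rather than blocking the caller) is what the comment argues for; the sketch only covers the delay computation itself.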
[GitHub] [hbase] Apache-HBase commented on pull request #1770: HBASE-24416 RegionNormalizer spliting region should not be limited by…
Apache-HBase commented on pull request #1770: URL: https://github.com/apache/hbase/pull/1770#issuecomment-633598752 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1761: HBASE-21406 "status 'replication'" should not show SINK if the cluste…
Apache-HBase commented on pull request #1761: URL: https://github.com/apache/hbase/pull/1761#issuecomment-633696141
[GitHub] [hbase] ddupg commented on pull request #1770: HBASE-24416 RegionNormalizer spliting region should not be limited by…
ddupg commented on pull request #1770: URL: https://github.com/apache/hbase/pull/1770#issuecomment-633773975
[GitHub] [hbase] Apache-HBase commented on pull request #1773: HBASE-24427 HStore.add log format error
Apache-HBase commented on pull request #1773: URL: https://github.com/apache/hbase/pull/1773#issuecomment-633609182
[GitHub] [hbase] apurtell commented on a change in pull request #1783: HBASE-24436 The store file open and close thread pool should be share…
apurtell commented on a change in pull request #1783: URL: https://github.com/apache/hbase/pull/1783#discussion_r430737665 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java ## @@ -695,6 +695,7 @@ void sawNoSuchFamily() { private RegionCoprocessorHost coprocessorHost; private TableDescriptor htableDescriptor = null; + private ThreadPoolExecutor storeFileOpenAndCloseThreadPool; Review comment: Why is this shared at the Region level? Shouldn't it be at the Store level (in HStore)? What does sharing at the region level gain us? Are you attempting to evenly round-robin store open work over all opening stores in the region? Just sharing an executor at region level won't do this. If the underlying stores are skewed, the order in which runnables are submitted to the executor will share the skew.
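The reviewer's point about skew can be seen in a small demo: a shared executor runs tasks in submission order, so if one store submits many more file-open tasks than another, the executor inherits that skew instead of round-robining across stores. The store/file names below are made up for the illustration, and the single-threaded executor just makes the FIFO ordering deterministic:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Demonstrates that a shared executor preserves submission skew:
// all of "store A"'s tasks run before "store B"'s single task,
// because they were submitted first.
public class SharedExecutorSkew {
    public static List<String> runSkewedSubmissions() throws InterruptedException {
        List<String> executionOrder = Collections.synchronizedList(new ArrayList<>());
        ExecutorService shared = Executors.newSingleThreadExecutor();
        // Store A submits three file-open tasks before store B submits its one.
        for (int i = 0; i < 3; i++) {
            final int n = i;
            shared.submit(() -> { executionOrder.add("storeA-file" + n); });
        }
        shared.submit(() -> { executionOrder.add("storeB-file0"); });
        shared.shutdown();
        shared.awaitTermination(10, TimeUnit.SECONDS);
        return executionOrder;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSkewedSubmissions());
    }
}
```

Interleaving work fairly across stores would require submitting tasks round-robin (or using per-store pools), not merely sharing one executor.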
[GitHub] [hbase] virajjasani opened a new pull request #1784: HBASE-24428 : Update compaction priority for recently split daughter …
virajjasani opened a new pull request #1784: URL: https://github.com/apache/hbase/pull/1784 …regions
[GitHub] [hbase] HorizonNet commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics
HorizonNet commented on pull request #1781: URL: https://github.com/apache/hbase/pull/1781#issuecomment-633959507 Should the conflicts line be part of the commit message?
[GitHub] [hbase-native-client] bharathv commented on a change in pull request #2: HBASE-24400: Fixup cmake infrastructure to allow dependencies to be built locally
bharathv commented on a change in pull request #2: URL: https://github.com/apache/hbase-native-client/pull/2#discussion_r430720072 ## File path: cmake/DownloadFolly.cmake ## @@ -0,0 +1,39 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +## Download facebook's folly library. +## SOURCE_DIR is typically the cmake source directory +## BINARY_DIR is the build directory, typically 'build' + +function(download_folly SOURCE_DIR BINARY_DIR) + + + ExternalProject_Add( + facebook-folly-proj + GIT_REPOSITORY "https://github.com/facebook/folly.git; + GIT_TAG "v2020.05.18.00" Review comment: Agreed. ## File path: cmake/DownloadFolly.cmake ## @@ -0,0 +1,39 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +## Download facebook's folly library. +## SOURCE_DIR is typically the cmake source directory +## BINARY_DIR is the build directory, typically 'build' + +function(download_folly SOURCE_DIR BINARY_DIR) + + + ExternalProject_Add( + facebook-folly-proj + GIT_REPOSITORY "https://github.com/facebook/folly.git; + GIT_TAG "v2020.05.18.00" + SOURCE_DIR "${BINARY_DIR}/dependencies/facebook-folly-proj-src" + CMAKE_ARGS ${PASSTHROUGH_CMAKE_ARGS} Review comment: > Yeah version is an issue I'd like to address with cmake getting a known tag. I manually created a version.h for the time being. I started working on a solution for this, but then the size of the PR would grow even more. > Finally, RE the copy_version [1] agree it's reliant on a relative path that probably won't exist for most people. A more sustainable solution would be to have a variable specifying an hbase release target, and we generate the version file from the tag. Thanks. To keep things simple, IMHO it is also reasonable to pull the version.h from hbase source in the parent directory (basically the current approach). Just that mvn compilation in hbase project is not generating that header file. Let me know if you think this is a reasonable approach and I can fix it in a separate patch and you can integrate it with your change (that'll keep your changes simple for now). > My hope is to leverage an internal java project that runs a mini cluster and relies on maven to build a jar we can run for integration tests. Right now there is a relative path expecting class-path information to be valid. Agreed. 
> libfmt issues are one I think I need to address. In the case of gtest did you use libgtest-dev? Yes, I was using the dev library. I think this is a known problem, see the second answer on this page: https://stackoverflow.com/questions/13513905/how-to-set-up-googletest-as-a-shared-library-on-linux > but also allow the client to be used across a variety of systems without regard to dependencies on those machines. +1. I think an ideal end state would be a prebuilt toolchain for various commonly used platforms that we just download on the fly during compilation.
[jira] [Resolved] (HBASE-24437) Flaky test, TestLocalRegionOnTwoFileSystems#testFlushAndCompact
[ https://issues.apache.org/jira/browse/HBASE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-24437. --- Hadoop Flags: Reviewed Resolution: Fixed Pushed the addendum to branch-2.3+. Thanks [~stack] for reviewing. > Flaky test, TestLocalRegionOnTwoFileSystems#testFlushAndCompact > --- > > Key: HBASE-24437 > URL: https://issues.apache.org/jira/browse/HBASE-24437 > Project: HBase > Issue Type: Bug > Components: meta, test >Reporter: Huaxiang Sun >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > Attachments: > 0001-HBASE-24437-Flaky-test-TestLocalRegionOnTwoFileSyste.patch > > > {code:java} > precommit checks / yetus jdk8 Hadoop2 checks / > org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems.testFlushAndCompactFailing > for the past 1 build (Since #1 )Took 17 sec.Error MessageWaiting timed out > after [15,000] msecStacktracejava.lang.AssertionError: Waiting timed out > after [15,000] msec > at > org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems.testFlushAndCompact(TestLocalRegionOnTwoFileSystems.java:178) > Standard OutputFormatting using clusterid: testClusterID > Standard Error2020-05-26 00:26:29,624 INFO [main] > hbase.HBaseClassTestRule(94): Test class > org.apache.hadoop.hbase.master.store.TestLocalRegionOnTwoFileSystems timeout: > 13 mins > 2020-05-26 00:26:30,158 DEBUG [main] hbase.HBaseTestingUtility(348): Setting > hbase.rootdir to > /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1776/yetus-jdk8-hadoop2-check/src/hbase-server/target/test-data/f6a36278-321e-2b82-017a-bbe71410a0cf > 2020-05-26 00:26:30,231 INFO [Time-limited test] > hbase.HBaseTestingUtility(1114): Starting up minicluster with option: > StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, > rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, > createRootDir=false, createWALDir=false} > 2020-05-26 00:26:30,232 INFO 
[Time-limited test] > hbase.HBaseZKTestingUtility(83): Created new mini-cluster data directory: > /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1776/yetus-jdk8-hadoop2-check/src/hbase-server/target/test-data/f6a36278-321e-2b82-017a-bbe71410a0cf/cluster_ddea42b7-f6f1-92fe-0685-674774d0fce2, > deleteOnExit=true > 2020-05-26 00:26:30,233 INFO [Time-limited test] > hbase.HBaseTestingUtility(1128): STARTING DFS {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 opened a new pull request #1774: HBASE-24389 Introduce a new master rpc service to locate meta region through root region
Apache9 opened a new pull request #1774: URL: https://github.com/apache/hbase/pull/1774
[GitHub] [hbase] infraio commented on a change in pull request #1780: HBASE-24433 Add 2.2.5 to download page
infraio commented on a change in pull request #1780: URL: https://github.com/apache/hbase/pull/1780#discussion_r430250350 ## File path: src/site/xdoc/downloads.xml ## @@ -45,24 +45,24 @@ under the License. -2.2.4 +2.2.5 -2020/03/11 +2020/05/21 -https://downloads.apache.org/hbase/2.2.4/api_compare_2.2.4RC0_to_2.2.3.html;>2.2.4 vs 2.2.3 +https://apache.org/hbase/2.2.5/api_compare_2.2.5RC0_to_2.2.4.html;>2.2.5 vs 2.2.4 Review comment: Yes. This may need some time to work. ## File path: src/site/xdoc/downloads.xml ## @@ -45,24 +45,24 @@ under the License. -2.2.4 +2.2.5 -2020/03/11 +2020/05/21 -https://downloads.apache.org/hbase/2.2.4/api_compare_2.2.4RC0_to_2.2.3.html;>2.2.4 vs 2.2.3 +https://apache.org/hbase/2.2.5/api_compare_2.2.5RC0_to_2.2.4.html;>2.2.5 vs 2.2.4 Review comment: My mistake... It exists now.
[GitHub] [hbase] Apache-HBase commented on pull request #1780: HBASE-24433 Add 2.2.5 to download page
Apache-HBase commented on pull request #1780: URL: https://github.com/apache/hbase/pull/1780#issuecomment-633868485
[GitHub] [hbase] huaxiangsun merged pull request #1776: Backport: HBASE-24379 CatalogJanitor misreports region holes when there are act…
huaxiangsun merged pull request #1776: URL: https://github.com/apache/hbase/pull/1776
[jira] [Commented] (HBASE-24437) Flaky test, TestLocalRegionOnTwoFileSystems#testFlushAndCompact
[ https://issues.apache.org/jira/browse/HBASE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117282#comment-17117282 ] Hudson commented on HBASE-24437: Results for branch branch-2 [build #2677 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Flaky test, TestLocalRegionOnTwoFileSystems#testFlushAndCompact
[GitHub] [hbase] Apache9 commented on pull request #1746: HBASE-24388 Store the locations of meta regions in master local store
Apache9 commented on pull request #1746: URL: https://github.com/apache/hbase/pull/1746#issuecomment-634105560 OK, good, all green. Any other concerns?
[jira] [Commented] (HBASE-24423) No need to get lock in canSplit because hasReferences will get lock too
[ https://issues.apache.org/jira/browse/HBASE-24423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117283#comment-17117283 ] Hudson commented on HBASE-24423: Results for branch branch-2 [build #2677 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > No need to get lock in canSplit because hasReferences will get lock too > --- > > Key: HBASE-24423 > URL: https://issues.apache.org/jira/browse/HBASE-24423 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Minor > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0 > >
[jira] [Commented] (HBASE-24379) CatalogJanitor misreports region holes when there are actually overlaps.
[ https://issues.apache.org/jira/browse/HBASE-24379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117281#comment-17117281 ] Hudson commented on HBASE-24379: Results for branch branch-2 [build #2677 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2677/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > CatalogJanitor misreports region holes when there are actually overlaps. > - > > Key: HBASE-24379 > URL: https://issues.apache.org/jira/browse/HBASE-24379 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > We found a case where there is actually an overlap, but a region hole is > reported. > r1: [aa, bb), r2: [cc, dd), r3: [a, cc) > > In this case, there are only overlaps from "a" to "d". However, hole (r1, r2) > is reported.