[GitHub] [hbase] Apache-HBase commented on pull request #1916: HBASE-24546 CloneSnapshotProcedure unlimited retry
Apache-HBase commented on pull request #1916: URL: https://github.com/apache/hbase/pull/1916#issuecomment-654051757 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 11s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.2 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 15s | branch-2.2 passed | | +1 :green_heart: | compile | 0m 56s | branch-2.2 passed | | +1 :green_heart: | checkstyle | 1m 19s | branch-2.2 passed | | +1 :green_heart: | shadedjars | 4m 0s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | branch-2.2 passed | | +0 :ok: | spotbugs | 3m 18s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 16s | branch-2.2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 41s | the patch passed | | +1 :green_heart: | compile | 0m 56s | the patch passed | | +1 :green_heart: | javac | 0m 56s | the patch passed | | +1 :green_heart: | checkstyle | 1m 18s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 4m 0s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 25m 12s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1. 
| | +1 :green_heart: | javadoc | 0m 35s | the patch passed | | +1 :green_heart: | findbugs | 3m 17s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 179m 25s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 243m 29s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1916/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1916 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux ebc0fd97eb30 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1916/out/precommit/personality/provided.sh | | git revision | branch-2.2 / 4be5fe3154 | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1916/3/testReport/ | | Max. process+thread count | 4531 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1916/3/console | | versions | git=2.11.0 maven=(2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2022: HBASE-24665 all wal of RegionGroupingProvider together roll
Apache-HBase commented on pull request #2022: URL: https://github.com/apache/hbase/pull/2022#issuecomment-654059307 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 51s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 56s | branch-2.3 passed | | +1 :green_heart: | compile | 1m 27s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 8m 17s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 51s | hbase-server in branch-2.3 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 20s | the patch passed | | +1 :green_heart: | compile | 1m 25s | the patch passed | | +1 :green_heart: | javac | 1m 25s | the patch passed | | +1 :green_heart: | shadedjars | 8m 6s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 54s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 133m 41s | hbase-server in the patch passed. 
| | | | 169m 4s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2022 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f38076aaf57c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 9af91d4e1f | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/testReport/ | | Max. process+thread count | 4380 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2022: HBASE-24665 all wal of RegionGroupingProvider together roll
Apache-HBase commented on pull request #2022: URL: https://github.com/apache/hbase/pull/2022#issuecomment-654059767 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 54s | branch-2.3 passed | | +1 :green_heart: | compile | 1m 11s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 6m 46s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 9s | the patch passed | | +1 :green_heart: | compile | 1m 9s | the patch passed | | +1 :green_heart: | javac | 1m 9s | the patch passed | | +1 :green_heart: | shadedjars | 6m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 40s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 144m 27s | hbase-server in the patch passed. | | | | 171m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2022 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux cc4040d9ae62 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 9af91d4e1f | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/testReport/ | | Max. 
process+thread count | 3538 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2022/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1900: HBASE-24546 CloneSnapshotProcedure unlimited retry
Apache-HBase commented on pull request #1900: URL: https://github.com/apache/hbase/pull/1900#issuecomment-654061353 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 47s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 10s | master passed | | +1 :green_heart: | compile | 1m 19s | master passed | | +1 :green_heart: | shadedjars | 6m 34s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 48s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 49s | the patch passed | | +1 :green_heart: | compile | 1m 15s | the patch passed | | +1 :green_heart: | javac | 1m 15s | the patch passed | | +1 :green_heart: | shadedjars | 6m 47s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 49s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 223m 49s | hbase-server in the patch passed. 
| | | | 254m 55s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1900 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 6a2aef1b59ff 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fe2ae809d1 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/testReport/ | | Max. process+thread count | 3264 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2023: HBASE-24665 all wal of RegionGroupingProvider together roll
Apache-HBase commented on pull request #2023: URL: https://github.com/apache/hbase/pull/2023#issuecomment-654063612 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 52s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-1.4 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 34s | branch-1.4 passed | | +1 :green_heart: | compile | 0m 50s | branch-1.4 passed with JDK v1.8.0_252 | | +1 :green_heart: | compile | 1m 2s | branch-1.4 passed with JDK v1.7.0_262 | | +1 :green_heart: | checkstyle | 2m 11s | branch-1.4 passed | | +1 :green_heart: | shadedjars | 3m 36s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | branch-1.4 passed with JDK v1.8.0_252 | | +1 :green_heart: | javadoc | 0m 49s | branch-1.4 passed with JDK v1.7.0_262 | | +0 :ok: | spotbugs | 3m 46s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 43s | branch-1.4 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 29s | the patch passed | | +1 :green_heart: | compile | 0m 51s | the patch passed with JDK v1.8.0_252 | | +1 :green_heart: | javac | 0m 51s | the patch passed | | +1 :green_heart: | compile | 1m 0s | the patch passed with JDK v1.7.0_262 | | +1 :green_heart: | javac | 1m 0s | the patch passed | | +1 :green_heart: | checkstyle | 2m 3s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 3m 25s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 2m 49s | Patch does not cause any errors with Hadoop 2.7.7. | | +1 :green_heart: | javadoc | 0m 38s | the patch passed with JDK v1.8.0_252 | | +1 :green_heart: | javadoc | 0m 49s | the patch passed with JDK v1.7.0_262 | | +1 :green_heart: | findbugs | 3m 32s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 124m 13s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | The patch does not generate ASF License warnings. 
| | | | 170m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2023/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2023 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 81107a638d46 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-2023/out/precommit/personality/provided.sh | | git revision | branch-1.4 / 1e6594f | | Default Java | 1.7.0_262 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2023/1/testReport/ | | Max. process+thread count | 4169 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2023/1/console | | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1900: HBASE-24546 CloneSnapshotProcedure unlimited retry
Apache-HBase commented on pull request #1900: URL: https://github.com/apache/hbase/pull/1900#issuecomment-654066327 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 47s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 17s | master passed | | +1 :green_heart: | compile | 1m 2s | master passed | | +1 :green_heart: | shadedjars | 6m 14s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 40s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 2s | the patch passed | | +1 :green_heart: | compile | 1m 0s | the patch passed | | +1 :green_heart: | javac | 1m 0s | the patch passed | | +1 :green_heart: | shadedjars | 6m 13s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 238m 6s | hbase-server in the patch passed. | | | | 265m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1900 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c986f5a8a36c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fe2ae809d1 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/testReport/ | | Max. process+thread count | 3705 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1900/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] HorizonNet commented on a change in pull request #2016: HBASE-24653 Show snapshot owner on Master WebUI
HorizonNet commented on a change in pull request #2016: URL: https://github.com/apache/hbase/pull/2016#discussion_r450037665 ## File path: hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon ## @@ -673,6 +675,8 @@ AssignmentManager assignmentManager = master.getAssignmentManager(); <% snapshotTable.getNameAsString() %> <% new Date(snapshotDesc.getCreationTime()) %> +<% snapshotDesc.getOwner() %> +<% snapshotDesc.getTtl() %> Review comment: I think a human-readable format would be better. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
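The review suggestion above, rendering the TTL in a human-readable form instead of printing the raw number, could look roughly like the sketch below. The helper name, the assumption that the TTL is given in seconds, and the FOREVER fallback are all illustrative, not the actual HBase code:

```java
// Hypothetical sketch: format a TTL given in seconds as a human-readable
// string, e.g. 90061 -> "1d 1h 1m 1s". The template could call a helper
// like this instead of emitting snapshotDesc.getTtl() directly.
public class TtlFormatter {
    static String humanReadableTtl(long ttlSeconds) {
        if (ttlSeconds <= 0) {
            return "FOREVER"; // assumption: non-positive TTL means no expiry
        }
        long d = ttlSeconds / 86400;
        long h = (ttlSeconds % 86400) / 3600;
        long m = (ttlSeconds % 3600) / 60;
        long s = ttlSeconds % 60;
        StringBuilder sb = new StringBuilder();
        if (d > 0) sb.append(d).append("d ");
        if (h > 0) sb.append(h).append("h ");
        if (m > 0) sb.append(m).append("m ");
        if (s > 0 || sb.length() == 0) sb.append(s).append("s");
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(humanReadableTtl(90061)); // prints "1d 1h 1m 1s"
    }
}
```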
[jira] [Commented] (HBASE-24671) Add excludefile and designatedfile options to graceful_stop.sh
[ https://issues.apache.org/jira/browse/HBASE-24671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151844#comment-17151844 ] Anoop Sam John commented on HBASE-24671: Can you please add details on how a user can pass these files while calling graceful_stop.sh? That will complete the Release Notes > Add excludefile and designatedfile options to graceful_stop.sh > -- > > Key: HBASE-24671 > URL: https://issues.apache.org/jira/browse/HBASE-24671 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > RegionMover supports the excludefile and designatedfile options now. Integrate > these two options into graceful_stop.sh. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24671) Add excludefile and designatedfile options to graceful_stop.sh
[ https://issues.apache.org/jira/browse/HBASE-24671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-24671: -- Release Note: Add excludefile and designatedfile options to graceful_stop.sh. The designated file lists one hostname per line; these hosts are the unload targets. The exclude file also lists one hostname per line; we do not unload regions to hostnames given in the exclude file. Here is a simple example using graceful_stop.sh with the designatedfile option: ./bin/graceful_stop.sh --maxthreads 4 --designatedfile /path/designatedfile hostname The usage of the excludefile option is the same as the above. If excludefile and designatedfile are used at the same time, the list of RSs is first filtered based on the designatedfile, and then the RSs contained in the excludefile are excluded from that list. Finally, the remaining RSs are the targets of unload. was: Add excludefile and designatedfile options to graceful_stop.sh. The designated file lists one hostname per line; these hosts are the unload targets. The exclude file also lists one hostname per line; we do not unload regions to hostnames given in the exclude file. > Add excludefile and designatedfile options to graceful_stop.sh > -- > > Key: HBASE-24671 > URL: https://issues.apache.org/jira/browse/HBASE-24671 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > RegionMover supports the excludefile and designatedfile options now. Integrate > these two options into graceful_stop.sh. -- This message was sent by Atlassian Jira (v8.3.4#803005)
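To make the release note concrete, here is a hedged sketch of preparing and passing the two files; the hostnames and /tmp paths are made up for illustration, while the flags themselves come from the note above:

```shell
# Hypothetical hostnames and file paths; only the graceful_stop.sh flags
# are taken from the release note. Each file lists one RegionServer
# hostname per line.
printf 'rs1.example.com\nrs2.example.com\n' > /tmp/designatedfile
printf 'rs2.example.com\n' > /tmp/excludefile

# Unload targets = hosts in the designated file minus hosts in the
# exclude file, so regions would move only to rs1.example.com here:
# ./bin/graceful_stop.sh --maxthreads 4 \
#   --designatedfile /tmp/designatedfile \
#   --excludefile /tmp/excludefile \
#   rs-to-stop.example.com
```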
[jira] [Updated] (HBASE-24671) Add excludefile and designatedfile options to graceful_stop.sh
[ https://issues.apache.org/jira/browse/HBASE-24671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Baiqiang Zhao updated HBASE-24671: -- Release Note: Add excludefile and designatedfile options to graceful_stop.sh. The designated file lists one hostname per line; these hosts are the unload targets. The exclude file also lists one hostname per line; we do not unload regions to hostnames given in the exclude file. Here is a simple example using graceful_stop.sh with the designatedfile option: ./bin/graceful_stop.sh --maxthreads 4 --designatedfile /path/designatedfile hostname The usage of the excludefile option is the same as the above. was: Add excludefile and designatedfile options to graceful_stop.sh. The designated file lists one hostname per line; these hosts are the unload targets. The exclude file also lists one hostname per line; we do not unload regions to hostnames given in the exclude file. Here is a simple example using graceful_stop.sh with the designatedfile option: ./bin/graceful_stop.sh --maxthreads 4 --designatedfile /path/designatedfile hostname The usage of the excludefile option is the same as the above. If excludefile and designatedfile are used at the same time, the list of RSs is first filtered based on the designatedfile, and then the RSs contained in the excludefile are excluded from that list. Finally, the remaining RSs are the targets of unload. > Add excludefile and designatedfile options to graceful_stop.sh > -- > > Key: HBASE-24671 > URL: https://issues.apache.org/jira/browse/HBASE-24671 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > RegionMover supports the excludefile and designatedfile options now. Integrate > these two options into graceful_stop.sh. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24671) Add excludefile and designatedfile options to graceful_stop.sh
[ https://issues.apache.org/jira/browse/HBASE-24671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151876#comment-17151876 ] Baiqiang Zhao commented on HBASE-24671: --- Please check the new Release Notes [~anoop.hbase] > Add excludefile and designatedfile options to graceful_stop.sh > -- > > Key: HBASE-24671 > URL: https://issues.apache.org/jira/browse/HBASE-24671 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > RegionMover supports the excludefile and designatedfile options now. Integrate > these two options into graceful_stop.sh. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ddupg opened a new pull request #2025: HBASE-24489 Rewrite TestClusterRestartFailover.test since namespace t…
ddupg opened a new pull request #2025: URL: https://github.com/apache/hbase/pull/2025 …able is gone on master This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2025: HBASE-24489 Rewrite TestClusterRestartFailover.test since namespace t…
Apache-HBase commented on pull request #2025: URL: https://github.com/apache/hbase/pull/2025#issuecomment-654130992 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 14s | master passed | | +1 :green_heart: | checkstyle | 1m 24s | master passed | | +1 :green_heart: | spotbugs | 2m 37s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 12s | the patch passed | | +1 :green_heart: | checkstyle | 1m 13s | hbase-server: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 13m 22s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 45s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. | | | | 39m 53s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2025 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 998c6891a46b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 287f29818f | | Max. process+thread count | 84 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] brfrn169 commented on a change in pull request #1991: HBASE-24650 Change the return types of the new checkAndMutate methods…
brfrn169 commented on a change in pull request #1991: URL: https://github.com/apache/hbase/pull/1991#discussion_r450112215 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java ## @@ -497,7 +497,7 @@ public void run(MultiResponse resp) { "Failed to mutate row: " + Bytes.toStringBinary(mutation.getRow()), ex)); } else { future.complete(respConverter - .apply((Result) multiResp.getResults().get(regionName).result.get(0))); + .apply((RES) multiResp.getResults().get(regionName).result.get(0))); Review comment: Is there any problem here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
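As background for the question above: `RES` is a generic type parameter, and a cast to a type variable is erased at compile time, so it cannot fail at the cast site itself. A minimal standalone sketch of that behavior follows; the names are made up and it is unrelated to the actual RawAsyncTableImpl code:

```java
// Minimal illustration of type erasure: a cast to a type variable is
// unchecked, so it compiles to a cast to Object and "succeeds" at the
// cast site even if the runtime type is wrong; a mismatch only surfaces
// later, at the first use with the concrete type.
import java.util.function.Function;

public class ErasureDemo {
    @SuppressWarnings("unchecked")
    static <RES> RES convert(Object raw, Function<RES, RES> respConverter) {
        // After erasure this is effectively respConverter.apply(raw);
        // the correctness of the cast relies on the caller's guarantee
        // about what raw actually is.
        return respConverter.apply((RES) raw);
    }

    public static void main(String[] args) {
        // Fine here, because raw really is a String.
        String ok = ErasureDemo.<String>convert("result", r -> r);
        System.out.println(ok);
    }
}
```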
[jira] [Created] (HBASE-24683) Add a basic ReplicationServer which only implement ReplicationSink Service
Guanghao Zhang created HBASE-24683: -- Summary: Add a basic ReplicationServer which only implement ReplicationSink Service Key: HBASE-24683 URL: https://issues.apache.org/jira/browse/HBASE-24683 Project: HBase Issue Type: Sub-task Reporter: Guanghao Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24684) Fetch ReplicationSink servers list from HMaster instead of ZooKeeper
Guanghao Zhang created HBASE-24684: -- Summary: Fetch ReplicationSink servers list from HMaster instead of ZooKeeper Key: HBASE-24684 URL: https://issues.apache.org/jira/browse/HBASE-24684 Project: HBase Issue Type: Sub-task Reporter: Guanghao Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24666) Offload the replication source/sink job to independent Replication Server
[ https://issues.apache.org/jira/browse/HBASE-24666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151929#comment-17151929 ] Guanghao Zhang commented on HBASE-24666: {quote}you don't execute any CP hooks as part of compaction? Maybe you don't have such a need? {quote} Yes. For internal usage, we didn't consider this problem at all. > Offload the replication source/sink job to independent Replication Server > - > > Key: HBASE-24666 > URL: https://issues.apache.org/jira/browse/HBASE-24666 > Project: HBase > Issue Type: Umbrella >Reporter: Guanghao Zhang >Priority: Major > > The basic idea is to add a role "ReplicationServer" to take on the replication > source/sink job. HMaster is responsible for scheduling the replication jobs to > different ReplicationServers. > [link Design > doc|https://docs.google.com/document/d/16kRPVGctFSf__nC3yaVZmAm3GTxIbHefekKC_rMmTw8/edit?usp=sharing] > Suggestions are welcome. Thanks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ddupg opened a new pull request #2026: HBASE-22738 Fallback to default group to choose RS when there are no …
ddupg opened a new pull request #2026: URL: https://github.com/apache/hbase/pull/2026 …RS in current group Backport HBASE-22738 to branch-2.
[jira] [Created] (HBASE-24685) MultiAction and FailureInfo should be removed
Viraj Jasani created HBASE-24685: Summary: MultiAction and FailureInfo should be removed Key: HBASE-24685 URL: https://issues.apache.org/jira/browse/HBASE-24685 Project: HBase Issue Type: Task Reporter: Viraj Jasani Just came across MultiAction and FailureInfo which are IA.Private and not being used anywhere on trunk. Both of them are being used on branch-2 though. We should remove them on trunk.
[jira] [Assigned] (HBASE-24685) MultiAction and FailureInfo should be removed
[ https://issues.apache.org/jira/browse/HBASE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani reassigned HBASE-24685: Assignee: Viraj Jasani
[GitHub] [hbase] virajjasani opened a new pull request #2027: HBASE-24685 : Removing MultiAction and FailureInfo
virajjasani opened a new pull request #2027: URL: https://github.com/apache/hbase/pull/2027
[jira] [Work started] (HBASE-24685) MultiAction and FailureInfo should be removed
[ https://issues.apache.org/jira/browse/HBASE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-24685 started by Viraj Jasani.
[GitHub] [hbase] Apache-HBase commented on pull request #2026: HBASE-22738 Fallback to default group to choose RS when there are no …
Apache-HBase commented on pull request #2026: URL: https://github.com/apache/hbase/pull/2026#issuecomment-654156908 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 16s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 58s | branch-2.3 passed | | +1 :green_heart: | compile | 0m 23s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 5m 24s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 21s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 36s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | shadedjars | 5m 25s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 4m 49s | hbase-rsgroup in the patch passed. | | | | 27m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2026 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 01eba63619d5 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 5d5b156ec3 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/testReport/ | | Max. 
process+thread count | 2326 (vs. ulimit of 12500) | | modules | C: hbase-rsgroup U: hbase-rsgroup | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2026: HBASE-22738 Fallback to default group to choose RS when there are no …
Apache-HBase commented on pull request #2026: URL: https://github.com/apache/hbase/pull/2026#issuecomment-654158184 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 15s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 45s | branch-2.3 passed | | +1 :green_heart: | compile | 0m 28s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 6m 25s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 23s | hbase-rsgroup in branch-2.3 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 22s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | shadedjars | 6m 25s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 21s | hbase-rsgroup in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 4m 1s | hbase-rsgroup in the patch passed. 
| | | | 30m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2026 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 2fe1afe53aab 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 5d5b156ec3 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-rsgroup.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-rsgroup.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/testReport/ | | Max. process+thread count | 2606 (vs. ulimit of 12500) | | modules | C: hbase-rsgroup U: hbase-rsgroup | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2026: HBASE-22738 Fallback to default group to choose RS when there are no …
Apache-HBase commented on pull request #2026: URL: https://github.com/apache/hbase/pull/2026#issuecomment-654159689 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 1s | The patch does not contain any @author tags. | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 34s | branch-2.3 passed | | +1 :green_heart: | checkstyle | 0m 15s | branch-2.3 passed | | +1 :green_heart: | spotbugs | 0m 42s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 12s | the patch passed | | +1 :green_heart: | checkstyle | 0m 13s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 16m 44s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 0m 48s | the patch passed | ||| _ Other Tests _ | | -1 :x: | asflicense | 0m 14s | The patch generated 1 ASF License warnings. 
| | | | 33m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2026 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 03a040459beb 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 5d5b156ec3 | | asflicense | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/artifact/yetus-general-check/output/patch-asflicense-problems.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-rsgroup U: hbase-rsgroup | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2026/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2027: HBASE-24685 : Removing MultiAction and FailureInfo
Apache-HBase commented on pull request #2027: URL: https://github.com/apache/hbase/pull/2027#issuecomment-654164189 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 17s | master passed | | +1 :green_heart: | compile | 0m 29s | master passed | | +1 :green_heart: | shadedjars | 5m 47s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 28s | hbase-client in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 0s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | shadedjars | 5m 47s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 26s | hbase-client in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 11s | hbase-client in the patch passed. 
| | | | 24m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2027 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 098b4ddabdf8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 287f29818f | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/testReport/ | | Max. process+thread count | 272 (vs. ulimit of 12500) | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2027: HBASE-24685 : Removing MultiAction and FailureInfo
Apache-HBase commented on pull request #2027: URL: https://github.com/apache/hbase/pull/2027#issuecomment-654164352 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 45s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 35s | master passed | | +1 :green_heart: | compile | 0m 41s | master passed | | +1 :green_heart: | shadedjars | 6m 21s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 28s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | shadedjars | 5m 35s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 4s | hbase-client in the patch passed. | | | | 24m 55s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2027 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 37504157889a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 287f29818f | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/testReport/ | | Max. process+thread count | 343 (vs. 
ulimit of 12500) | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2027: HBASE-24685 : Removing MultiAction and FailureInfo
Apache-HBase commented on pull request #2027: URL: https://github.com/apache/hbase/pull/2027#issuecomment-654167129 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 1s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 6s | master passed | | +1 :green_heart: | checkstyle | 0m 29s | master passed | | +1 :green_heart: | spotbugs | 1m 1s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | +1 :green_heart: | checkstyle | 0m 27s | hbase-client: The patch generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 26s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 1m 7s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 32m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2027 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 58bd482677b1 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 287f29818f | | Max. process+thread count | 84 (vs. 
ulimit of 12500) | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2027/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] wchevreuil commented on a change in pull request #2011: HBASE-24664 Some changing of split region by overall region size rath…
wchevreuil commented on a change in pull request #2011: URL: https://github.com/apache/hbase/pull/2011#discussion_r450139888 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java ## @@ -68,22 +76,14 @@ protected void configureForRegion(HRegion region) { @Override protected boolean shouldSplit() { -boolean foundABigStore = false; - +// If any of the stores is unable to split (eg they contain reference files) +// then don't split for (HStore store : region.getStores()) { - // If any of the stores are unable to split (eg they contain reference files) - // then don't split - if ((!store.canSplit())) { + if (!store.canSplit()) { Review comment: Move this check to the for loops inside _isExceedSize_, so that we don't have to do an extra iteration over all stores again in case none returns false for _canSplit_. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java ## @@ -94,4 +94,33 @@ long getDesiredMaxFileSize() { public boolean positiveJitterRate() { return this.jitterRate > 0; } + + /** + * @return true if region size exceed the sizeToCheck + */ + protected boolean isExceedSize(long sizeToCheck, String extraLogStr) { +if (overallHregionFiles) { + long sumSize = 0; + for (HStore store : region.getStores()) { +sumSize += store.getSize(); + } + if (sumSize > sizeToCheck) { Review comment: We should just return this comparison and let each caller decide how to log it? That would discard the need for having an extra param just for the sake of logging.
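The first suggestion above — folding the `canSplit` check into the size-summing loop so the stores are walked only once — can be sketched as follows. This is a simplified stand-in (a plain `Store` interface instead of `HStore`, and no jitter or policy inheritance details), not the actual HBase class:

```java
import java.util.List;

public class SplitCheckSketch {
    // Minimal stand-in for HStore: just the two methods the policy consults.
    interface Store {
        boolean canSplit();
        long getSize();
    }

    // Single pass over the stores: bail out early if any store cannot split
    // (e.g. it holds reference files), otherwise accumulate the sizes so no
    // second iteration is needed.
    static boolean shouldSplit(List<Store> stores, long sizeToCheck) {
        long sumSize = 0;
        for (Store store : stores) {
            if (!store.canSplit()) {
                return false;
            }
            sumSize += store.getSize();
        }
        return sumSize > sizeToCheck;
    }

    public static void main(String[] args) {
        Store s = new Store() {
            public boolean canSplit() { return true; }
            public long getSize() { return 60L; }
        };
        System.out.println(shouldSplit(List.of(s, s), 100L)); // true: 120 > 100
    }
}
```

With this shape a store that cannot split short-circuits the loop before any further sizes are read, which is exactly the extra iteration the reviewer wants to avoid.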
[GitHub] [hbase] virajjasani merged pull request #2027: HBASE-24685 : Removing MultiAction and FailureInfo
virajjasani merged pull request #2027: URL: https://github.com/apache/hbase/pull/2027
[jira] [Resolved] (HBASE-24685) MultiAction and FailureInfo should be removed
[ https://issues.apache.org/jira/browse/HBASE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani resolved HBASE-24685. -- Fix Version/s: 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed
[jira] [Commented] (HBASE-24546) CloneSnapshotProcedure unlimited retry
[ https://issues.apache.org/jira/browse/HBASE-24546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151973#comment-17151973 ] Hudson commented on HBASE-24546: Results for branch master [build #1778 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1663//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > CloneSnapshotProcedure unlimited retry > -- > > Key: HBASE-24546 > URL: https://issues.apache.org/jira/browse/HBASE-24546 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 2.3.0, master, 2.2.5 >Reporter: wenfeiyi666 >Assignee: wenfeiyi666 >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.2.6 > > > Since the regions dir created in the previous execution was not removed, it needs to > be removed when retrying; the leftover dir causes an exception and the procedure retries without limit > {code:java} > procedure.CloneSnapshotProcedure: Retriable error trying to clone > snapshot=snapshot_test to table=test:backup > state=CLONE_SNAPSHOT_WRITE_FS_LAYOUT > org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: clone snapshot={ > ss=snapshot_test table=test:backup type=FLUSH } failed because A clone should > not have regions to remove > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:434) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFsLayout(CloneSnapshotProcedure.java:465) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFilesystemLayout(CloneSnapshotProcedure.java:392) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:142) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:67) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1662) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1409) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78) > at >
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1979) > Caused by: java.lang.IllegalArgumentException: A clone should not have > regions to remove > at > org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:418) > ... 10 more > {code} > and the cloned regions' names are unchanged, so the newly created regions > are removed when retrying
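A retry-safe variant of the failing step would clear the leftover regions dir before recreating it. The sketch below uses plain `java.nio.file` on a local path purely for illustration; the real fix operates on the cluster filesystem inside CloneSnapshotProcedure and may differ:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RetrySafeLayout {
    // Delete any partially written directory left by a previous attempt,
    // then recreate it, so a retry does not trip over leftover region dirs.
    static void recreateRegionsDir(Path regionsDir) throws IOException {
        if (Files.exists(regionsDir)) {
            // Walk deepest-first so files are removed before their parents.
            try (Stream<Path> paths = Files.walk(regionsDir)) {
                paths.sorted(Comparator.reverseOrder())
                     .forEach(p -> p.toFile().delete());
            }
        }
        Files.createDirectories(regionsDir);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("clone-demo").resolve("regions");
        Files.createDirectories(dir);
        Files.createFile(dir.resolve("leftover-region"));
        recreateRegionsDir(dir); // a "retry" that clears the first attempt's debris
        System.out.println(Files.list(dir).count()); // 0: leftovers cleared
    }
}
```

The point is idempotence: running the step a second time yields the same clean starting state instead of the `IllegalArgumentException` above.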
[jira] [Commented] (HBASE-24671) Add excludefile and designatedfile options to graceful_stop.sh
[ https://issues.apache.org/jira/browse/HBASE-24671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151971#comment-17151971 ] Hudson commented on HBASE-24671: Results for branch master [build #1778 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1663//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Add excludefile and designatedfile options to graceful_stop.sh > -- > > Key: HBASE-24671 > URL: https://issues.apache.org/jira/browse/HBASE-24671 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.4.0 >Reporter: Baiqiang Zhao >Assignee: Baiqiang Zhao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > RegionMover supports the excludefile and designatedfile options now. Integrate > these two options into graceful_stop.sh.
[jira] [Commented] (HBASE-24578) [WAL] Add a parameter to config RingBufferEventHandler's SyncFuture count
[ https://issues.apache.org/jira/browse/HBASE-24578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151972#comment-17151972 ] Hudson commented on HBASE-24578: Results for branch master [build #1778 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1663//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/master/1778/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > [WAL] Add a parameter to config RingBufferEventHandler's SyncFuture count > - > > Key: HBASE-24578 > URL: https://issues.apache.org/jira/browse/HBASE-24578 > Project: HBase > Issue Type: Improvement > Components: wal >Affects Versions: 1.4.13, 2.2.5 >Reporter: Reid Chan >Assignee: wenfeiyi666 >Priority: Major > > The current SyncFuture count of RingBufferEventHandler is the value of > {{hbase.regionserver.handler.count}}, which works well with the default wal > provider --- one WAL per regionserver. > When trying to use WAL group provider, either by group or wal per region, the > default value is bad.
If the rs has 100 regions and the wal-per-region strategy is > used, then the rs will allocate 100 * > SyncFuture[$hbase.regionserver.handler.count] arrays > {code} > int maxHandlersCount = conf.getInt(HConstants.REGION_SERVER_HANDLER_COUNT, > 200); > this.ringBufferEventHandler = new RingBufferEventHandler( > conf.getInt("hbase.regionserver.hlog.syncer.count", 5), > maxHandlersCount); > ... > > RingBufferEventHandler(final int syncRunnerCount, final int maxHandlersCount) > { > this.syncFutures = new SyncFuture[maxHandlersCount]; > ... > } > {code}
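Plugging the description's figures into the quoted snippet shows the scale of the over-allocation. The helper below is illustrative arithmetic, not HBase code:

```java
public class SyncFutureFootprint {
    // Total pre-allocated SyncFuture slots: one SyncFuture[handlerCount]
    // array per RingBufferEventHandler, and one handler per WAL.
    static int totalSyncFutureSlots(int handlerCount, int walCount) {
        return handlerCount * walCount;
    }

    public static void main(String[] args) {
        // Figures from the issue description: 200 handlers (the default used
        // in the snippet) and 100 WALs under a wal-per-region provider.
        System.out.println(totalSyncFutureSlots(200, 100)); // prints 20000
    }
}
```

20,000 pre-allocated slots for a single region server is why the issue asks for a separate, smaller configuration knob for the SyncFuture count.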
[jira] [Created] (HBASE-24686) [LOG] Log improvement in Connection#close
mokai created HBASE-24686: - Summary: [LOG] Log improvement in Connection#close Key: HBASE-24686 URL: https://issues.apache.org/jira/browse/HBASE-24686 Project: HBase Issue Type: Improvement Components: Client, logging Affects Versions: 2.2.3 Reporter: mokai We have seen some customers use the HBase connection improperly: calls from some threads failed because the shared connection had been closed by one of the threads.
[jira] [Updated] (HBASE-24686) [LOG] Log improvement in Connection#close
[ https://issues.apache.org/jira/browse/HBASE-24686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mokai updated HBASE-24686: -- Description: We met some customers used hbase connection improperly, some threads call failed since the shared connection closed by one of the threads. It's better to print the details when connection closing. was:We met some customers used hbase connection improperly, some threads call failed since the shared connection closed by one of the threads. > [LOG] Log improvement in Connection#close > - > > Key: HBASE-24686 > URL: https://issues.apache.org/jira/browse/HBASE-24686 > Project: HBase > Issue Type: Improvement > Components: Client, logging >Affects Versions: 2.2.3 >Reporter: mokai >Priority: Major > > We met some customers used hbase connection improperly, some threads call > failed since the shared connection closed by one of the threads. > It's better to print the details when connection closing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
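The improvement asked for here can be sketched without any HBase classes: capture the closing thread and call site at close time, so that later "connection closed" failures in sibling threads can be traced back to whoever closed the shared connection. The helper below is illustrative, not the actual patch.

```java
// Sketch: build the "details" the issue suggests printing when a shared
// connection is closed -- the closing thread's name plus its stack trace.
public class ConnectionCloseLogSketch {
  static String closeDetails() {
    StringBuilder sb = new StringBuilder("Connection closed by thread ")
        .append(Thread.currentThread().getName());
    for (StackTraceElement e : Thread.currentThread().getStackTrace()) {
      sb.append("\n  at ").append(e);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // In a real client this string would go to the logger inside close().
    String msg = closeDetails();
    if (!msg.contains("Connection closed by thread")) throw new AssertionError();
    System.out.println(msg.split("\n")[0]);
  }
}
```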
[GitHub] [hbase] virajjasani commented on pull request #1926: HBASE-24586 Add table level locality in table.jsp
virajjasani commented on pull request #1926: URL: https://github.com/apache/hbase/pull/1926#issuecomment-654214933 @bsglz Thanks for the reminder. Will take a look in some time. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 edited a comment on pull request #1909: HBASE-24569 Get hostAndWeights in addition using localhost if it is n…
Apache9 edited a comment on pull request #1909: URL: https://github.com/apache/hbase/pull/1909#issuecomment-654218536 > > I do not fully understand the logic here, why it is OK to use localhost if the returned hostAndWeight is null? We will only use the related methods to get hostAndWeight for the local machine? > > Good question, the input host might be other machine in distributed mode, but in that case the hostAndWeights will not use localhost as host name(get from BlockLocation.hosts), so it is ok. I do not get your point... ``` private float getBlockLocalityIndexInternal(String host, Visitor visitor) { float localityIndex = 0; HostAndWeight hostAndWeight = this.hostAndWeights.get(host); if (hostAndWeight == null) { hostAndWeight = this.hostAndWeights.get(HConstants.LOCALHOST); } if (hostAndWeight != null && uniqueBlocksTotalWeight != 0) { localityIndex = visitor.visit(hostAndWeight); } return localityIndex; } ``` The modified code is like this, no matter what is the host passed in, you will always use localhost to get the hostAndWeight again if the first get returns null? What do you mean by 'but in that case the hostAndWeights will not use localhost as host name'? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
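To make the disputed logic concrete, here is a standalone rendering of the modified method, with a plain `Map` standing in for `hostAndWeights`. It shows the behavior being questioned: any host missing from the map silently inherits localhost's weight.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the PR's modified lookup: fall back to "localhost" when the
// requested host has no entry (this is the fallback Apache9 is questioning).
public class LocalityFallbackSketch {
  static final String LOCALHOST = "localhost";

  static float localityIndex(Map<String, Long> hostAndWeights,
      long uniqueBlocksTotalWeight, String host) {
    Long weight = hostAndWeights.get(host);
    if (weight == null) {
      weight = hostAndWeights.get(LOCALHOST); // second lookup under localhost
    }
    if (weight != null && uniqueBlocksTotalWeight != 0) {
      return (float) weight / uniqueBlocksTotalWeight;
    }
    return 0f;
  }

  public static void main(String[] args) {
    Map<String, Long> weights = new HashMap<>();
    weights.put(LOCALHOST, 50L);
    // A remote host absent from the map picks up localhost's weight, so a
    // caller asking about "remote-host" gets a non-zero locality index.
    if (localityIndex(weights, 100L, "remote-host") != 0.5f) throw new AssertionError();
    System.out.println("ok");
  }
}
```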
[jira] [Commented] (HBASE-24663) Add procedure process time statistics UI
[ https://issues.apache.org/jira/browse/HBASE-24663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152002#comment-17152002 ] Junhong Xu commented on HBASE-24663: May I take this issue, sir? [~zghao] > Add procedure process time statistics UI > > > Key: HBASE-24663 > URL: https://issues.apache.org/jira/browse/HBASE-24663 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Priority: Major > > Added in "Procedures & Locks" jsp. > For the first version UI, we care about the process time of > ServerCrashProcedure, TRSP, OpenRegionProcedure and CloseRegionProcedure. > Plan to show the avg/P50/P90/min/max process time of these procedures.
[GitHub] [hbase] Apache-HBase commented on pull request #2025: HBASE-24489 Rewrite TestClusterRestartFailover.test since namespace t…
Apache-HBase commented on pull request #2025: URL: https://github.com/apache/hbase/pull/2025#issuecomment-654230489 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 1s | master passed | | +1 :green_heart: | compile | 1m 18s | master passed | | +1 :green_heart: | shadedjars | 6m 36s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 45s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 43s | the patch passed | | +1 :green_heart: | compile | 1m 15s | the patch passed | | +1 :green_heart: | javac | 1m 15s | the patch passed | | +1 :green_heart: | shadedjars | 6m 48s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 50s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 220m 49s | hbase-server in the patch passed. 
| | | | 251m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2025 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 38fd49ed1b6a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 287f29818f | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/testReport/ | | Max. process+thread count | 2716 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2025: HBASE-24489 Rewrite TestClusterRestartFailover.test since namespace t…
Apache-HBase commented on pull request #2025: URL: https://github.com/apache/hbase/pull/2025#issuecomment-654236150 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 15s | master passed | | +1 :green_heart: | compile | 0m 59s | master passed | | +1 :green_heart: | shadedjars | 6m 16s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 56s | the patch passed | | +1 :green_heart: | compile | 1m 0s | the patch passed | | +1 :green_heart: | javac | 1m 0s | the patch passed | | +1 :green_heart: | shadedjars | 6m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 232m 44s | hbase-server in the patch passed. | | | | 260m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2025 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 7629df3c7b48 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 287f29818f | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/testReport/ | | Max. 
process+thread count | 2612 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2025/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24376) MergeNormalizer is merging non-adjacent regions and causing region overlaps/holes.
[ https://issues.apache.org/jira/browse/HBASE-24376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152065#comment-17152065 ] Ruslan Sabitov commented on HBASE-24376: I faced with the issue in branch-1 (HBase 1.2.0-cdh5.16.2): 2020-07-02 16:41:18,337 INFO org.apache.hadoop.hbase.master.normalizer.MergeNormalizationPlan: Executing merging normalization plan: MergeNormalizationPlan{firstRegion={ENC ODED => feba09265266f1c3c090bc42cc90becc, NAME => 'tableName,Aw-BEZ0JD4M3HrvA4Yks,1593695478557.feba09265266f1c3c090bc42cc90becc.', STARTKEY => 'Aw-BEZ0JD4M3HrvA4Yks', EN DKEY => 'D0sYMT716R0tyHPGk8ii'}, secondRegion={ENCODED => 8003ecbf849c4f5e27bf5956ec0729cc, NAME => 'TableName,B_zjT044PCvwQ4I53Q5m,1593695479990.8003ecbf849c4f5e27bf5956 ec0729cc.', STARTKEY => 'B_zjT044PCvwQ4I53Q5m', ENDKEY => 'FCzjFZ4Vhb0hpVtn6VxP'}} > MergeNormalizer is merging non-adjacent regions and causing region > overlaps/holes. > -- > > Key: HBASE-24376 > URL: https://issues.apache.org/jira/browse/HBASE-24376 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0 > > > Currently, we found normalizer was merging regions which are non-adjacent, it > will cause inconsistencies in the cluster. 
> {code:java} > 439055 2020-05-08 17:47:09,814 INFO > org.apache.hadoop.hbase.master.normalizer.MergeNormalizationPlan: Executing > merging normalization plan: MergeNormalizationPlan{firstRegion={ENCODED => > 47fe236a5e3649ded95cb64ad0c08492, NAME => > 'TABLE,\x03\x01\x05\x01\x04\x02,1554838974870.47fe236a5e3649ded95cb64ad > 0c08492.', STARTKEY => '\x03\x01\x05\x01\x04\x02', ENDKEY => > '\x03\x01\x05\x01\x04\x02\x01\x02\x02201904082200\x00\x00\x03Mac\x00\x00\x00\x00\x00\x00\x00\x00\x00iMac13,1\x00\x00\x00\x00\x00\x049.3-14E260\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x05'}, > secondRegion={ENCODED => 0c0f2aa67f4329d5c4 8ba0320f173d31, NAME => > 'TABLE,\x03\x01\x05\x02\x01\x01,1554830735526.0c0f2aa67f4329d5c48ba0320f173d31.', > STARTKEY => '\x03\x01\x05\x02\x01\x01', ENDKEY => > '\x03\x01\x05\x02\x01\x02'}} > 439056 2020-05-08 17:47:11,438 INFO org.apache.hadoop.hbase.ScheduledChore: > CatalogJanitor-*:16000 average execution time: 1676219193 ns. > 439057 2020-05-08 17:47:11,730 INFO org.apache.hadoop.hbase.master.HMaster: > Client=null/null merge regions [47fe236a5e3649ded95cb64ad0c08492], > [0c0f2aa67f4329d5c48ba0320f173d31] > {code} > > The root cause is that getMergeNormalizationPlan() uses a list of regionInfo > which is ordered by regionName. regionName does not necessary guarantee the > order of STARTKEY (let's say 'aa1', 'aa1!', in order of regionName, it will > be 'aa1!' followed by 'aa1'. This will result in normalizer merging > non-adjacent regions into one and creates overlaps. This is not an issue in > branch-1 as the list is already ordered by RegionInfo.COMPARATOR in > normalizer. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
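The root cause above ('aa1' vs 'aa1!') can be reproduced in a few lines: a region name embeds `table,startkey,timestamp`, so when a start key contains a byte smaller than ',' (0x2C), such as '!' (0x21), lexicographic name order and start-key order disagree, and the normalizer ends up treating non-adjacent regions as neighbors.

```java
import java.util.Arrays;

// Demonstrates that sorting by regionName does not sort by STARTKEY,
// using the 'aa1' / 'aa1!' example from the issue description.
public class RegionOrderSketch {
  public static void main(String[] args) {
    String a = "TABLE,aa1,1554838974870.abc.";  // STARTKEY = "aa1"
    String b = "TABLE,aa1!,1554830735526.def."; // STARTKEY = "aa1!"
    // By start key, "aa1" sorts before "aa1!" (shorter prefix first).
    if ("aa1".compareTo("aa1!") >= 0) throw new AssertionError();
    // By region name, the region starting at "aa1!" sorts FIRST, because
    // after the common prefix "TABLE,aa1" the comparison is '!' vs ','.
    String[] byName = {a, b};
    Arrays.sort(byName);
    if (!byName[0].equals(b)) throw new AssertionError();
    System.out.println("adjacent-by-name != adjacent-by-key");
  }
}
```

This is why branch-1, which sorts the list with RegionInfo.COMPARATOR (start-key order), does not hit the bug.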
[jira] [Commented] (HBASE-24546) CloneSnapshotProcedure unlimited retry
[ https://issues.apache.org/jira/browse/HBASE-24546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152079#comment-17152079 ] Hudson commented on HBASE-24546: Results for branch branch-2.3 [build #173 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/173/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/173/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/173/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/173/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/173/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > CloneSnapshotProcedure unlimited retry > -- > > Key: HBASE-24546 > URL: https://issues.apache.org/jira/browse/HBASE-24546 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 2.3.0, master, 2.2.5 >Reporter: wenfeiyi666 >Assignee: wenfeiyi666 >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.2.6 > > > since regions dir was not remove in the previous execution created, need to > be remove when retrying, resulting in exception, unlimited retry > {code:java} > procedure.CloneSnapshotProcedure: Retriable error trying to clone > snapshot=snapshot_test to table=test:backup > state=CLONE_SNAPSHOT_WRITE_FS_LAYOUT > org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: clone snapshot={ > ss=snapshot_test table=test:backup type=FLUSH } failed because A clone should > not have regions to remove > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:434) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFsLayout(CloneSnapshotProcedure.java:465) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFilesystemLayout(CloneSnapshotProcedure.java:392) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:142) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:67) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1662) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1409) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78) > at > 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1979) > Caused by: java.lang.IllegalArgumentException: A clone should not have > regions to remove > at > org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142) > at > org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:418) > ... 10 more > {code} > and the cloned regions name are unchanged, resulting in new created regions > be removed when retrying -- This message was sent by Atlassian Jira (v8.3.4#803005)
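The retry loop described above can be sketched with plain filesystem operations: the CLONE_SNAPSHOT_WRITE_FS_LAYOUT step re-executes on retry, but region directories created by the failed attempt survive, so the "A clone should not have regions to remove" precondition trips forever. The `writeFsLayout` helper below is hypothetical and only illustrates why making the step idempotent (clearing leftovers first) breaks the loop.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the retry hazard and the idempotency fix, with a local temp
// directory standing in for the table dir on HDFS.
public class CloneRetrySketch {
  // Hypothetical stand-in for the clone step that creates region dirs.
  static void writeFsLayout(Path tableDir, boolean cleanupFirst) throws IOException {
    if (Files.exists(tableDir)) {
      if (!cleanupFirst) {
        // Mirrors the Preconditions.checkArgument failure in the stack trace.
        throw new IllegalArgumentException("A clone should not have regions to remove");
      }
      // Idempotent variant: wipe the previous attempt's partial output.
      try (DirectoryStream<Path> ds = Files.newDirectoryStream(tableDir)) {
        for (Path p : ds) Files.delete(p);
      }
      Files.delete(tableDir);
    }
    Files.createDirectory(tableDir);
    Files.createFile(tableDir.resolve("region1"));
  }

  public static void main(String[] args) {
    try {
      Path dir = Files.createTempDirectory("clone").resolve("test_backup");
      writeFsLayout(dir, false); // first attempt creates dirs, then "fails" later
      boolean retryFailed = false;
      try {
        writeFsLayout(dir, false); // naive retry hits the precondition -> retries forever
      } catch (IllegalArgumentException e) {
        retryFailed = true;
      }
      if (!retryFailed) throw new AssertionError();
      writeFsLayout(dir, true); // idempotent retry succeeds
      System.out.println("ok");
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```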
[jira] [Commented] (HBASE-11288) Splittable Meta
[ https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152090#comment-17152090 ] Duo Zhang commented on HBASE-11288: --- Any updates here? Thanks. > Splittable Meta > --- > > Key: HBASE-11288 > URL: https://issues.apache.org/jira/browse/HBASE-11288 > Project: HBase > Issue Type: Umbrella > Components: meta >Reporter: Francis Christopher Liu >Assignee: Francis Christopher Liu >Priority: Major >
[GitHub] [hbase] wchevreuil commented on a change in pull request #2009: HBASE-21596 Delete for a specific cell version can bring back version…
wchevreuil commented on a change in pull request #2009: URL: https://github.com/apache/hbase/pull/2009#discussion_r450329196 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java ## @@ -3170,37 +3171,87 @@ public void prepareDeleteTimestamps(Mutation mutation, Map> f count = kvCount.get(qual); Get get = new Get(CellUtil.cloneRow(cell)); - get.readVersions(count); - get.addColumn(family, qual); + get.readVersions(Integer.MAX_VALUE); if (coprocessorHost != null) { if (!coprocessorHost.prePrepareTimeStampForDeleteVersion(mutation, cell, byteNow, get)) { - updateDeleteLatestVersionTimestamp(cell, get, count, byteNow); + updateDeleteLatestVersionTimestamp(cell, get, count, + this.htableDescriptor.getColumnFamily(family).getMaxVersions(), +byteNow, deleteCells); + } } else { -updateDeleteLatestVersionTimestamp(cell, get, count, byteNow); +updateDeleteLatestVersionTimestamp(cell, get, count, +this.htableDescriptor.getColumnFamily(family).getMaxVersions(), + byteNow, deleteCells); } } else { PrivateCellUtil.updateLatestStamp(cell, byteNow); + deleteCells.add(cell); } } + e.setValue(deleteCells); } } - void updateDeleteLatestVersionTimestamp(Cell cell, Get get, int count, byte[] byteNow) - throws IOException { -List result = get(get, false); - + private void updateDeleteLatestVersionTimestamp(Cell cell, Get get, int count, int maxVersions, + byte[] byteNow, List deleteCells) throws IOException { +List result = new ArrayList<>(deleteCells); +Scan scan = new Scan(get); +scan.setRaw(true); +this.getScanner(scan).next(result); +List cells = new ArrayList<>(); if (result.size() < count) { // Nothing to delete PrivateCellUtil.updateLatestStamp(cell, byteNow); - return; -} -if (result.size() > count) { - throw new RuntimeException("Unexpected size: " + result.size()); + cells.add(cell); + deleteCells.addAll(cells); +} else if (result.size() > count) { + int currentVersion = 0; + long latestCellTS = Long.MAX_VALUE; + result.sort((cell1, cell2) 
-> { +if(cell1.getTimestamp()>cell2.getTimestamp()){ + return -1; +} else if(cell1.getTimestamp()= maxVersions) { +Cell tempCell = null; +try { + tempCell = PrivateCellUtil.deepClone(cell); +} catch (CloneNotSupportedException e) { + throw new IOException(e); +} +PrivateCellUtil.setTimestamp(tempCell, getCell.getTimestamp()); +cells.add(tempCell); + } else if (currentVersion == 0) { +PrivateCellUtil.setTimestamp(cell, getCell.getTimestamp()); +cells.add(cell); + } + currentVersion++; +} +latestCellTS = getCell.getTimestamp(); + } + +} else { + Cell getCell = result.get(0); Review comment: It's not needed, because we don't have to worry about additional versions, we only need to put a single marker for current TS. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] huaxiangsun commented on pull request #1986: HBASE-24581 Skip compaction request/check for replica regions at the …
huaxiangsun commented on pull request #1986: URL: https://github.com/apache/hbase/pull/1986#issuecomment-654339877 @busbey @infraio Ping for comments, thanks.
[jira] [Updated] (HBASE-24635) Split TestMetaWithReplicas
[ https://issues.apache.org/jira/browse/HBASE-24635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24635: - Fix Version/s: (was: 2.4.0) (was: 2.3.1) 2.3.0 > Split TestMetaWithReplicas > -- > > Key: HBASE-24635 > URL: https://issues.apache.org/jira/browse/HBASE-24635 > Project: HBase > Issue Type: Task > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6 > > > It will stop and then start a mini cluster every time after each test method, > so let's just split them into individual test files.
[GitHub] [hbase] shahrs87 commented on pull request #1962: HBASE-24615 MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket.
shahrs87 commented on pull request #1962: URL: https://github.com/apache/hbase/pull/1962#issuecomment-654348836 @WenFeiYi Thank you for the PR. Mind writing a small test case for this? Overall the code looks good to me. Thank you!
[jira] [Commented] (HBASE-24615) MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket.
[ https://issues.apache.org/jira/browse/HBASE-24615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152157#comment-17152157 ] Rushabh Shah commented on HBASE-24615: -- [~wenfeiyi666] just fyi you dont have to create a separate PR for branch-2. You just need to have a PR for master branch and the committer will try to backport to all the other branches. If the rebase work is more then he/she will let you know to create another PR for those branch. Thank you ! > MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the > distribution for last bucket. > > > Key: HBASE-24615 > URL: https://issues.apache.org/jira/browse/HBASE-24615 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 2.3.0, master, 1.3.7, 2.2.6 >Reporter: Rushabh Shah >Assignee: wenfeiyi666 >Priority: Major > > We are not processing the distribution for last bucket. > https://github.com/apache/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java#L70 > {code:java} > public void updateSnapshotRangeMetrics(MetricsRecordBuilder > metricsRecordBuilder, > Snapshot snapshot) { > long priorRange = 0; > long cumNum = 0; > final long[] ranges = getRanges(); > final String rangeType = getRangeType(); > for (int i = 0; i < ranges.length - 1; i++) { -> The bug lies > here. We are not processing last bucket. > long val = snapshot.getCountAtOrBelow(ranges[i]); > if (val - cumNum > 0) { > metricsRecordBuilder.addCounter( > Interns.info(name + "_" + rangeType + "_" + priorRange + "-" + > ranges[i], desc), > val - cumNum); > } > priorRange = ranges[i]; > cumNum = val; > } > long val = snapshot.getCount(); > if (val - cumNum > 0) { > metricsRecordBuilder.addCounter( > Interns.info(name + "_" + rangeType + "_" + ranges[ranges.length - > 1] + "-inf", desc), > val - cumNum); > } > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
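The bucket bug quoted above is easy to reproduce in isolation: because the loop stops at `ranges.length - 1`, the bucket between the last two range boundaries is never emitted, and its counts get lumped into the `-inf` overflow bucket. The sketch below models `Snapshot.getCountAtOrBelow` with a plain array and shows both the buggy and the fixed iteration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of MutableRangeHistogram#updateSnapshotRangeMetrics's bucketing,
// with a boolean toggling the buggy (stop-one-early) vs fixed loop bound.
public class HistogramBucketSketch {
  // Stand-in for Snapshot.getCountAtOrBelow(bound).
  static long countAtOrBelow(long[] values, long bound) {
    long c = 0;
    for (long v : values) if (v <= bound) c++;
    return c;
  }

  static Map<String, Long> buckets(long[] values, long[] ranges, boolean includeLastRange) {
    Map<String, Long> out = new LinkedHashMap<>();
    long prior = 0, cum = 0;
    int limit = includeLastRange ? ranges.length : ranges.length - 1; // bug: stops one early
    for (int i = 0; i < limit; i++) {
      long val = countAtOrBelow(values, ranges[i]);
      if (val - cum > 0) out.put(prior + "-" + ranges[i], val - cum);
      prior = ranges[i];
      cum = val;
    }
    long total = values.length;
    if (total - cum > 0) out.put(ranges[ranges.length - 1] + "-inf", total - cum);
    return out;
  }

  public static void main(String[] args) {
    long[] values = {5, 50, 500};
    long[] ranges = {10, 100};
    // Buggy: the value 50 never gets a "10-100" bucket and is misreported
    // in "100-inf" together with 500.
    Map<String, Long> buggy = buckets(values, ranges, false);
    if (buggy.containsKey("10-100") || buggy.get("100-inf") != 2L) throw new AssertionError();
    // Fixed: iterate over every range; the overflow bucket holds only 500.
    Map<String, Long> fixed = buckets(values, ranges, true);
    if (fixed.get("10-100") != 1L || fixed.get("100-inf") != 1L) throw new AssertionError();
    System.out.println("ok");
  }
}
```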
[GitHub] [hbase] saintstack commented on a change in pull request #2018: HBASE-24659 Calculate FIXED_OVERHEAD automatically
saintstack commented on a change in pull request #2018: URL: https://github.com/apache/hbase/pull/2018#discussion_r449899043 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java ## @@ -8405,12 +8405,7 @@ private static long getLongValue(final Cell cell) throws DoNotRetryIOException { return cells; } - public static final long FIXED_OVERHEAD = ClassSize.align( - ClassSize.OBJECT + - 56 * ClassSize.REFERENCE + - 3 * Bytes.SIZEOF_INT + - 14 * Bytes.SIZEOF_LONG + - 3 * Bytes.SIZEOF_BOOLEAN); + public static final long FIXED_OVERHEAD = ClassSize.estimateBase(HRegion.class, false); Review comment: Does ClassSize come up w/ same general numbers as old manual technique. It does deep size rather than shallow? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk opened a new pull request #2028: Backport "HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (#1970)" to branch-2.3
ndimiduk opened a new pull request #2028: URL: https://github.com/apache/hbase/pull/2028 Signed-off-by: Duo Zhang
[jira] [Reopened] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reopened HBASE-24625: -- Reopening to apply patch to branch-2.3 > AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced > file length. > > > Key: HBASE-24625 > URL: https://issues.apache.org/jira/browse/HBASE-24625 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0 >Reporter: chenglei >Assignee: chenglei >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6 > > > By HBASE-14004, we introduce {{WALFileLengthProvider}} interface to keep the > current writing wal file length by ourselves, {{WALEntryStream}} used by > {{ReplicationSourceWALReader}} could only read WAL file byte size <= > {{WALFileLengthProvider.getLogFileSizeIfBeingWritten}} if the WAL file is > current been writing on the same RegionServer . > {{AsyncFSWAL}} implements {{WALFileLengthProvider}} by > {{AbstractFSWAL.getLogFileSizeIfBeingWritten}}, just as folllows : > {code:java} >public OptionalLong getLogFileSizeIfBeingWritten(Path path) { > rollWriterLock.lock(); > try { > Path currentPath = getOldPath(); > if (path.equals(currentPath)) { > W writer = this.writer; > return writer != null ? 
OptionalLong.of(writer.getLength()) : > OptionalLong.empty(); > } else { > return OptionalLong.empty(); > } > } finally { > rollWriterLock.unlock(); > } > } > {code} > For {{AsyncFSWAL}}, above {{AsyncFSWAL.writer}} is > {{AsyncProtobufLogWriter}} ,and {{AsyncProtobufLogWriter.getLength}} is as > follows: > {code:java} > public long getLength() { > return length.get(); > } > {code} > But for {{AsyncProtobufLogWriter}}, any append method may increase the above > {{AsyncProtobufLogWriter.length}}, especially for following > {{AsyncFSWAL.append}} > method just appending the {{WALEntry}} to > {{FanOutOneBlockAsyncDFSOutput.buf}}: > {code:java} > public void append(Entry entry) { > int buffered = output.buffered(); > try { > entry.getKey(). > > getBuilder(compressor).setFollowingKvCount(entry.getEdit().size()).build() > .writeDelimitedTo(asyncOutputWrapper); > } catch (IOException e) { > throw new AssertionError("should not happen", e); > } > > try { >for (Cell cell : entry.getEdit().getCells()) { > cellEncoder.write(cell); >} > } catch (IOException e) { >throw new AssertionError("should not happen", e); > } > length.addAndGet(output.buffered() - buffered); > } > {code} > That is to say, {{AsyncFSWAL.getLogFileSizeIfBeingWritten}} could not reflect > the file length which successfully synced to underlying HDFS, which is not > as expected. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24625: Fix Version/s: (was: 2.4.0) (was: 2.3.1) 2.3.0

> AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
> Key: HBASE-24625
> URL: https://issues.apache.org/jira/browse/HBASE-24625
> Project: HBase
> Issue Type: Bug
> Components: Replication, wal
> Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0
> Reporter: chenglei
> Assignee: chenglei
> Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk commented on pull request #2028: Backport "HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (#1970)" to branch-2.3
ndimiduk commented on pull request #2028: URL: https://github.com/apache/hbase/pull/2028#issuecomment-654356936 Patch from branch-2 applies cleanly to branch-2.3. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on a change in pull request #2017: HBASE-24669 Logging of ppid should be consistent across all occurrences
ndimiduk commented on a change in pull request #2017: URL: https://github.com/apache/hbase/pull/2017#discussion_r450379754

## File path: hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java

```
@@ -1842,7 +1842,7 @@ private void countDownChildren(RootProcedureState procStack,
     store.update(parent);
     scheduler.addFront(parent);
     LOG.info("Finished subprocedure pid={}, resume processing parent {}",
-      procedure.getProcId(), parent);
+      procedure.getProcId(), parent.toString().replace("pid=","ppid="));
```

Review comment: Oh, I see. Good find. How about `"Finished subprocedure (pid={}), resume processing of parent (ppid={})"`, and use `parent.getProcId()` instead of the string replace.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2028: Backport "HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (#1970)" to branch-2.3
Apache-HBase commented on pull request #2028: URL: https://github.com/apache/hbase/pull/2028#issuecomment-654373837

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|:---------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 44s | Docker mode activated. |
| -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2.3 Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 39s | branch-2.3 passed |
| +1 :green_heart: | compile | 1m 17s | branch-2.3 passed |
| +1 :green_heart: | shadedjars | 5m 0s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 50s | branch-2.3 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 23s | the patch passed |
| +1 :green_heart: | compile | 1m 13s | the patch passed |
| +1 :green_heart: | javac | 1m 13s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 57s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 49s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 46s | hbase-asyncfs in the patch passed. |
| -1 :x: | unit | 7m 52s | hbase-server in the patch failed. |
| | | | 33m 43s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2028 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux f0de9667dbd2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.3 / 5d5b156ec3 |
| Default Java | 1.8.0_232 |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/testReport/ |
| Max. process+thread count | 767 (vs. ulimit of 12500) |
| modules | C: hbase-asyncfs hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2028: Backport "HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (#1970)" to branch-2.3
Apache-HBase commented on pull request #2028: URL: https://github.com/apache/hbase/pull/2028#issuecomment-654378957

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|:---------:|:-------:|:--------|
| +0 :ok: | reexec | 1m 27s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2.3 Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 55s | branch-2.3 passed |
| +1 :green_heart: | checkstyle | 1m 30s | branch-2.3 passed |
| +1 :green_heart: | spotbugs | 2m 37s | branch-2.3 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 37s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 11s | The patch passed checkstyle in hbase-asyncfs |
| +1 :green_heart: | checkstyle | 1m 14s | hbase-server: The patch generated 0 new + 44 unchanged - 3 fixed = 44 total (was 47) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 18m 41s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 54s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | | 45m 8s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2028 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux b5d4ed50dfbb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.3 / 5d5b156ec3 |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-asyncfs hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] timoha commented on pull request #1826: HBASE-24438 Don't update TaskMonitor when deserializing ServerCrashProcedure
timoha commented on pull request #1826: URL: https://github.com/apache/hbase/pull/1826#issuecomment-654381936

Looks like operator intervention is needed for this issue :)

> I for one do not look at TaskMonitor figuring state of Procedures. Do others? Thanks.

Just from my perspective, I find it useful to see the procedure progress, as looking plainly at the procedure list isn't as helpful. In an ideal world, I wouldn't need this information at all, as it would just do its job (and would only show something when it's broken). However, since this task exists, it should not have false positives. To make it clear, I'm against "improving" this side-effect as I wouldn't find it helpful to me as an operator (I just don't care that something is de-serializing); that was just a suggestion that I now regret bringing up. I'm ok with closing this PR if you decide to go that way.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24376) MergeNormalizer is merging non-adjacent regions and causing region overlaps/holes.
[ https://issues.apache.org/jira/browse/HBASE-24376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24376: Fix Version/s: (was: 2.4.0)

> MergeNormalizer is merging non-adjacent regions and causing region overlaps/holes.
> Key: HBASE-24376
> URL: https://issues.apache.org/jira/browse/HBASE-24376
> Project: HBase
> Issue Type: Bug
> Components: master
> Affects Versions: 2.3.0
> Reporter: Huaxiang Sun
> Assignee: Huaxiang Sun
> Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Currently, we found the normalizer was merging regions which are non-adjacent; this causes inconsistencies in the cluster.
> {code:java}
> 439055 2020-05-08 17:47:09,814 INFO org.apache.hadoop.hbase.master.normalizer.MergeNormalizationPlan: Executing merging normalization plan: MergeNormalizationPlan{firstRegion={ENCODED => 47fe236a5e3649ded95cb64ad0c08492, NAME => 'TABLE,\x03\x01\x05\x01\x04\x02,1554838974870.47fe236a5e3649ded95cb64ad0c08492.', STARTKEY => '\x03\x01\x05\x01\x04\x02', ENDKEY => '\x03\x01\x05\x01\x04\x02\x01\x02\x02201904082200\x00\x00\x03Mac\x00\x00\x00\x00\x00\x00\x00\x00\x00iMac13,1\x00\x00\x00\x00\x00\x049.3-14E260\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x05'}, secondRegion={ENCODED => 0c0f2aa67f4329d5c48ba0320f173d31, NAME => 'TABLE,\x03\x01\x05\x02\x01\x01,1554830735526.0c0f2aa67f4329d5c48ba0320f173d31.', STARTKEY => '\x03\x01\x05\x02\x01\x01', ENDKEY => '\x03\x01\x05\x02\x01\x02'}}
> 439056 2020-05-08 17:47:11,438 INFO org.apache.hadoop.hbase.ScheduledChore: CatalogJanitor-*:16000 average execution time: 1676219193 ns.
> 439057 2020-05-08 17:47:11,730 INFO org.apache.hadoop.hbase.master.HMaster: Client=null/null merge regions [47fe236a5e3649ded95cb64ad0c08492], [0c0f2aa67f4329d5c48ba0320f173d31]
> {code}
> The root cause is that getMergeNormalizationPlan() uses a list of regionInfo which is ordered by regionName. regionName does not necessarily guarantee the order of STARTKEY (take 'aa1' and 'aa1!': in regionName order, 'aa1!' is followed by 'aa1'). This results in the normalizer merging non-adjacent regions into one and creating overlaps. This is not an issue in branch-1, as the list is already ordered by RegionInfo.COMPARATOR in the normalizer.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
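The 'aa1' / 'aa1!' example from the description can be checked directly. A region name embeds the start key followed by a comma and a timestamp; because ',' (0x2C) sorts after '!' (0x21), ordering by region name can reverse the start-key order. A small illustration (simplified name format, not the real `RegionInfo` encoding):

```java
public class RegionOrderDemo {
    /** Simplified region name: table,startKey,timestamp. */
    static String regionName(String table, String startKey, long ts) {
        return table + "," + startKey + "," + ts + ".";
    }

    public static void main(String[] args) {
        String a = regionName("t1", "aa1", 1554838974870L);
        String b = regionName("t1", "aa1!", 1554830735526L);
        // start keys: "aa1" sorts before "aa1!" (prefix rule)
        System.out.println("aa1".compareTo("aa1!") < 0);  // true
        // region names: ',' (0x2C) > '!' (0x21), so the order flips
        System.out.println(a.compareTo(b) < 0);           // false
    }
}
```

So a normalizer iterating in region-name order can see the 'aa1!' region before the 'aa1' region and treat non-adjacent regions as neighbors; sorting with a start-key-based comparator (as `RegionInfo.COMPARATOR` does on branch-1) avoids this.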
[GitHub] [hbase] saintstack commented on pull request #2006: HBASE-24632 Enable procedure-based log splitting as default in hbase3
saintstack commented on pull request #2006: URL: https://github.com/apache/hbase/pull/2006#issuecomment-654389336

Test failures are because we try to delete a non-empty directory. The left-over WALs are meta WALs, but for meta regions that have since moved to another server (after a successful close); i.e. the WALs are no longer needed; they are only for archive. The old zk-based WAL splitter specifically handled this case, archiving remaining meta files if the crashed server was NOT carrying meta. Added this special handling to the new procedure-based WAL split, which was missing it.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
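The special handling described can be sketched as: after splitting a crashed server's WALs, move any remaining meta WAL files to the archive rather than failing to delete a non-empty directory. The sketch below uses plain `java.nio.file`; the `.meta` suffix check and directory layout are illustrative assumptions, not the actual HBase code path:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class MetaWalArchiver {
    static final String META_SUFFIX = ".meta"; // assumed marker for meta WAL files

    /** Move leftover meta WALs into the archive dir so the WAL dir can be deleted. */
    static void archiveLeftoverMetaWals(Path walDir, Path archiveDir) {
        try {
            Files.createDirectories(archiveDir);
            try (Stream<Path> files = Files.list(walDir)) {
                for (Path wal : (Iterable<Path>) files::iterator) {
                    if (wal.getFileName().toString().endsWith(META_SUFFIX)) {
                        Files.move(wal, archiveDir.resolve(wal.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Self-contained exercise in temp dirs; true when only the meta WAL moved. */
    static boolean demo() {
        try {
            Path walDir = Files.createTempDirectory("waldir");
            Path archiveDir = Files.createTempDirectory("archdir");
            Files.createFile(walDir.resolve("rs1.meta"));   // leftover meta WAL
            Files.createFile(walDir.resolve("rs1.12345"));  // regular WAL, untouched
            archiveLeftoverMetaWals(walDir, archiveDir);
            return Files.exists(archiveDir.resolve("rs1.meta"))
                && !Files.exists(walDir.resolve("rs1.meta"))
                && Files.exists(walDir.resolve("rs1.12345"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```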
[jira] [Updated] (HBASE-24546) CloneSnapshotProcedure unlimited retry
[ https://issues.apache.org/jira/browse/HBASE-24546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24546: Fix Version/s: (was: 2.3.1) 1.3.0

> CloneSnapshotProcedure unlimited retry
> Key: HBASE-24546
> URL: https://issues.apache.org/jira/browse/HBASE-24546
> Project: HBase
> Issue Type: Bug
> Components: snapshots
> Affects Versions: 2.3.0, master, 2.2.5
> Reporter: wenfeiyi666
> Assignee: wenfeiyi666
> Priority: Major
> Fix For: 3.0.0-alpha-1, 1.3.0, 2.2.6
>
> Since the regions dir created in the previous execution is not removed, it needs to be cleaned up when retrying; otherwise the leftover dirs cause an exception and the procedure retries without limit:
> {code:java}
> procedure.CloneSnapshotProcedure: Retriable error trying to clone snapshot=snapshot_test to table=test:backup state=CLONE_SNAPSHOT_WRITE_FS_LAYOUT
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: clone snapshot={ ss=snapshot_test table=test:backup type=FLUSH } failed because A clone should not have regions to remove
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:434)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFsLayout(CloneSnapshotProcedure.java:465)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFilesystemLayout(CloneSnapshotProcedure.java:392)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:67)
>   at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194)
>   at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1662)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1409)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1979)
> Caused by: java.lang.IllegalArgumentException: A clone should not have regions to remove
>   at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:418)
>   ... 10 more
> {code}
> Also, the cloned region names are unchanged, so the newly created regions are removed when retrying.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
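The underlying problem is that the CLONE_SNAPSHOT_WRITE_FS_LAYOUT step is not idempotent: a retry finds the region dirs left behind by the previous attempt, and the "A clone should not have regions to remove" precondition fires on every attempt. One general fix pattern is to wipe partial output before recreating it so the step can be safely re-executed. A generic sketch of that pattern (plain `java.nio.file`, hypothetical names; not the actual HBASE-24546 patch):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class IdempotentLayoutStep {
    /** Delete leftovers from a previous attempt, then recreate the layout. */
    static void writeFsLayout(Path tableDir) {
        try {
            if (Files.exists(tableDir)) {
                try (Stream<Path> tree = Files.walk(tableDir)) {
                    // reverse order deletes children before their parent dirs
                    for (Path p : (Iterable<Path>)
                            tree.sorted(Comparator.reverseOrder())::iterator) {
                        Files.delete(p);
                    }
                }
            }
            Files.createDirectories(tableDir.resolve("region-1"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Run the step twice, as a retry would; true if the retry succeeds. */
    static boolean demo() {
        try {
            Path tableDir = Files.createTempDirectory("clone").resolve("table");
            writeFsLayout(tableDir); // first attempt leaves region dirs behind
            writeFsLayout(tableDir); // retry must not trip over them
            return Files.isDirectory(tableDir.resolve("region-1"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```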
[jira] [Updated] (HBASE-24665) all wal of RegionGroupingProvider together roll
[ https://issues.apache.org/jira/browse/HBASE-24665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24665: Fix Version/s: (was: 2.3.0) 2.3.1

> all wal of RegionGroupingProvider together roll
> Key: HBASE-24665
> URL: https://issues.apache.org/jira/browse/HBASE-24665
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.3.0, master, 2.1.10, 1.4.14, 2.2.6
> Reporter: wenfeiyi666
> Assignee: wenfeiyi666
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.1.10, 1.4.14, 2.2.7
>
> When using RegionGroupingProvider, a roll requested for any one wal causes all wals to roll together.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2006: HBASE-24632 Enable procedure-based log splitting as default in hbase3
Apache-HBase commented on pull request #2006: URL: https://github.com/apache/hbase/pull/2006#issuecomment-654404214

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|:---------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 43s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 34s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 34s | branch-2 passed |
| +1 :green_heart: | spotbugs | 2m 44s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 23s | the patch passed |
| -0 :warning: | checkstyle | 1m 3s | hbase-server: The patch generated 1 new + 22 unchanged - 4 fixed = 23 total (was 26) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 12m 20s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 3m 39s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | | 39m 46s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2006 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux 05bf0e4a77e2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 5416cef27f |
| checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152250#comment-17152250 ] Michael Stack commented on HBASE-24625:

I'd have reopened it because it is failing branch-2. See [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2/lastSuccessfulBuild/artifact/dashboard.html]. See the bottom half of the screen, where replication.regionserver.TestWALEntryStream fails since #6494. Here is what happens when I try the test locally:

{code:java}
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]   org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream.null
[ERROR]   Run 1: TestWALEntryStream.testReplicationSourceWALReaderRecovered:442 » TestTimedOut ...
[ERROR]   Run 2: TestWALEntryStream » Appears to be stuck in thread AsyncFSWAL-1-1
[INFO]
[ERROR] TestWALEntryStream.testReplicationSourceWALReaderRecovered:442 » Interrupted
[INFO]
[ERROR] Tests run: 4, Failures: 0, Errors: 2, Skipped: 0
{code}

Will try and take a look later...

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk commented on a change in pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
ndimiduk commented on a change in pull request #1970: URL: https://github.com/apache/hbase/pull/1970#discussion_r450430879

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java

```
@@ -46,6 +46,10 @@
   protected FSDataOutputStream output;

+  private volatile long syncedLength = 0;
```

Review comment: nit: why do we have `AtomicUtils.updateMax`? It seems [`getAndAccumulate`](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicLong.html#getAndAccumulate-long-java.util.function.LongBinaryOperator-) is designed for this use case, i.e., `syncedLength.getAndAccumulate(fsdos.getPos(), Math::max)`

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
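For reference, the `AtomicLong.getAndAccumulate` API suggested in the review exists since Java 8 and performs the whole max-update in one atomic step. A minimal sketch of how a synced-length field could use it (illustrative names, not the PR's code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SyncedLengthDemo {
    private final AtomicLong syncedLength = new AtomicLong();

    /** Record a newly observed synced position; never moves backwards. */
    void onSync(long pos) {
        // atomic max-update, equivalent in effect to an updateMax helper
        syncedLength.getAndAccumulate(pos, Math::max);
    }

    long get() {
        return syncedLength.get();
    }
}
```

Out-of-order completions (a late, smaller position arriving after a larger one) can never move the recorded length backwards.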
[GitHub] [hbase] ndimiduk commented on a change in pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
ndimiduk commented on a change in pull request #1970: URL: https://github.com/apache/hbase/pull/1970#discussion_r450435239

## File path: hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/WrapperAsyncFSOutput.java

```
@@ -91,7 +93,11 @@ private void flush0(CompletableFuture<Long> future, ByteArrayOutputStream buffer
         out.hflush();
       }
     }
-    future.complete(out.getPos());
+    long pos = out.getPos();
+    if (pos > this.syncedLength) {
+      this.syncedLength = pos;
```

Review comment: This read-followed-by-update also needs to be atomic, yes?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
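The race the reviewer points at: `if (pos > syncedLength) syncedLength = pos;` on a volatile field is a non-atomic read-then-write, so two flush completions can interleave and the smaller value can win. The classic fix is a compare-and-set loop; below is a sketch of the pattern an `updateMax`-style helper would use (not the actual patch):

```java
import java.util.concurrent.atomic.AtomicLong;

public class MaxUpdate {
    /** Atomically raise 'max' to at least 'candidate'; safe under races. */
    static void updateMax(AtomicLong max, long candidate) {
        long cur;
        while ((cur = max.get()) < candidate) {
            if (max.compareAndSet(cur, candidate)) {
                return; // we won the race
            }
            // another thread changed the value; re-read and retry
        }
    }
}
```

If the candidate is already below the current value, the loop body never runs, so a stale smaller position can never overwrite a larger one.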
[GitHub] [hbase] Apache-HBase commented on pull request #2028: Backport "HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length. (#1970)" to branch-2.3
Apache-HBase commented on pull request #2028: URL: https://github.com/apache/hbase/pull/2028#issuecomment-654432059

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|:---------:|:-------:|:--------|
| +0 :ok: | reexec | 0m 43s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2.3 Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 21s | branch-2.3 passed |
| +1 :green_heart: | compile | 1m 27s | branch-2.3 passed |
| +1 :green_heart: | shadedjars | 6m 1s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 19s | hbase-asyncfs in branch-2.3 failed. |
| -0 :warning: | javadoc | 0m 39s | hbase-server in branch-2.3 failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 5s | the patch passed |
| +1 :green_heart: | compile | 1m 28s | the patch passed |
| +1 :green_heart: | javac | 1m 28s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 16s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 20s | hbase-asyncfs in the patch failed. |
| -0 :warning: | javadoc | 0m 52s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 39s | hbase-asyncfs in the patch passed. |
| -1 :x: | unit | 132m 5s | hbase-server in the patch failed. |
| | | | 163m 20s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2028 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 94c781e66973 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.3 / 5d5b156ec3 |
| Default Java | 2020-01-14 |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-asyncfs.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-asyncfs.txt |
| javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/testReport/ |
| Max. process+thread count | 3755 (vs. ulimit of 12500) |
| modules | C: hbase-asyncfs hbase-server U: . |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2028/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152282#comment-17152282 ] Nick Dimiduk commented on HBASE-24625:

I'm seeing this on the 2.3 backport PR as well, https://github.com/apache/hbase/pull/2028

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2006: HBASE-24632 Enable procedure-based log splitting as default in hbase3
Apache-HBase commented on pull request #2006: URL: https://github.com/apache/hbase/pull/2006#issuecomment-654457327 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 42s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 57s | branch-2 passed | | +1 :green_heart: | compile | 1m 28s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 46s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 19s | hbase-common in branch-2 failed. | | -0 :warning: | javadoc | 0m 41s | hbase-server in branch-2 failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 53s | the patch passed | | +1 :green_heart: | compile | 1m 27s | the patch passed | | +1 :green_heart: | javac | 1m 27s | the patch passed | | +1 :green_heart: | shadedjars | 5m 45s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 18s | hbase-common in the patch failed. | | -0 :warning: | javadoc | 0m 43s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 38s | hbase-common in the patch passed. | | -1 :x: | unit | 132m 24s | hbase-server in the patch failed. 
| | | | 162m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2006 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b8186d49463e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 5416cef27f | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/testReport/ | | Max. process+thread count | 4159 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2006: HBASE-24632 Enable procedure-based log splitting as default in hbase3
Apache-HBase commented on pull request #2006: URL: https://github.com/apache/hbase/pull/2006#issuecomment-654458964 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 43s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 36s | branch-2 passed | | +1 :green_heart: | compile | 1m 21s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 10s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 16s | the patch passed | | +1 :green_heart: | compile | 1m 19s | the patch passed | | +1 :green_heart: | javac | 1m 19s | the patch passed | | +1 :green_heart: | shadedjars | 4m 58s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 21s | hbase-common in the patch passed. | | -1 :x: | unit | 139m 44s | hbase-server in the patch failed. 
| | | | 166m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2006 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 30ce71f34804 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 5416cef27f | | Default Java | 1.8.0_232 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/testReport/ | | Max. process+thread count | 4361 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2006/5/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152321#comment-17152321 ] Nick Dimiduk commented on HBASE-24625: -- Running {{TestWALEntryStream}} on the branch-2.3 backport, I get
{noformat}
2020-07-06 12:59:42,360 DEBUG [Thread-184] regionserver.WALEntryStream(252): Reached the end of log hdfs://localhost:56832/Users/ndimiduk/repos/apache/hbase/hbase-server/target/test-data/d93766f2-8459-c5b2-fc20-bea78d16ff02/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1594065581848
Exception in thread "Thread-184" java.lang.NullPointerException
    at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.getSyncedLength(AsyncProtobufLogWriter.java:237)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.getLogFileSizeIfBeingWritten(AbstractFSWAL.java:1064)
    at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:265)
    at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:189)
    at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:101)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:195)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:138)
{noformat}
> AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24625: - Fix Version/s: (was: 2.3.0) 2.3.1
> AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24546) CloneSnapshotProcedure unlimited retry
[ https://issues.apache.org/jira/browse/HBASE-24546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24546: - Fix Version/s: (was: 1.3.0) 2.3.0
> CloneSnapshotProcedure unlimited retry
>
> Key: HBASE-24546
> URL: https://issues.apache.org/jira/browse/HBASE-24546
> Project: HBase
> Issue Type: Bug
> Components: snapshots
> Affects Versions: 2.3.0, master, 2.2.5
> Reporter: wenfeiyi666
> Assignee: wenfeiyi666
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
> Since the regions dir created by the previous execution was not removed, it needs to be removed when retrying; as it stands, the leftover dir causes an exception, leading to unlimited retries.
> {code:java}
> procedure.CloneSnapshotProcedure: Retriable error trying to clone snapshot=snapshot_test to table=test:backup state=CLONE_SNAPSHOT_WRITE_FS_LAYOUT
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: clone snapshot={ ss=snapshot_test table=test:backup type=FLUSH } failed because A clone should not have regions to remove
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:434)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFsLayout(CloneSnapshotProcedure.java:465)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFilesystemLayout(CloneSnapshotProcedure.java:392)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:67)
>   at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194)
>   at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1662)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1409)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1979)
> Caused by: java.lang.IllegalArgumentException: A clone should not have regions to remove
>   at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:418)
>   ... 10 more
> {code}
> Also, the cloned regions' names are unchanged, so the regions newly created by one retry would themselves be removed by the next retry.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
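The retry loop described in the issue above fails because the write-FS-layout step is not idempotent: leftovers from the previous attempt trip the "a clone should not have regions to remove" precondition. A generic, hypothetical sketch of making such a step idempotent (plain java.nio.file, with illustrative names — not CloneSnapshotProcedure's actual code, which works against HDFS):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

// Sketch: an idempotent "create layout" step. If a previous attempt left a
// partial directory behind, remove it first so the retry starts from a clean
// slate instead of tripping a precondition check.
class IdempotentStep {
    static void createLayout(Path regionsDir) throws IOException {
        if (Files.exists(regionsDir)) {
            // leftover from a failed attempt: delete children before parents
            try (var walk = Files.walk(regionsDir)) {
                walk.sorted(Comparator.reverseOrder())
                    .forEach(p -> p.toFile().delete());
            }
        }
        Files.createDirectories(regionsDir);
        // ... recreate region directories here ...
    }
}
```

The general design point is that any procedure step that may be re-executed after a crash or retriable error must either tolerate or clean up its own partial output.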
[jira] [Created] (HBASE-24687) New Connection being created for each table
Manas created HBASE-24687: - Summary: New Connection being created for each table Key: HBASE-24687 URL: https://issues.apache.org/jira/browse/HBASE-24687 Project: HBase Issue Type: Bug Components: mob Affects Versions: 2.2.3 Reporter: Manas Attachments: Screen Shot 2020-07-06 at 6.06.43 PM.png Currently a new connection is created for every table under MobFileCleanerChore.java, where we should theoretically just use the connection from the HBase MasterServices. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24688) AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of EventType.M_RS_OPEN_META
Huaxiang Sun created HBASE-24688: Summary: AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of EventType.M_RS_OPEN_META Key: HBASE-24688 URL: https://issues.apache.org/jira/browse/HBASE-24688 Project: HBase Issue Type: Bug Reporter: Huaxiang Sun This results in openMetaRegion always being executed in closeMetaExecutor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24688) AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of EventType.M_RS_OPEN_META
[ https://issues.apache.org/jira/browse/HBASE-24688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun reassigned HBASE-24688: Assignee: Huaxiang Sun > AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of > EventType.M_RS_OPEN_META > -- > > Key: HBASE-24688 > URL: https://issues.apache.org/jira/browse/HBASE-24688 > Project: HBase > Issue Type: Bug >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > This results in openMetaRegion always being executed in closeMetaExecutor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24688) AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of EventType.M_RS_OPEN_META for meta region
[ https://issues.apache.org/jira/browse/HBASE-24688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun updated HBASE-24688: - Summary: AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of EventType.M_RS_OPEN_META for meta region (was: AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of EventType.M_RS_OPEN_META) > AssignRegionHandler uses EventType.M_RS_CLOSE_META instead of > EventType.M_RS_OPEN_META for meta region > -- > > Key: HBASE-24688 > URL: https://issues.apache.org/jira/browse/HBASE-24688 > Project: HBase > Issue Type: Bug >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > This results in openMetaRegion always being executed in closeMetaExecutor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] huaxiangsun opened a new pull request #2029: HBASE-24688 AssignRegionHandler uses EventType.M_RS_CLOSE_META instea…
huaxiangsun opened a new pull request #2029: URL: https://github.com/apache/hbase/pull/2029 …d of EventType.M_RS_OPEN_META for meta region This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] huaxiangsun commented on pull request #2029: HBASE-24688 AssignRegionHandler uses EventType.M_RS_CLOSE_META instea…
huaxiangsun commented on pull request #2029: URL: https://github.com/apache/hbase/pull/2029#issuecomment-654521294 A straightforward fix. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152383#comment-17152383 ] Nick Dimiduk commented on HBASE-24625: -- From a thread dump of {{TestWALEntryStream}} when the test gets killed
{noformat}
"Time-limited test" java.lang.Thread.State: TIMED_WAITING
    at java.base@11.0.4/java.lang.Object.wait(Native Method)
    at app//org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:142)
    at app//org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.blockOnSync(AbstractFSWAL.java:752)
    at app//org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:645)
    at app//org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:604)
    at app//org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream.appendToLogAndSync(TestWALEntryStream.java:581)
    at app//org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream.testDifferentCounts(TestWALEntryStream.java:161)
{noformat}
> AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2029: HBASE-24688 AssignRegionHandler uses EventType.M_RS_CLOSE_META instea…
Apache-HBase commented on pull request #2029: URL: https://github.com/apache/hbase/pull/2029#issuecomment-654533110 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 4m 21s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 48s | master passed | | +1 :green_heart: | checkstyle | 1m 6s | master passed | | +1 :green_heart: | spotbugs | 1m 58s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 21s | the patch passed | | +1 :green_heart: | checkstyle | 1m 5s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 1s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 7s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. | | | | 36m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2029 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 5c79b791e4f2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a1d7e6e253 | | Max. process+thread count | 94 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-24689) Generate CHANGES.md and RELEASENOTES.md for 2.2.6
Guanghao Zhang created HBASE-24689: -- Summary: Generate CHANGES.md and RELEASENOTES.md for 2.2.6 Key: HBASE-24689 URL: https://issues.apache.org/jira/browse/HBASE-24689 Project: HBase Issue Type: Sub-task Reporter: Guanghao Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24690) Set version to 2.2.6 in branch-2.2 for first RC of 2.2.6
Guanghao Zhang created HBASE-24690: -- Summary: Set version to 2.2.6 in branch-2.2 for first RC of 2.2.6 Key: HBASE-24690 URL: https://issues.apache.org/jira/browse/HBASE-24690 Project: HBase Issue Type: Sub-task Reporter: Guanghao Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24691) Fix flaky TestWALEntryStream
Guanghao Zhang created HBASE-24691: -- Summary: Fix flaky TestWALEntryStream Key: HBASE-24691 URL: https://issues.apache.org/jira/browse/HBASE-24691 Project: HBase Issue Type: Sub-task Reporter: Guanghao Zhang [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/dashboard.html] Failed 100.0% (13 / 13) recently. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24663) Add procedure process time statistics UI
[ https://issues.apache.org/jira/browse/HBASE-24663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang reassigned HBASE-24663: -- Assignee: Junhong Xu > Add procedure process time statistics UI > > > Key: HBASE-24663 > URL: https://issues.apache.org/jira/browse/HBASE-24663 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Junhong Xu >Priority: Major > > Added in "Procedures & Locks" jsp. > For the first version UI, we care about the process time of > ServerCrashProcedure, TRSP, OpenRegionProcedure and CloseRegionProcedure. > Plan to show the avg/P50/P90/min/max process time of these procedures. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24691) Fix flaky TestWALEntryStream
[ https://issues.apache.org/jira/browse/HBASE-24691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang reassigned HBASE-24691: -- Assignee: Guanghao Zhang > Fix flaky TestWALEntryStream > > > Key: HBASE-24691 > URL: https://issues.apache.org/jira/browse/HBASE-24691 > Project: HBase > Issue Type: Sub-task >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > > [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/dashboard.html] > > Failed 100.0% (13 / 13) recently. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24663) Add procedure process time statistics UI
[ https://issues.apache.org/jira/browse/HBASE-24663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152407#comment-17152407 ] Guanghao Zhang commented on HBASE-24663: Assigned to you [~Joseph295]. > Add procedure process time statistics UI > > > Key: HBASE-24663 > URL: https://issues.apache.org/jira/browse/HBASE-24663 > Project: HBase > Issue Type: Improvement >Reporter: Guanghao Zhang >Assignee: Junhong Xu >Priority: Major > > Added in "Procedures & Locks" jsp. > For the first version UI, we care about the process time of > ServerCrashProcedure, TRSP, OpenRegionProcedure and CloseRegionProcedure. > Plan to show the avg/P50/P90/min/max process time of these procedures. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24615) MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket.
[ https://issues.apache.org/jira/browse/HBASE-24615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152415#comment-17152415 ] wenfeiyi666 commented on HBASE-24615: - Thanks, I got it.
> MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket.
>
> Key: HBASE-24615
> URL: https://issues.apache.org/jira/browse/HBASE-24615
> Project: HBase
> Issue Type: Bug
> Components: metrics
> Affects Versions: 2.3.0, master, 1.3.7, 2.2.6
> Reporter: Rushabh Shah
> Assignee: wenfeiyi666
> Priority: Major
>
> We are not processing the distribution for the last bucket.
> https://github.com/apache/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java#L70
> {code:java}
> public void updateSnapshotRangeMetrics(MetricsRecordBuilder metricsRecordBuilder,
>     Snapshot snapshot) {
>   long priorRange = 0;
>   long cumNum = 0;
>   final long[] ranges = getRanges();
>   final String rangeType = getRangeType();
>   for (int i = 0; i < ranges.length - 1; i++) {  // -> The bug lies here. We are not processing the last bucket.
>     long val = snapshot.getCountAtOrBelow(ranges[i]);
>     if (val - cumNum > 0) {
>       metricsRecordBuilder.addCounter(
>         Interns.info(name + "_" + rangeType + "_" + priorRange + "-" + ranges[i], desc),
>         val - cumNum);
>     }
>     priorRange = ranges[i];
>     cumNum = val;
>   }
>   long val = snapshot.getCount();
>   if (val - cumNum > 0) {
>     metricsRecordBuilder.addCounter(
>       Interns.info(name + "_" + rangeType + "_" + ranges[ranges.length - 1] + "-inf", desc),
>       val - cumNum);
>   }
> }
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
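The off-by-one described in the issue above (looping only to `ranges.length - 1` drops the final finite bucket) can be demonstrated with a standalone sketch that processes every finite bucket plus the "-inf" overflow bucket. This is an illustration of the fix idea with made-up names, not the actual MutableRangeHistogram code:

```java
import java.util.Arrays;

// Sketch: count how many samples fall in each histogram bucket.
// Buckets are (prev, ranges[i]] for every range boundary, plus a final
// (ranges[last], +inf) overflow bucket — the one the bug dropped.
class BucketDemo {
    static long[] bucketCounts(long[] ranges, long[] samples) {
        long[] counts = new long[ranges.length + 1];
        long cumNum = 0;
        for (int i = 0; i < ranges.length; i++) { // full length, NOT length - 1
            final long bound = ranges[i];
            long val = Arrays.stream(samples).filter(s -> s <= bound).count();
            counts[i] = val - cumNum;  // samples in (previous bound, bound]
            cumNum = val;
        }
        counts[ranges.length] = samples.length - cumNum; // "-inf" overflow bucket
        return counts;
    }
}
```

With the buggy `ranges.length - 1` loop bound, the sample falling in the last finite bucket would be silently merged into the overflow bucket's count.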
[GitHub] [hbase] Apache9 commented on a change in pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
Apache9 commented on a change in pull request #1970: URL: https://github.com/apache/hbase/pull/1970#discussion_r450569339
## File path: hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/WrapperAsyncFSOutput.java
@@ -91,7 +93,11 @@ private void flush0(CompletableFuture future, ByteArrayOutputStream buffer
       out.hflush();
     }
   }
-  future.complete(out.getPos());
+  long pos = out.getPos();
+  if (pos > this.syncedLength) {
+    this.syncedLength = pos;
Review comment: This one is only used for tests, so it is not a big problem, but aligning with the other production implementations would be better. We can do it in an addendum. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24546) CloneSnapshotProcedure unlimited retry
[ https://issues.apache.org/jira/browse/HBASE-24546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152424#comment-17152424 ] Hudson commented on HBASE-24546: Results for branch branch-2 [build #2736 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2736/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2736/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2736/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2736/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2736/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color}
> CloneSnapshotProcedure unlimited retry
> --
>
> Key: HBASE-24546
> URL: https://issues.apache.org/jira/browse/HBASE-24546
> Project: HBase
> Issue Type: Bug
> Components: snapshots
> Affects Versions: 2.3.0, master, 2.2.5
> Reporter: wenfeiyi666
> Assignee: wenfeiyi666
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
> Since the regions dir created by the previous execution was not removed, it needs to be removed when retrying; otherwise the retry fails with an exception, causing unlimited retries.
> {code:java}
> procedure.CloneSnapshotProcedure: Retriable error trying to clone snapshot=snapshot_test to table=test:backup state=CLONE_SNAPSHOT_WRITE_FS_LAYOUT
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: clone snapshot={ ss=snapshot_test table=test:backup type=FLUSH } failed because A clone should not have regions to remove
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:434)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFsLayout(CloneSnapshotProcedure.java:465)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFilesystemLayout(CloneSnapshotProcedure.java:392)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:67)
>   at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194)
>   at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1662)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1409)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1979)
> Caused by: java.lang.IllegalArgumentException: A clone should not have regions to remove
>   at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:418)
>   ... 10 more
> {code}
> Also, the cloned region names are unchanged, resulting in the regions newly created by the retry being removed.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
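The root cause described in this issue is a non-idempotent procedure step: CLONE_SNAPSHOT_WRITE_FS_LAYOUT leaves region directories behind on failure, and the retry then trips over them. The generic retry-safe pattern is to delete any partial output from the previous attempt before recreating it. The sketch below illustrates that pattern with plain java.nio paths and hypothetical names; it is not the actual HBase fix, which operates on HDFS through the Hadoop FileSystem API.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Hypothetical sketch of a retry-safe "write FS layout" step: remove any
// leftovers from a previous failed attempt, then create the layout fresh.
public class IdempotentLayoutSketch {
  static void writeLayout(Path regionsDir) throws IOException {
    if (Files.exists(regionsDir)) {
      // A previous attempt died partway through; start from a clean slate.
      try (Stream<Path> walk = Files.walk(regionsDir)) {
        walk.sorted(Comparator.reverseOrder()).forEach(p -> { // children before parents
          try {
            Files.delete(p);
          } catch (IOException e) {
            throw new UncheckedIOException(e);
          }
        });
      }
    }
    Files.createDirectories(regionsDir.resolve("region-1")); // recreate the layout
  }
}
```

With this shape, running the step twice in a row succeeds, which is exactly what a procedure retry needs.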
[jira] [Assigned] (HBASE-24684) Fetch ReplicationSink servers list from HMaster instead of ZooKeeper
[ https://issues.apache.org/jira/browse/HBASE-24684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sun Xin reassigned HBASE-24684: --- Assignee: Sun Xin > Fetch ReplicationSink servers list from HMaster instead of ZooKeeper > > > Key: HBASE-24684 > URL: https://issues.apache.org/jira/browse/HBASE-24684 > Project: HBase > Issue Type: Sub-task >Reporter: Guanghao Zhang >Assignee: Sun Xin >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24683) Add a basic ReplicationServer which only implement ReplicationSink Service
[ https://issues.apache.org/jira/browse/HBASE-24683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sun Xin reassigned HBASE-24683: --- Assignee: Sun Xin > Add a basic ReplicationServer which only implement ReplicationSink Service > -- > > Key: HBASE-24683 > URL: https://issues.apache.org/jira/browse/HBASE-24683 > Project: HBase > Issue Type: Sub-task >Reporter: Guanghao Zhang >Assignee: Sun Xin >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 commented on a change in pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
Apache9 commented on a change in pull request #1970: URL: https://github.com/apache/hbase/pull/1970#discussion_r450577129
## File path: hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/WrapperAsyncFSOutput.java
@@ -91,7 +93,11 @@ private void flush0(CompletableFuture future, ByteArrayOutputStream buffer
       out.hflush();
     }
   }
-  future.complete(out.getPos());
+  long pos = out.getPos();
+  if (pos > this.syncedLength) {
+    this.syncedLength = pos;
Review comment: Oh, I reviewed the code again; actually, the flush0 method can only be executed by a single thread, so there is no need to use AtomicUtils.updateMax. The AtomicLong is in the ProtobufLogWriter, not the output stream. But the 'if (pos > this.syncedLength) {' check is a bit confusing to developers; I would prefer we just remove it... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
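The two idioms discussed in this review can be contrasted in isolation. The class below is an illustrative sketch with hypothetical names, not HBase code: a plain greater-than check is enough when only one thread ever advances the value (the single-writer flush0 case), while a compare-and-set loop, the idea behind AtomicUtils.updateMax, is needed when several threads may race. Either way, the synced length only ever moves forward.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch contrasting single-writer and multi-writer monotonic-max updates.
public class SyncedLengthSketch {
  // Single-writer case: a plain compare-and-assign suffices, since no other
  // thread can interleave between the read and the write.
  static long advance(long current, long pos) {
    return pos > current ? pos : current;
  }

  // Multi-writer case: lock-free monotonic max via a CAS retry loop.
  static void updateMax(AtomicLong max, long pos) {
    long cur;
    while ((cur = max.get()) < pos) {
      if (max.compareAndSet(cur, pos)) {
        break; // we won the race; the value is now at least pos
      }
      // CAS failed: another thread moved the value; re-read and retry.
    }
  }

  public static void main(String[] args) {
    AtomicLong synced = new AtomicLong(0);
    updateMax(synced, 42);
    updateMax(synced, 17); // a stale position must not move the length backwards
    System.out.println(synced.get()); // 42
  }
}
```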
[jira] [Commented] (HBASE-24691) Fix flaky TestWALEntryStream
[ https://issues.apache.org/jira/browse/HBASE-24691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152437#comment-17152437 ] Guanghao Zhang commented on HBASE-24691:
{code:java}
2020-07-07 09:17:39,962 INFO [Time-limited test] regionserver.ReplicationSourceWALReader(115): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=10, replicationBatchQueueCapacity=1
2020-07-07 09:17:39,978 DEBUG [Thread-196] regionserver.WALEntryStream(251): Reached the end of log hdfs://localhost:44204/home/hao/open_source/hbase/hbase-server/target/test-data/3ee454c8-b764-b7c1-6312-9819838ebf2a/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1594084659042
Exception in thread "Thread-196" java.lang.NullPointerException
  at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.getSyncedLength(AsyncProtobufLogWriter.java:237)
  at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.getLogFileSizeIfBeingWritten(AbstractFSWAL.java:1017)
  at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
  at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:188)
  at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:101)
  at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:192)
  at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:138)
2020-07-07 09:30:30,177 DEBUG [Time-limited test] wal.AbstractFSWAL(858): Moved 2 WAL file(s) to /home/hao/open_source/hbase/hbase-server/target/test-data/3ee454c8-b764-b7c1-6312-9819838ebf2a/oldWALs
{code}
Got an NPE and the thread terminated.
> Fix flaky TestWALEntryStream > > > Key: HBASE-24691 > URL: https://issues.apache.org/jira/browse/HBASE-24691 > Project: HBase > Issue Type: Sub-task >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > > [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2.2/lastSuccessfulBuild/artifact/dashboard.html] > > Failed 100.0% (13 / 13) recently. -- This message was sent by Atlassian Jira (v8.3.4#803005)
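The stack trace above points at a race: getLogFileSizeIfBeingWritten dereferences the writer while a concurrent WAL roll can null it out. A common remedy, sketched below with hypothetical names (this is not the committed fix), is to read the volatile field once into a local variable and null-check that copy before use.

```java
import java.util.OptionalLong;

// Hypothetical sketch of the null-safe read pattern: the writer reference can
// be cleared or swapped by a concurrent WAL roll, so dereference a local copy.
public class WalLengthSketch {
  interface Writer {
    long getSyncedLength();
  }

  private volatile Writer writer; // null while no file is being written

  void setWriter(Writer w) {
    this.writer = w;
  }

  OptionalLong logFileSizeIfBeingWritten() {
    Writer w = writer; // read the volatile field exactly once
    return w == null ? OptionalLong.empty() : OptionalLong.of(w.getSyncedLength());
  }

  public static void main(String[] args) {
    WalLengthSketch wal = new WalLengthSketch();
    System.out.println(wal.logFileSizeIfBeingWritten()); // empty: no writer yet, no NPE
    wal.setWriter(() -> 1024L);
    System.out.println(wal.logFileSizeIfBeingWritten()); // OptionalLong[1024]
  }
}
```

Checking `this.writer` twice (once for null, once to call it) would reintroduce the window in which the roll can clear the field between the two reads.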
[GitHub] [hbase] bsglz commented on pull request #1909: HBASE-24569 Get hostAndWeights in addition using localhost if it is n…
bsglz commented on pull request #1909: URL: https://github.com/apache/hbase/pull/1909#issuecomment-654562715 The hostAndWeights map stores the weight of each host for a region; currently the host keys differ between modes, as shown below.
```
#local mode (run by IDEA on Windows)
"localhost" -> xxx

#distributed mode
"hostA" -> xxx
"hostB" -> yyy
"hostC" -> zzz
```
In local mode we cannot get the weight at present, and the code I added solves that. In distributed mode this code takes no effect, because no host is named "localhost". Thanks. @Apache9 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
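The lookup described in the comment above can be sketched generically. The class and method names below are illustrative, not the actual patch: try the real hostname first, and only fall back to the "localhost" key that a standalone run produces, so the fallback is a no-op on a real cluster.

```java
import java.util.Map;

// Illustrative sketch (hypothetical names, not the HBase patch): weight lookup
// with a fallback to the "localhost" key used in local/standalone mode.
public class HostWeightSketch {
  static long weightFor(Map<String, Long> hostAndWeights, String hostname) {
    Long w = hostAndWeights.get(hostname);
    if (w == null) {
      // Local mode keys the map by "localhost"; a distributed cluster has no
      // such host, so this extra lookup changes nothing there.
      w = hostAndWeights.get("localhost");
    }
    return w == null ? 0L : w;
  }

  public static void main(String[] args) {
    Map<String, Long> localMode = Map.of("localhost", 128L);
    System.out.println(weightFor(localMode, "my-dev-box")); // falls back to "localhost": 128
    Map<String, Long> cluster = Map.of("hostA", 5L, "hostB", 7L);
    System.out.println(weightFor(cluster, "hostA")); // direct hit: 5
  }
}
```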
[jira] [Commented] (HBASE-24546) CloneSnapshotProcedure unlimited retry
[ https://issues.apache.org/jira/browse/HBASE-24546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152441#comment-17152441 ] Hudson commented on HBASE-24546: Results for branch branch-2.2 [build #907 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/907/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/907//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/907//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/907//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color}
> CloneSnapshotProcedure unlimited retry
> --
>
> Key: HBASE-24546
> URL: https://issues.apache.org/jira/browse/HBASE-24546
> Project: HBase
> Issue Type: Bug
> Components: snapshots
> Affects Versions: 2.3.0, master, 2.2.5
> Reporter: wenfeiyi666
> Assignee: wenfeiyi666
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
> Since the regions dir created by the previous execution was not removed, it needs to be removed when retrying; otherwise the retry fails with an exception, causing unlimited retries.
> {code:java}
> procedure.CloneSnapshotProcedure: Retriable error trying to clone snapshot=snapshot_test to table=test:backup state=CLONE_SNAPSHOT_WRITE_FS_LAYOUT
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotException: clone snapshot={ ss=snapshot_test table=test:backup type=FLUSH } failed because A clone should not have regions to remove
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:434)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFsLayout(CloneSnapshotProcedure.java:465)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.createFilesystemLayout(CloneSnapshotProcedure.java:392)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure.executeFromState(CloneSnapshotProcedure.java:67)
>   at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194)
>   at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1662)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1409)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78)
>   at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1979)
> Caused by: java.lang.IllegalArgumentException: A clone should not have regions to remove
>   at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
>   at org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createHdfsRegions(CloneSnapshotProcedure.java:418)
>   ... 10 more
> {code}
> Also, the cloned region names are unchanged, resulting in the regions newly created by the retry being removed.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2029: HBASE-24688 AssignRegionHandler uses EventType.M_RS_CLOSE_META instea…
Apache-HBase commented on pull request #2029: URL: https://github.com/apache/hbase/pull/2029#issuecomment-654565032 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 4m 9s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 17s | master passed | | +1 :green_heart: | compile | 1m 6s | master passed | | +1 :green_heart: | shadedjars | 5m 51s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 42s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 1s | the patch passed | | +1 :green_heart: | compile | 1m 3s | the patch passed | | +1 :green_heart: | javac | 1m 3s | the patch passed | | +1 :green_heart: | shadedjars | 5m 39s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 128m 48s | hbase-server in the patch failed. 
| | | | 158m 28s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2029 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3336e6026794 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a1d7e6e253 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/testReport/ | | Max. process+thread count | 3925 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2029: HBASE-24688 AssignRegionHandler uses EventType.M_RS_CLOSE_META instea…
Apache-HBase commented on pull request #2029: URL: https://github.com/apache/hbase/pull/2029#issuecomment-654566841 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 4m 5s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | master passed | | +1 :green_heart: | compile | 0m 54s | master passed | | +1 :green_heart: | shadedjars | 5m 34s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 24s | the patch passed | | +1 :green_heart: | compile | 0m 55s | the patch passed | | +1 :green_heart: | javac | 0m 55s | the patch passed | | +1 :green_heart: | shadedjars | 5m 32s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 137m 17s | hbase-server in the patch passed. | | | | 164m 53s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2029 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0fa3aacbcf97 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a1d7e6e253 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/testReport/ | | Max. 
process+thread count | 4385 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2029/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2016: HBASE-24653 Show snapshot owner on Master WebUI
Apache-HBase commented on pull request #2016: URL: https://github.com/apache/hbase/pull/2016#issuecomment-654568086 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 7m 45s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | ||| _ Patch Compile Tests _ | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 9m 14s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2016 | | Optional Tests | dupname asflicense | | uname | Linux 4a328f5de702 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a1d7e6e253 | | Max. process+thread count | 43 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org