[GitHub] [hbase] Apache-HBase commented on pull request #1814: HBASE-20904 Prometheus /metrics http endpoint for monitoring
Apache-HBase commented on pull request #1814: URL: https://github.com/apache/hbase/pull/1814#issuecomment-653849751 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 25s | master passed | | +1 :green_heart: | checkstyle | 1m 57s | master passed | | +0 :ok: | refguide | 4m 58s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | spotbugs | 10m 9s | master passed | | -0 :warning: | patch | 9m 1s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 23s | the patch passed | | -0 :warning: | checkstyle | 1m 55s | root: The patch generated 29 new + 7 unchanged - 0 fixed = 36 total (was 7) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. | | +0 :ok: | refguide | 5m 2s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | | +1 :green_heart: | hadoopcheck | 10m 59s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 10m 53s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 52s | The patch does not generate ASF License warnings. 
| | | | 62m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1814/11/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1814 | | Optional Tests | dupname asflicense refguide xml spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 563604ed26d0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / a7a0e1a596 | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1814/11/artifact/yetus-general-check/output/branch-site/book.html | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1814/11/artifact/yetus-general-check/output/diff-checkstyle-root.txt | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1814/11/artifact/yetus-general-check/output/patch-site/book.html | | Max. process+thread count | 137 (vs. ulimit of 12500) | | modules | C: hbase-common hbase-hadoop-compat hbase-http . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1814/11/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24659) Calculate FIXED_OVERHEAD automatically
[ https://issues.apache.org/jira/browse/HBASE-24659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-24659: -- Summary: Calculate FIXED_OVERHEAD automatically (was: Calcuate FIXED_OVERHEAD automatically) > Calculate FIXED_OVERHEAD automatically > -- > > Key: HBASE-24659 > URL: https://issues.apache.org/jira/browse/HBASE-24659 > Project: HBase > Issue Type: Improvement >Reporter: Duo Zhang >Assignee: niuyulin >Priority: Major > > Now the FIXED_OVERHEAD values in some classes are maintained manually, and we have a > method in TestHeapSizes to confirm that the value is correct. > But it is really hard for developers to count the fields in a complicated > class like HRegion. Since we have the ability to calculate the accurate size > in a unit test, it is also possible to calculate it when loading the class, > which is a one-time operation and so should not affect performance too much. -- This message was sent by Atlassian Jira (v8.3.4#803005)
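The proposal above — computing the overhead once when the class loads instead of hand-counting fields — can be sketched in plain Java. This is an illustrative sketch, not HBase's actual heap-size machinery: `FixedOverheadSketch` is a hypothetical class, and the header/reference sizes are assumed constants for a 64-bit JVM with compressed oops (a real estimator would detect them from the running JVM).

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class FixedOverheadSketch {
    // Assumed layout constants (64-bit JVM, compressed oops).
    static final int OBJECT_HEADER = 16;
    static final int REFERENCE = 4;

    // Walk the class hierarchy and sum instance-field sizes via reflection.
    static long estimateFixedOverhead(Class<?> clazz) {
        long size = OBJECT_HEADER;
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (Modifier.isStatic(f.getModifiers())) {
                    continue; // static fields live with the class, not the instance
                }
                Class<?> t = f.getType();
                if (!t.isPrimitive()) {
                    size += REFERENCE;
                } else if (t == long.class || t == double.class) {
                    size += 8;
                } else if (t == int.class || t == float.class) {
                    size += 4;
                } else if (t == short.class || t == char.class) {
                    size += 2;
                } else {
                    size += 1; // byte, boolean
                }
            }
        }
        return size;
    }

    // Computed once at class-load time, as the issue proposes,
    // instead of a manually maintained constant.
    static final long FIXED_OVERHEAD = estimateFixedOverhead(FixedOverheadSketch.class);

    private Object ref;   // counted as one compressed reference (4 bytes)
    private long counter; // counted as 8 bytes

    public static void main(String[] args) {
        System.out.println(FIXED_OVERHEAD); // 16 (header) + 4 (ref) + 8 (long) = 28
    }
}
```

Because the static initializer runs a single time per class load, the reflection cost is paid once — which is the issue's argument for why this should not affect performance too much.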
[GitHub] [hbase] Apache-HBase commented on pull request #2017: HBASE-24669 Logging of ppid should be consistent across all occurrences
Apache-HBase commented on pull request #2017: URL: https://github.com/apache/hbase/pull/2017#issuecomment-653836591 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 21s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 1s | branch-2 passed | | +1 :green_heart: | checkstyle | 0m 14s | branch-2 passed | | +1 :green_heart: | spotbugs | 0m 33s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 35s | the patch passed | | +1 :green_heart: | checkstyle | 0m 13s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 40s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 0m 40s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 31m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2017 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux e031f3664a69 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / f834919929 | | Max. process+thread count | 84 (vs. 
ulimit of 12500) | | modules | C: hbase-procedure U: hbase-procedure | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2017: HBASE-24669 Logging of ppid should be consistent across all occurrences
Apache-HBase commented on pull request #2017: URL: https://github.com/apache/hbase/pull/2017#issuecomment-653836459 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 55s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 10s | branch-2 passed | | +1 :green_heart: | compile | 0m 23s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 31s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 19s | hbase-procedure in branch-2 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 38s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | +1 :green_heart: | shadedjars | 6m 35s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 17s | hbase-procedure in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 58s | hbase-procedure in the patch passed. 
| | | | 29m 23s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2017 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 11fbfe2dbdc2 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / f834919929 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-procedure.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-procedure.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/testReport/ | | Max. process+thread count | 219 (vs. ulimit of 12500) | | modules | C: hbase-procedure U: hbase-procedure | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2017: HBASE-24669 Logging of ppid should be consistent across all occurrences
Apache-HBase commented on pull request #2017: URL: https://github.com/apache/hbase/pull/2017#issuecomment-653836281 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 50s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 23s | branch-2 passed | | +1 :green_heart: | compile | 0m 18s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 39s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 47s | the patch passed | | +1 :green_heart: | compile | 0m 19s | the patch passed | | +1 :green_heart: | javac | 0m 19s | the patch passed | | +1 :green_heart: | shadedjars | 5m 36s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 5s | hbase-procedure in the patch passed. | | | | 25m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2017 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0e35c0cece8b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / f834919929 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/testReport/ | | Max. 
process+thread count | 247 (vs. ulimit of 12500) | | modules | C: hbase-procedure U: hbase-procedure | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2017/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] nyl3532016 opened a new pull request #2017: HBASE-24669 Logging of ppid should be consistent across all occurrences
nyl3532016 opened a new pull request #2017: URL: https://github.com/apache/hbase/pull/2017
[jira] [Commented] (HBASE-11288) Splittable Meta
[ https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151444#comment-17151444 ] Hudson commented on HBASE-11288: Results for branch HBASE-11288.splittable-meta [build #14 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/14/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/14/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/14/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/14/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Splittable Meta > --- > > Key: HBASE-11288 > URL: https://issues.apache.org/jira/browse/HBASE-11288 > Project: HBase > Issue Type: Umbrella > Components: meta >Reporter: Francis Christopher Liu >Assignee: Francis Christopher Liu >Priority: Major >
[jira] [Commented] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151434#comment-17151434 ] Hudson commented on HBASE-24625: Results for branch branch-2.2 [build #906 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced > file length. > > > Key: HBASE-24625 > URL: https://issues.apache.org/jira/browse/HBASE-24625 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0 >Reporter: chenglei >Assignee: chenglei >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6 > > > By HBASE-14004, we introduced the {{WALFileLengthProvider}} interface to track the > length of the WAL file currently being written by ourselves; {{WALEntryStream}}, used by > {{ReplicationSourceWALReader}}, may only read WAL bytes up to > {{WALFileLengthProvider.getLogFileSizeIfBeingWritten}} if the WAL file is > currently being written on the same RegionServer.
> {{AsyncFSWAL}} implements {{WALFileLengthProvider}} via > {{AbstractFSWAL.getLogFileSizeIfBeingWritten}}, as follows: > {code:java} > public OptionalLong getLogFileSizeIfBeingWritten(Path path) { > rollWriterLock.lock(); > try { > Path currentPath = getOldPath(); > if (path.equals(currentPath)) { > W writer = this.writer; > return writer != null ? OptionalLong.of(writer.getLength()) : OptionalLong.empty(); > } else { > return OptionalLong.empty(); > } > } finally { > rollWriterLock.unlock(); > } > } > {code} > For {{AsyncFSWAL}}, the above {{AsyncFSWAL.writer}} is an > {{AsyncProtobufLogWriter}}, and {{AsyncProtobufLogWriter.getLength}} is as > follows: > {code:java} > public long getLength() { > return length.get(); > } > {code} > But for {{AsyncProtobufLogWriter}}, any append method may increase the above > {{AsyncProtobufLogWriter.length}}, especially the following > {{AsyncFSWAL.append}} > method, which just appends the {{WALEntry}} to > {{FanOutOneBlockAsyncDFSOutput.buf}}: > {code:java} > public void append(Entry entry) { > int buffered = output.buffered(); > try { > entry.getKey().getBuilder(compressor).setFollowingKvCount(entry.getEdit().size()).build() > .writeDelimitedTo(asyncOutputWrapper); > } catch (IOException e) { > throw new AssertionError("should not happen", e); > } > try { > for (Cell cell : entry.getEdit().getCells()) { > cellEncoder.write(cell); > } > } catch (IOException e) { > throw new AssertionError("should not happen", e); > } > length.addAndGet(output.buffered() - buffered); > } > {code} > That is to say, {{AsyncFSWAL.getLogFileSizeIfBeingWritten}} does not reflect > the file length that was successfully synced to the underlying HDFS, which is not > as expected.
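The mismatch described above — the writer's `length` growing on every append while the data is still only sitting in an in-memory buffer — can be illustrated with a minimal stand-alone sketch. The class and method names here are hypothetical, not HBase's actual writer API:

```java
import java.util.concurrent.atomic.AtomicLong;

// A toy WAL writer that tracks two lengths: bytes appended to the output
// buffer, and bytes actually acknowledged by a sync. Reporting the first
// one as the "file length" is the bug the issue describes.
public class WalLengthSketch {
    private final AtomicLong appendedLength = new AtomicLong(); // grows on append()
    private final AtomicLong syncedLength = new AtomicLong();   // grows only on sync()

    public void append(byte[] entry) {
        // The entry is buffered; nothing is durable on HDFS yet.
        appendedLength.addAndGet(entry.length);
    }

    public void sync() {
        // Pretend everything buffered so far was flushed and acknowledged.
        syncedLength.set(appendedLength.get());
    }

    // Analogous to what getLength() effectively returned: buffered bytes.
    public long buggyLength() {
        return appendedLength.get();
    }

    // What a replication reader actually needs: bytes known to be durable.
    public long syncedLengthBytes() {
        return syncedLength.get();
    }

    public static void main(String[] args) {
        WalLengthSketch wal = new WalLengthSketch();
        wal.append(new byte[100]);
        wal.sync();
        wal.append(new byte[50]); // appended but not yet synced
        System.out.println(wal.buggyLength());       // 150: overstates durable data
        System.out.println(wal.syncedLengthBytes()); // 100: safe bound for readers
    }
}
```

A reader capped at `buggyLength()` could attempt to read 50 bytes that were never synced, which is exactly why the issue argues the length exposed to replication should advance only after a successful sync.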
[jira] [Commented] (HBASE-24635) Split TestMetaWithReplicas
[ https://issues.apache.org/jira/browse/HBASE-24635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151435#comment-17151435 ] Hudson commented on HBASE-24635: Results for branch branch-2.2 [build #906 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/906//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Split TestMetaWithReplicas > -- > > Key: HBASE-24635 > URL: https://issues.apache.org/jira/browse/HBASE-24635 > Project: HBase > Issue Type: Task > Components: test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6 > > > It will stop and then start a mini cluster every time after each test method, > so let's just split them into individual test files.
[jira] [Commented] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151433#comment-17151433 ] Hudson commented on HBASE-24625: Results for branch branch-2 [build #2733 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2733/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2733/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2733/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2733/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2733/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced > file length. 
> > > Key: HBASE-24625 > URL: https://issues.apache.org/jira/browse/HBASE-24625 > Project: HBase > Issue Type: Bug > Components: Replication, wal >Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0 >Reporter: chenglei >Assignee: chenglei >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6 > > > By HBASE-14004, we introduced the {{WALFileLengthProvider}} interface to track the > length of the WAL file currently being written by ourselves; {{WALEntryStream}}, used by > {{ReplicationSourceWALReader}}, may only read WAL bytes up to > {{WALFileLengthProvider.getLogFileSizeIfBeingWritten}} if the WAL file is > currently being written on the same RegionServer. > {{AsyncFSWAL}} implements {{WALFileLengthProvider}} via > {{AbstractFSWAL.getLogFileSizeIfBeingWritten}}, as follows: > {code:java} > public OptionalLong getLogFileSizeIfBeingWritten(Path path) { > rollWriterLock.lock(); > try { > Path currentPath = getOldPath(); > if (path.equals(currentPath)) { > W writer = this.writer; > return writer != null ? OptionalLong.of(writer.getLength()) : OptionalLong.empty(); > } else { > return OptionalLong.empty(); > } > } finally { > rollWriterLock.unlock(); > } > } > {code} > For {{AsyncFSWAL}}, the above {{AsyncFSWAL.writer}} is an > {{AsyncProtobufLogWriter}}, and {{AsyncProtobufLogWriter.getLength}} is as > follows: > {code:java} > public long getLength() { > return length.get(); > } > {code} > But for {{AsyncProtobufLogWriter}}, any append method may increase the above > {{AsyncProtobufLogWriter.length}}, especially the following > {{AsyncFSWAL.append}} > method, which just appends the {{WALEntry}} to > {{FanOutOneBlockAsyncDFSOutput.buf}}: > {code:java} > public void append(Entry entry) { > int buffered = output.buffered(); > try { > entry.getKey(). 
> > getBuilder(compressor).setFollowingKvCount(entry.getEdit().size()).build() > .writeDelimitedTo(asyncOutputWrapper); > } catch (IOException e) { > throw new AssertionError("should not happen", e); > } > > try { > for (Cell cell : entry.getEdit().getCells()) { > cellEncoder.write(cell); > } > } catch (IOException e) { > throw new AssertionError("should not happen", e); > } > length.addAndGet(output.buffered() - buffered); > } > {code} > That is to say, {{AsyncFSWAL.getLogFileSizeIfBeingWritten}} does not reflect > the file length that was successfully synced to the underlying HDFS, which is not > as expected.
[GitHub] [hbase] Apache-HBase commented on pull request #2010: HBASE-24391 Implement meta split
Apache-HBase commented on pull request #2010: URL: https://github.com/apache/hbase/pull/2010#issuecomment-653799657 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-11288.splittable-meta Compile Tests _ | | +0 :ok: | mvndep | 0m 42s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 37s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | compile | 1m 41s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | shadedjars | 5m 34s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 14s | HBASE-11288.splittable-meta passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 37s | the patch passed | | +1 :green_heart: | compile | 1m 40s | the patch passed | | +1 :green_heart: | javac | 1m 40s | the patch passed | | +1 :green_heart: | shadedjars | 5m 40s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 11s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 7s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 0m 22s | hbase-balancer in the patch passed. | | -1 :x: | unit | 158m 54s | hbase-server in the patch failed. 
| | | | 188m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2010 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux db7713e06b66 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-11288.splittable-meta / 404c5ff37b | | Default Java | 1.8.0_232 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/testReport/ | | Max. process+thread count | 3868 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-balancer hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2010: HBASE-24391 Implement meta split
Apache-HBase commented on pull request #2010: URL: https://github.com/apache/hbase/pull/2010#issuecomment-653799155 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-11288.splittable-meta Compile Tests _ | | +0 :ok: | mvndep | 0m 42s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 21s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | compile | 1m 58s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | shadedjars | 5m 53s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 27s | hbase-client in HBASE-11288.splittable-meta failed. | | -0 :warning: | javadoc | 0m 18s | hbase-balancer in HBASE-11288.splittable-meta failed. | | -0 :warning: | javadoc | 0m 45s | hbase-server in HBASE-11288.splittable-meta failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 9s | the patch passed | | +1 :green_heart: | compile | 1m 54s | the patch passed | | +1 :green_heart: | javac | 1m 54s | the patch passed | | +1 :green_heart: | shadedjars | 6m 11s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 26s | hbase-client in the patch failed. | | -0 :warning: | javadoc | 0m 17s | hbase-balancer in the patch failed. | | -0 :warning: | javadoc | 0m 50s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 0m 29s | hbase-balancer in the patch passed. | | -1 :x: | unit | 149m 21s | hbase-server in the patch failed. 
| | | | 182m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2010 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 87dca56a06c9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-11288.splittable-meta / 404c5ff37b | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-balancer.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-balancer.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/testReport/ | | Max. process+thread count | 4031 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-balancer hbase-server U: . 
| | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2010: HBASE-24391 Implement meta split
Apache-HBase commented on pull request #2010: URL: https://github.com/apache/hbase/pull/2010#issuecomment-653785551 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-11288.splittable-meta Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 39s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | checkstyle | 1m 51s | HBASE-11288.splittable-meta passed | | +1 :green_heart: | spotbugs | 3m 24s | HBASE-11288.splittable-meta passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 20s | the patch passed | | -0 :warning: | checkstyle | 1m 11s | hbase-server: The patch generated 3 new + 241 unchanged - 3 fixed = 244 total (was 244) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 16s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 3m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 39s | The patch does not generate ASF License warnings. 
| | | | 38m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2010 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 7ab518d6b52f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-11288.splittable-meta / 404c5ff37b | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-balancer hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2010/3/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a change in pull request #1991: HBASE-24650 Change the return types of the new checkAndMutate methods…
Apache9 commented on a change in pull request #1991: URL: https://github.com/apache/hbase/pull/1991#discussion_r449781947

## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java
## @@ -497,7 +497,7 @@ public void run(MultiResponse resp) {
               "Failed to mutate row: " + Bytes.toStringBinary(mutation.getRow()), ex));
           } else {
             future.complete(respConverter
-              .apply((Result) multiResp.getResults().get(regionName).result.get(0)));
+              .apply((RES) multiResp.getResults().get(regionName).result.get(0)));

Review comment: OK, so the problem here is that the result for a multi operation is an Object? And we will have two types: one is Result, for a normal mutateRow, and the other is CheckAndMutateResult, for checkAndMutate?

## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableOverAsyncTable.java
## @@ -300,13 +300,16 @@ public boolean thenMutate(RowMutations mutation) throws IOException {
   }

   @Override
-  public boolean checkAndMutate(CheckAndMutate checkAndMutate) throws IOException {
+  public CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) throws IOException {
     return FutureUtils.get(table.checkAndMutate(checkAndMutate));
   }

   @Override
-  public boolean[] checkAndMutate(List<CheckAndMutate> checkAndMutates) throws IOException {
-    return Booleans.toArray(FutureUtils.get(table.checkAndMutateAll(checkAndMutates)));
+  public CheckAndMutateResult[] checkAndMutate(List<CheckAndMutate> checkAndMutates)

Review comment: Better to return a List<CheckAndMutateResult>?
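The situation the reviewer describes — a multi response carrying untyped results that are either a Result (for a plain mutateRow) or a CheckAndMutateResult (for a checkAndMutate) — can be sketched with stand-in classes. This is a minimal, hypothetical illustration, not the actual HBase client types:

```java
import java.util.List;

// Stand-in result types (hypothetical simplifications, not the real HBase classes).
class Result { }

class CheckAndMutateResult {
    private final boolean success;
    CheckAndMutateResult(boolean success) { this.success = success; }
    boolean isSuccess() { return success; }
}

class MultiResponseSketch {
    // A multi response carries untyped results: a Result for a plain mutateRow,
    // a CheckAndMutateResult for a checkAndMutate. The caller has to cast to the
    // type it expects -- the (RES) cast in the diff above plays this role.
    static boolean extractCheckAndMutateSuccess(List<Object> results, int index) {
        return ((CheckAndMutateResult) results.get(index)).isSuccess();
    }
}
```

Because the results list is heterogeneous, the cast is unchecked by nature; the type parameter only records what the caller expects at that position.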
[jira] [Resolved] (HBASE-24635) Split TestMetaWithReplicas
[ https://issues.apache.org/jira/browse/HBASE-24635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang resolved HBASE-24635.
    Hadoop Flags: Reviewed
      Resolution: Fixed

Pushed to branch-2.2+. Thanks [~zghao] for reviewing.

> Split TestMetaWithReplicas
>
> Key: HBASE-24635
> URL: https://issues.apache.org/jira/browse/HBASE-24635
> Project: HBase
> Issue Type: Task
> Components: test
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6
>
> It will stop and then start a mini cluster every time after each test method, so let's just split them into individual test files.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24635) Split TestMetaWithReplicas
[ https://issues.apache.org/jira/browse/HBASE-24635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-24635:
    Fix Version/s: 2.2.6
[jira] [Assigned] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang reassigned HBASE-24625:
    Assignee: chenglei

> AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
>
> Key: HBASE-24625
> URL: https://issues.apache.org/jira/browse/HBASE-24625
> Project: HBase
> Issue Type: Bug
> Components: Replication, wal
> Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0
> Reporter: chenglei
> Assignee: chenglei
> Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.2.6
>
> By HBASE-14004, we introduced the {{WALFileLengthProvider}} interface to track the length of the WAL file currently being written ourselves. {{WALEntryStream}}, used by {{ReplicationSourceWALReader}}, may only read WAL bytes up to {{WALFileLengthProvider.getLogFileSizeIfBeingWritten}} when the WAL file is currently being written on the same RegionServer.
>
> {{AsyncFSWAL}} implements {{WALFileLengthProvider}} via {{AbstractFSWAL.getLogFileSizeIfBeingWritten}}, as follows:
>
> {code:java}
> public OptionalLong getLogFileSizeIfBeingWritten(Path path) {
>   rollWriterLock.lock();
>   try {
>     Path currentPath = getOldPath();
>     if (path.equals(currentPath)) {
>       W writer = this.writer;
>       return writer != null ? OptionalLong.of(writer.getLength()) : OptionalLong.empty();
>     } else {
>       return OptionalLong.empty();
>     }
>   } finally {
>     rollWriterLock.unlock();
>   }
> }
> {code}
>
> For {{AsyncFSWAL}}, the {{AsyncFSWAL.writer}} above is an {{AsyncProtobufLogWriter}}, and {{AsyncProtobufLogWriter.getLength}} is as follows:
>
> {code:java}
> public long getLength() {
>   return length.get();
> }
> {code}
>
> But for {{AsyncProtobufLogWriter}}, any append method may increase {{AsyncProtobufLogWriter.length}}; in particular, the following {{AsyncFSWAL.append}} method only appends the {{WALEntry}} to {{FanOutOneBlockAsyncDFSOutput.buf}}:
>
> {code:java}
> public void append(Entry entry) {
>   int buffered = output.buffered();
>   try {
>     entry.getKey().getBuilder(compressor).setFollowingKvCount(entry.getEdit().size()).build()
>       .writeDelimitedTo(asyncOutputWrapper);
>   } catch (IOException e) {
>     throw new AssertionError("should not happen", e);
>   }
>   try {
>     for (Cell cell : entry.getEdit().getCells()) {
>       cellEncoder.write(cell);
>     }
>   } catch (IOException e) {
>     throw new AssertionError("should not happen", e);
>   }
>   length.addAndGet(output.buffered() - buffered);
> }
> {code}
>
> That is to say, {{AsyncFSWAL.getLogFileSizeIfBeingWritten}} does not reflect the file length that has been successfully synced to the underlying HDFS, which is not as expected.
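The gap the report describes — a length counter that advances on every append, before the bytes are durable — can be illustrated with a minimal, self-contained sketch. Class and field names here are hypothetical; the actual fix adds a getSyncedLength() method to WALProvider.WriterBase:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the distinction the fix introduces: `length` grows on
// every append (bytes may still sit in the HDFS client buffer), while
// `syncedLength` only advances once a sync to HDFS completes.
class SketchWalWriter {
    private final AtomicLong length = new AtomicLong();       // appended bytes, durable or not
    private final AtomicLong syncedLength = new AtomicLong(); // bytes confirmed synced to HDFS

    void append(int bytes) {
        length.addAndGet(bytes);          // visible immediately, but not yet durable
    }

    void sync() {
        syncedLength.set(length.get());   // replication may now safely read up to here
    }

    long getLength() { return length.get(); }
    long getSyncedLength() { return syncedLength.get(); }
}
```

In this sketch, reporting getLength() to the replication reader (as the pre-fix code does) would expose appended-but-unsynced bytes; reporting getSyncedLength() bounds the reader to durable data.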
[jira] [Updated] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-24625:
    Fix Version/s: 2.4.0
[jira] [Updated] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-24625:
    Hadoop Flags: Reviewed
      Resolution: Fixed
          Status: Resolved (was: Patch Available)

Pushed to branch-2.2+. Thanks [~comnetwork] for contributing.
[jira] [Comment Edited] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151184#comment-17151184 ]

chenglei edited comment on HBASE-24625 at 7/4/20, 1:56 PM:
[~busbey], yes, all 2.x.y versions are impacted. Release note is added.

was (Author: comnetwork): [~busbey], yes, all 2.x.y versions are impacted, and release note is added.
[jira] [Comment Edited] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151184#comment-17151184 ]

chenglei edited comment on HBASE-24625 at 7/4/20, 1:56 PM:
[~busbey], yes, all 2.x.y versions are impacted, and release note is added.

was (Author: comnetwork): [~busbey], yes, all 2.x.y versions are impacted.
[jira] [Updated] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenglei updated HBASE-24625:
    Release Note:
We add a getSyncedLength method to the WALProvider.WriterBase interface for the WALFileLengthProvider used by replication. Consider the case where we use AsyncFSWAL: we write to 3 DNs concurrently, and by the visibility guarantee of HDFS the data becomes available as soon as it arrives at a DN, since every DN is treated as the last one in the pipeline. This means replication may read uncommitted data, replicate it to the remote cluster, and cause data inconsistency. WriterBase#getLength may return a length that is still in the HDFS client buffer and has not been successfully synced to HDFS, so we use WriterBase#getSyncedLength to return the length successfully synced to HDFS, and the replication thread may only read the WAL file being written up to that length. See HBASE-14004 and this document for more details: https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#

Before this patch, replication may read uncommitted data and replicate it to the slave cluster, causing data inconsistency between the master and slave clusters; without the patch applied, FSHLog can be used instead of AsyncFSWAL to reduce the probability of inconsistency.

    was: (the same release note, with {{monospace}} markup around the identifiers)
[jira] [Updated] (HBASE-24625) AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
[ https://issues.apache.org/jira/browse/HBASE-24625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenglei updated HBASE-24625:
    Release Note: (the release note quoted in the update above)
          Status: Patch Available (was: Open)
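The consumer side of the release note — a replication reader that refuses to read past the durably synced length while a WAL file is still being written — can be sketched as follows. Names are hypothetical; the real code paths are in WALEntryStream and ReplicationSourceWALReader:

```java
import java.util.OptionalLong;

// Hypothetical sketch of the reader side: replication may consume a WAL file
// only up to the durably synced length while the file is still being written,
// and the whole file once it is closed (provider returns empty).
class BoundedWalReader {
    static long readableBytes(long fileSize, OptionalLong syncedLengthIfBeingWritten) {
        return syncedLengthIfBeingWritten.isPresent()
            ? Math.min(fileSize, syncedLengthIfBeingWritten.getAsLong())
            : fileSize;
    }
}
```

With the pre-fix behavior, the bound would come from getLength() and could exceed the synced portion of the file, which is exactly how uncommitted data could leak to the remote cluster.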
[GitHub] [hbase] Apache9 merged pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
Apache9 merged pull request #1970: URL: https://github.com/apache/hbase/pull/1970
[GitHub] [hbase] sanjeetnishad95 commented on pull request #1635: HBASE-23996 Many split related metrics were present RS side but after split is moved to Master, these metrics are lost.
sanjeetnishad95 commented on pull request #1635: URL: https://github.com/apache/hbase/pull/1635#issuecomment-653753860

ping @saintstack. Any review comments on this PR?
[GitHub] [hbase] Apache-HBase commented on pull request #2016: HBASE-24653 Show snapshot owner on Master WebUI
Apache-HBase commented on pull request #2016: URL: https://github.com/apache/hbase/pull/2016#issuecomment-653747409 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 7s | master passed | | +1 :green_heart: | javadoc | 0m 38s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 43s | the patch passed | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 202m 17s | hbase-server in the patch passed. | | | | 214m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2016 | | Optional Tests | javac javadoc unit | | uname | Linux 9d631a760164 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e614b89c33 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/testReport/ | | Max. process+thread count | 2737 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #2016: HBASE-24653 Show snapshot owner on Master WebUI
Apache-HBase commented on pull request #2016: URL: https://github.com/apache/hbase/pull/2016#issuecomment-653746081 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 44s | master passed | | -0 :warning: | javadoc | 0m 42s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 27s | the patch passed | | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 186m 15s | hbase-server in the patch passed. | | | | 198m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2016 | | Optional Tests | javac javadoc unit | | uname | Linux 751c82cbc1ba 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e614b89c33 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/testReport/ | | Max. process+thread count | 3043 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2016/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Assigned] (HBASE-24680) Refactor the checkAndMutate code on the server side
[ https://issues.apache.org/jira/browse/HBASE-24680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki reassigned HBASE-24680: Assignee: Toshihiro Suzuki > Refactor the checkAndMutate code on the server side > --- > > Key: HBASE-24680 > URL: https://issues.apache.org/jira/browse/HBASE-24680 > Project: HBase > Issue Type: Sub-task >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > > Refactor the checkAndMutate code on the server side by using the > CheckAndMutate class (introduced in HBASE-8458) and the CheckAndMutateResult > class (introduced in HBASE-24650). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24680) Refactor the checkAndMutate code on the server side
Toshihiro Suzuki created HBASE-24680: Summary: Refactor the checkAndMutate code on the server side Key: HBASE-24680 URL: https://issues.apache.org/jira/browse/HBASE-24680 Project: HBase Issue Type: Sub-task Reporter: Toshihiro Suzuki Refactor the checkAndMutate code on the server side by using the CheckAndMutate class (introduced in HBASE-8458) and the CheckAndMutateResult class (introduced in HBASE-24650).
[jira] [Updated] (HBASE-24650) Change the return types of the new checkAndMutate methods introduced in HBASE-8458
[ https://issues.apache.org/jira/browse/HBASE-24650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki updated HBASE-24650: - Summary: Change the return types of the new checkAndMutate methods introduced in HBASE-8458 (was: Change the return types of the new CheckAndMutate methods introduced in HBASE-8458) > Change the return types of the new checkAndMutate methods introduced in > HBASE-8458 > -- > > Key: HBASE-24650 > URL: https://issues.apache.org/jira/browse/HBASE-24650 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > To support CheckAndMutate with Increment/Append, the new checkAndMutate > methods introduced in HBASE-8458 need to return the result of the specified > Increment/Append operation in addition to a boolean value that represents whether > the operation succeeded. Currently, the methods return only boolean value(s), > so we need to change their return types. The methods have not been released yet, > so I think there is no problem with changing their return types.
[jira] [Updated] (HBASE-24650) Change the return types of the new checkAndMutate methods introduced in HBASE-8458
[ https://issues.apache.org/jira/browse/HBASE-24650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki updated HBASE-24650: - Description: To support CheckAndMutate with Increment/Append, the new checkAndMutate methods introduced in HBASE-8458 need to return the result of the specified Increment/Append operation in addition to a boolean value that represents whether the operation succeeded. Currently, the methods return only boolean value(s), so we need to change their return types. The methods have not been released yet, so I think there is no problem with changing their return types. was: To support CheckAndMutate with Increment/Append, the new CheckAndMutate methods introduced in HBASE-8458 need to return the result of the specified Increment/Append operation in addition to a boolean value that represents whether the operation succeeded. Currently, the methods return only boolean value(s), so we need to change their return types. The methods have not been released yet, so I think there is no problem with changing their return types. > Change the return types of the new checkAndMutate methods introduced in > HBASE-8458 > -- > > Key: HBASE-24650 > URL: https://issues.apache.org/jira/browse/HBASE-24650 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0 > > > To support CheckAndMutate with Increment/Append, the new checkAndMutate > methods introduced in HBASE-8458 need to return the result of the specified > Increment/Append operation in addition to a boolean value that represents whether > the operation succeeded. Currently, the methods return only boolean value(s), > so we need to change their return types. The methods have not been released yet, > so I think there is no problem with changing their return types.
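The motivation above can be sketched with a minimal, self-contained model in plain Java. These are hypothetical simplified types, not the real org.apache.hadoop.hbase.client classes: the point is only that a boolean cannot carry the value produced by an Increment/Append, so a result-bearing object is needed.

```java
// Minimal sketch of the idea behind HBASE-24650 (hypothetical simplified
// types, not the actual HBase client API).
public class CheckAndMutateSketch {

    // Success flag plus the operation's result, instead of a boolean alone.
    static final class CheckAndMutateResult {
        private final boolean success;
        private final Long result; // e.g. the new counter value of an Increment

        CheckAndMutateResult(boolean success, Long result) {
            this.success = success;
            this.result = result;
        }
        boolean isSuccess() { return success; }
        Long getResult()    { return result; }
    }

    // Toy checkAndIncrement: increment only if the current value equals `expected`.
    static CheckAndMutateResult checkAndIncrement(long[] cell, long expected, long delta) {
        if (cell[0] != expected) {
            return new CheckAndMutateResult(false, null); // check failed, no result
        }
        cell[0] += delta;
        return new CheckAndMutateResult(true, cell[0]);   // success carries the new value
    }

    public static void main(String[] args) {
        long[] cell = {10L};
        CheckAndMutateResult r = checkAndIncrement(cell, 10L, 5L);
        System.out.println(r.isSuccess() + " " + r.getResult()); // true 15
        CheckAndMutateResult miss = checkAndIncrement(cell, 10L, 5L);
        System.out.println(miss.isSuccess());                    // false
    }
}
```

With a boolean-only return type, the caller of the toy checkAndIncrement would have to issue a second read to learn the new counter value, which is exactly the gap the issue describes.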
[GitHub] [hbase] Apache-HBase commented on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
Apache-HBase commented on pull request #1970: URL: https://github.com/apache/hbase/pull/1970#issuecomment-653736081 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 36s | master passed | | +1 :green_heart: | compile | 1m 16s | master passed | | +1 :green_heart: | shadedjars | 5m 36s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 53s | master passed | | -0 :warning: | patch | 6m 51s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 25s | the patch passed | | +1 :green_heart: | compile | 1m 13s | the patch passed | | +1 :green_heart: | javac | 1m 13s | the patch passed | | +1 :green_heart: | shadedjars | 5m 31s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 52s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 31s | hbase-asyncfs in the patch passed. | | +1 :green_heart: | unit | 137m 49s | hbase-server in the patch passed. 
| | | | 165m 3s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1970 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b8cee7d31178 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e614b89c33 | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/testReport/ | | Max. process+thread count | 4405 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Commented] (HBASE-23634) Enable "Split WAL to HFile" by default
[ https://issues.apache.org/jira/browse/HBASE-23634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151226#comment-17151226 ] Anoop Sam John commented on HBASE-23634: No need, right? The WAL splitter will create the HFiles under the recovered edits path. Making these files official HFiles (moving them under the region/cf directory) is the responsibility of the primary region; only the primary region should do this, and it happens when the primary region is opened. Only after that will the replica regions come to know about these new files, and their refresher will add them to its in-memory list of files. > Enable "Split WAL to HFile" by default > -- > > Key: HBASE-23634 > URL: https://issues.apache.org/jira/browse/HBASE-23634 > Project: HBase > Issue Type: Task >Affects Versions: 3.0.0-alpha-1, 2.3.0 >Reporter: Guanghao Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > >
[GitHub] [hbase] Apache-HBase commented on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.
Apache-HBase commented on pull request #1970: URL: https://github.com/apache/hbase/pull/1970#issuecomment-653735490 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 11s | master passed | | +1 :green_heart: | compile | 1m 24s | master passed | | +1 :green_heart: | shadedjars | 5m 43s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 19s | hbase-asyncfs in master failed. | | -0 :warning: | javadoc | 0m 38s | hbase-server in master failed. | | -0 :warning: | patch | 7m 4s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 58s | the patch passed | | +1 :green_heart: | compile | 1m 26s | the patch passed | | +1 :green_heart: | javac | 1m 26s | the patch passed | | +1 :green_heart: | shadedjars | 5m 45s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 16s | hbase-asyncfs in the patch failed. | | -0 :warning: | javadoc | 0m 38s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 25s | hbase-asyncfs in the patch passed. | | +1 :green_heart: | unit | 128m 38s | hbase-server in the patch passed. 
| | | | 157m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1970 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ef12e9adeb43 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e614b89c33 | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-asyncfs.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-asyncfs.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/testReport/ | | Max. process+thread count | 4032 (vs. ulimit of 12500) | | modules | C: hbase-asyncfs hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/8/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2013: HBASE-24671 Add excludefile and designatedfile options to graceful_stop.sh
Apache-HBase commented on pull request #2013: URL: https://github.com/apache/hbase/pull/2013#issuecomment-653734785 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 47s | master passed | | +0 :ok: | refguide | 4m 45s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 30s | the patch passed | | -0 :warning: | shellcheck | 0m 0s | The patch generated 1 new + 37 unchanged - 4 fixed = 38 total (was 41) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +0 :ok: | refguide | 5m 18s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 19m 24s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2013/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2013 | | Optional Tests | dupname asflicense shellcheck shelldocs refguide | | uname | Linux 9d248340c51e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / e614b89c33 | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2013/2/artifact/yetus-general-check/output/branch-site/book.html | | shellcheck | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2013/2/artifact/yetus-general-check/output/diff-patch-shellcheck.txt | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2013/2/artifact/yetus-general-check/output/patch-site/book.html | | Max. process+thread count | 78 (vs. ulimit of 12500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2013/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) shellcheck=0.4.6 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Created] (HBASE-24679) HBase on Cloud Blob FS : Provide config to skip HFile archival while table deletion
Anoop Sam John created HBASE-24679: -- Summary: HBase on Cloud Blob FS : Provide config to skip HFile archival while table deletion Key: HBASE-24679 URL: https://issues.apache.org/jira/browse/HBASE-24679 Project: HBase Issue Type: Improvement Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 3.0.0-alpha-1, 2.4.0 When we delete a table, as part of deleting it from the FS we do the following:
1. Rename the table directory to come under /hbase/.tmp. This is an atomic rename op.
2. Go through each of the HFiles under every region:cf and archive them one by one (rename each file from the .tmp path to /hbase/archive).
3. Delete the table dir under the .tmp dir.
In the case of HDFS this is not a big deal, as every rename op is just a meta op (though the HFile archival is still costly, as there will be many NN calls based on the table's region# and total storefile#). But on a cloud blob based FS impl this is a concerning op: every rename is a copy-blob op, and we do it twice for each HFile in this table! The proposal here is to provide a config option (defaulting to false) to skip this archival step. We could provide another config to even avoid the .tmp rename? The atomicity of the table delete can be achieved by the HM side procedure and proc WAL; in a table delete the 1st step is to delete the table from META anyway.
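If such a flag were added, it would presumably be a boolean in hbase-site.xml along these lines. This is a sketch only: the property name below is purely hypothetical, since the issue does not fix a name.

```xml
<!-- Hypothetical sketch for HBASE-24679; the actual property name is not
     decided in this message. Default false preserves current behavior. -->
<property>
  <name>hbase.table.delete.skip.archive</name>
  <value>true</value>
  <description>If true, HFiles of a deleted table are removed directly
    instead of being renamed (a copy-blob op on cloud blob stores) into
    /hbase/archive.</description>
</property>
```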
[jira] [Commented] (HBASE-23634) Enable "Split WAL to HFile" by default
[ https://issues.apache.org/jira/browse/HBASE-23634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151213#comment-17151213 ] ramkrishna.s.vasudevan commented on HBASE-23634: Right. I meant: when we do the async refresh, are we also ensuring these files (the files in the recovered path) are considered? In the region open case we ensure this while the HStore opens, so similarly we should be doing it in the refresh path also. > Enable "Split WAL to HFile" by default > -- > > Key: HBASE-23634 > URL: https://issues.apache.org/jira/browse/HBASE-23634 > Project: HBase > Issue Type: Task >Affects Versions: 3.0.0-alpha-1, 2.3.0 >Reporter: Guanghao Zhang >Priority: Blocker > Fix For: 3.0.0-alpha-1, 2.4.0 > >
[GitHub] [hbase] Apache-HBase commented on pull request #2013: HBASE-24671 Add excludefile and designatedfile options to graceful_stop.sh
Apache-HBase commented on pull request #2013: URL: https://github.com/apache/hbase/pull/2013#issuecomment-653733314
[jira] [Commented] (HBASE-24678) Add Bulk load param details into its responseTooSlow log
[ https://issues.apache.org/jira/browse/HBASE-24678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151197#comment-17151197 ] Anoop Sam John commented on HBASE-24678: For all RPC requests we have only a common config to set the warn time (default value 10 sec). Should we have different time limits for different types of ops? At least bulk load kind of ops might need comparably more time. Anyway, let me do the logging improvement as a 1st cut. > Add Bulk load param details into its responseTooSlow log > > > Key: HBASE-24678 > URL: https://issues.apache.org/jira/browse/HBASE-24678 > Project: HBase > Issue Type: Improvement >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > > Right now the log comes out like > {code} > (responseTooSlow): > {"call":"BulkLoadHFile(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$BulkLoadHFileRequest)","starttimems":1593820455043,"responsesize":2,"method":"BulkLoadHFile","param":"TODO: > class > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$BulkLoadHFileRequest",..} > {code}
[jira] [Created] (HBASE-24678) Add Bulk load param details into its responseTooSlow log
Anoop Sam John created HBASE-24678: -- Summary: Add Bulk load param details into its responseTooSlow log Key: HBASE-24678 URL: https://issues.apache.org/jira/browse/HBASE-24678 Project: HBase Issue Type: Improvement Reporter: Anoop Sam John Assignee: Anoop Sam John Right now the log comes out like {code} (responseTooSlow): {"call":"BulkLoadHFile(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$BulkLoadHFileRequest)","starttimems":1593820455043,"responsesize":2,"method":"BulkLoadHFile","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$BulkLoadHFileRequest",..} {code}
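As a rough illustration of the improvement being proposed, a short summary of the bulk load request could replace the "TODO:" placeholder in the `param` field. The helper below is entirely hypothetical (plain Java over a simple map, not the actual HBase code path that renders BulkLoadHFileRequest); it only shows the kind of detail the log could carry.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: summarize bulk-load params for the responseTooSlow log.
// The region/family/path arguments model what a BulkLoadHFileRequest carries;
// the real fix would live where HBase renders the request for logging.
public class BulkLoadLogSketch {

    static String summarize(String region, Map<String, List<String>> familyPaths) {
        StringBuilder sb = new StringBuilder("BulkLoadHFile region=").append(region);
        // One "family=... hfiles=N" segment per column family in the request.
        familyPaths.forEach((family, paths) ->
            sb.append(" family=").append(family)
              .append(" hfiles=").append(paths.size()));
        return sb.toString();
    }

    public static void main(String[] args) {
        String param = summarize("t1",
            Map.of("cf1", List.of("/staging/f1", "/staging/f2")));
        System.out.println(param); // BulkLoadHFile region=t1 family=cf1 hfiles=2
    }
}
```

A `param` string like this would make the slow-response entry actionable without changing the surrounding JSON log format.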