[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872722743 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 8s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 17s | master passed | | +1 :green_heart: | compile | 1m 39s | master passed | | +1 :green_heart: | shadedjars | 8m 49s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 2s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 20s | the patch passed | | +1 :green_heart: | compile | 1m 35s | the patch passed | | +1 :green_heart: | javac | 1m 35s | the patch passed | | +1 :green_heart: | shadedjars | 8m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 42s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 134m 22s | hbase-server in the patch failed. 
| | | | 169m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 814b653eef54 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ef639ff083 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/testReport/ | | Max. process+thread count | 3876 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373233#comment-17373233 ] Zheng Wang commented on HBASE-26036: [~Xiaolin Ha]. Ok, get it. > DBB released too early in HRegion.get() and dirty data for some operations > -- > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found there are regionserver JVM crashing problems on > our production clusters, the coredump infos are as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @ 0x7f6a615e8228 [0x7f6a615e79c0+0x868] > J 20563 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1196 bytes) @ > 0x7f6a60711a4c [0x7f6a60711000+0xa4c] > J 19656% C2 > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V > (338 bytes) @ 0x7f6a6039a414 [0x7f6a6039a320+0xf4] > j org.apache.hadoop.hbase.ipc.RpcExecutor$1.run()V+24 > j java.lang.Thread.run()V+11 > v ~StubRoutines::call_stub > {code} > I have made a UT to reproduce this error; it occurs 100% of the time. > After HBASE-25187, the check result of the checkAndMutate will be false, > because it reads wrong/dirty data from the released ByteBuff. -- This message was sent by Atlassian Jira (v8.3.4#803005)
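The use-after-release pattern behind this crash can be sketched without HBase itself. The class below is a hedged, minimal model: `RefCountedBuf`, `buggyRead`, and `safeRead` are invented for illustration and are not HBase APIs. The point it demonstrates is the one from the issue: a pooled, reference-counted buffer is released while a caller still needs to read a Cell backed by it, so the later read sees recycled ("dirty") memory; with real off-heap direct ByteBuffers this can crash the JVM.

```java
import java.nio.ByteBuffer;

public class RefCountedBuf {
  private final ByteBuffer buf;
  private int refCount = 1;

  RefCountedBuf(byte[] data) {
    this.buf = ByteBuffer.wrap(data.clone());
  }

  synchronized byte get(int idx) {
    if (refCount <= 0) {
      // The real DBB gives no such guard: the read silently returns
      // recycled bytes, or crashes if the native memory is gone.
      throw new IllegalStateException("read after release");
    }
    return buf.get(idx);
  }

  synchronized void release() {
    refCount--;
  }

  // Buggy ordering from the report: the buffer backing the Cell is
  // released before the caller (e.g. checkAndMutate's comparison) reads it.
  static byte buggyRead(RefCountedBuf b) {
    b.release();     // buffer returned to the pool too early
    return b.get(0); // use-after-release
  }

  // Fixed ordering: release only after every consumer is done reading.
  static byte safeRead(RefCountedBuf b) {
    byte v = b.get(0);
    b.release();
    return v;
  }
}
```

In the sketch the bad ordering fails loudly; the real bug is worse precisely because the released buffer usually still "works" and just returns wrong data.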
[jira] [Commented] (HBASE-25902) 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is earlier than hbase-2.3.0 first
[ https://issues.apache.org/jira/browse/HBASE-25902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373229#comment-17373229 ] Anoop Sam John commented on HBASE-25902: bq. you must install an hbase2 that is earlier than hbase-2.3.0 first Now that we don't need such an install, mind updating the title and also the Release Notes? We will fail the HMaster first; the next HMaster start will go through. > 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is > earlier than hbase-2.3.0 first > - > > Key: HBASE-25902 > URL: https://issues.apache.org/jira/browse/HBASE-25902 > Project: HBase > Issue Type: Bug > Components: meta, Operability >Affects Versions: 2.3.0, 2.4.0 >Reporter: Michael Stack >Assignee: Viraj Jasani >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: NoSuchColumnFamilyException.png > > > Making note of this issue in case others run into it. At my place of employ, > we tried to upgrade a cluster that was an hbase-1.2.x version to an > hbase-2.3.5, but it failed because meta didn't have the 'table' column family. > Up to 2.3.0, hbase:meta was hardcoded. HBASE-12035 added the 'table' CF for > hbase-2.0.0. HBASE-23782 (2.3.0) undid the hardcoding of the hbase:meta schema; > i.e. it reads the hbase:meta schema from the filesystem. The hbase:meta schema is > only created on initial install. When upgrading over existing data, the > hbase-1 hbase:meta will not be suitable for the hbase-2.3.x context, as it will be > missing column families needed to run (HBASE-23055 made it so hbase:meta could > be altered (2.3.0), but that is probably of no use since the Master won't come up). > It would be nice-to-have if a user could go from hbase1 to hbase-2.3.0 w/o > having to first install an hbase2 that is earlier than 2.3.0, but there would need to be > demand before we would work on it; in the meantime, install an intermediate hbase2 > version before going to hbase-2.3.0+ if coming from hbase-1.x
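The fail-fast behavior discussed above can be sketched generically. This is not HBase code: the method name and the family list below are illustrative assumptions, not the actual hbase:meta schema check. The idea is that a Master starting over an hbase-1 layout should detect the missing hbase:meta column family up front and stop with an actionable message, instead of failing later with an opaque NoSuchColumnFamilyException.

```java
import java.util.Set;

public class MetaSchemaCheck {
  // Hypothetical required-family list; the real set is defined by the
  // hbase:meta table descriptor of the target version.
  private static final String[] REQUIRED = {"info", "table", "rep_barrier"};

  // Fail fast with a message that tells the operator what to do,
  // rather than letting startup proceed and crash later.
  static void verifyMetaFamilies(Set<String> familiesOnDisk) {
    for (String required : REQUIRED) {
      if (!familiesOnDisk.contains(required)) {
        throw new IllegalStateException(
            "hbase:meta is missing column family '" + required
                + "'; see the Release Notes for the upgrade path");
      }
    }
  }
}
```

With a check like this, the first Master start against an old meta fails with a clear message; once the schema is repaired, the next start goes through, matching the behavior described in the comment.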
[GitHub] [hbase] Apache-HBase commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
Apache-HBase commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872709308 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 54s | master passed | | +1 :green_heart: | compile | 3m 59s | master passed | | +1 :green_heart: | checkstyle | 1m 19s | master passed | | +1 :green_heart: | spotbugs | 2m 52s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | the patch passed | | +1 :green_heart: | compile | 4m 0s | the patch passed | | +1 :green_heart: | javac | 4m 0s | the patch passed | | -0 :warning: | checkstyle | 1m 22s | hbase-server: The patch generated 1 new + 33 unchanged - 0 fixed = 34 total (was 33) | | +1 :green_heart: | whitespace | 0m 1s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 23m 5s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 57m 37s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3451 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 05d21cee9d69 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / fab0505257 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3445: HBASE-26050 Remove the reflection used in FSUtils.isInSafeMode
Apache-HBase commented on pull request #3445: URL: https://github.com/apache/hbase/pull/3445#issuecomment-872707000 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 3s | master passed | | +1 :green_heart: | compile | 1m 19s | master passed | | +1 :green_heart: | shadedjars | 11m 3s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 38s | the patch passed | | +1 :green_heart: | compile | 1m 18s | the patch passed | | +1 :green_heart: | javac | 1m 18s | the patch passed | | +1 :green_heart: | shadedjars | 10m 30s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 134m 13s | hbase-server in the patch passed. 
| | | | 172m 8s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3445 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 7c0514bed53f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ef639ff083 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/testReport/ | | Max. process+thread count | 3493 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3445: HBASE-26050 Remove the reflection used in FSUtils.isInSafeMode
Apache-HBase commented on pull request #3445: URL: https://github.com/apache/hbase/pull/3445#issuecomment-872701379 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 30s | master passed | | +1 :green_heart: | compile | 1m 10s | master passed | | +1 :green_heart: | shadedjars | 8m 20s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 11s | the patch passed | | +1 :green_heart: | compile | 1m 14s | the patch passed | | +1 :green_heart: | javac | 1m 14s | the patch passed | | +1 :green_heart: | shadedjars | 8m 18s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 123m 27s | hbase-server in the patch passed. 
| | | | 155m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3445 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 98556a0218dc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ef639ff083 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/testReport/ | | Max. process+thread count | 3476 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ZhaoBQ commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
ZhaoBQ commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872687773 Yes, I am planning to send a discussion email.
[GitHub] [hbase] Apache9 commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
Apache9 commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872686512 I suggest sending a discussion email to the HBase dev and user mailing lists to ask.
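For context, the semantics under discussion in the PR title fit in a few lines. This is a hedged sketch of the *proposed* behavior, not HBase's current implementation: a cell TTL of -1 would be treated as a "never expire" sentinel, the same way a column family with a "forever" TTL never expires cells. The class and method names are invented for illustration.

```java
public class CellTtlCheck {
  // Proposed sentinel from the PR title: cell TTL -1 means never expire.
  static final long FOREVER = -1L;

  // Returns true if a cell written at cellTsMs with the given per-cell TTL
  // should be considered expired at nowMs.
  static boolean isExpired(long nowMs, long cellTsMs, long ttlMs) {
    if (ttlMs == FOREVER) {
      return false; // never expires, mirroring a forever CF TTL
    }
    return nowMs - cellTsMs > ttlMs;
  }
}
```

The open question in the thread is whether changing -1 from its current meaning is an acceptable behavior change, which is why a mailing-list discussion is suggested.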
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872684621 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 1s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 8s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 44s | master passed | | +1 :green_heart: | compile | 3m 59s | master passed | | +1 :green_heart: | checkstyle | 1m 26s | master passed | | +1 :green_heart: | spotbugs | 2m 43s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 55s | the patch passed | | +1 :green_heart: | compile | 3m 56s | the patch passed | | -0 :warning: | javac | 0m 30s | hbase-hadoop-compat generated 1 new + 102 unchanged - 1 fixed = 103 total (was 103) | | -0 :warning: | checkstyle | 1m 14s | hbase-server: The patch generated 27 new + 322 unchanged - 0 fixed = 349 total (was 322) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 19m 59s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 3m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. 
| | | | 54m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 954da4c3ac6a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ef639ff083 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/artifact/yetus-general-check/output/diff-compile-javac-hbase-hadoop-compat.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/18/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] sunhelly closed pull request #3449: HBASE-26036 DBB released too early in HRegion.get() and dirty data fo…
sunhelly closed pull request #3449: URL: https://github.com/apache/hbase/pull/3449
[jira] [Commented] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373175#comment-17373175 ] Xiaolin Ha commented on HBASE-26036: Hi, [~filtertip], thanks for the reminder. This is not the same problem as HBASE-25981; that one is a monitor lifecycle problem. I referred to HBASE-25187 because it used the cached row length and key length for the cell, which makes the program read dirty data; without this cache, the regionserver JVM will crash.
[GitHub] [hbase] Apache9 commented on pull request #3443: HBASE-25516 [JDK17] reflective access Field.class.getDeclaredField("modifiers") not supported
Apache9 commented on pull request #3443: URL: https://github.com/apache/hbase/pull/3443#issuecomment-872681657 So do we need this for branch-2.x? Do we want to support JDK17 for 2.x?
[jira] [Resolved] (HBASE-26051) Remove reflections used to access HDFS EC APIs
[ https://issues.apache.org/jira/browse/HBASE-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26051. --- Hadoop Flags: Reviewed Resolution: Fixed Merged to master. Thanks [~weichiu] for reviewing. > Remove reflections used to access HDFS EC APIs > -- > > Key: HBASE-26051 > URL: https://issues.apache.org/jira/browse/HBASE-26051 > Project: HBase > Issue Type: Sub-task > Components: hadoop3 >Affects Versions: 3.0.0-alpha-1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 3.0.0-alpha-2 > > > HDFS EC APIs have existed since Hadoop 3.0. > We can access them directly in HBase 3.0 without reflection.
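The kind of cleanup HBASE-26051 performs can be illustrated generically. The `DistributedFs` class below is a stand-in invented for this sketch, not the real HDFS `DistributedFileSystem`: once the minimum supported Hadoop version guarantees an API exists, a reflective lookup can be replaced by a plain call, which is simpler, faster, and checked at compile time.

```java
import java.lang.reflect.Method;

public class DirectVsReflective {
  // Stand-in for an API that, on the new minimum Hadoop version, always exists.
  public static class DistributedFs {
    public String enableErasureCodingPolicy(String name) {
      return "enabled:" + name;
    }
  }

  // Old style: reflection, needed only while the method might be absent
  // at compile time (e.g. when also building against Hadoop 2).
  static String viaReflection(DistributedFs fs, String policy) {
    try {
      Method m = fs.getClass().getMethod("enableErasureCodingPolicy", String.class);
      return (String) m.invoke(fs, policy);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException("EC API not available", e);
    }
  }

  // New style: a direct call, possible once the API is guaranteed to exist.
  static String direct(DistributedFs fs, String policy) {
    return fs.enableErasureCodingPolicy(policy);
  }
}
```

Both paths do the same work here, but the direct call removes the string-based method lookup, the unchecked cast, and the wrapped exception handling.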
[GitHub] [hbase] Apache9 merged pull request #3446: HBASE-26051 Remove reflections used to access HDFS EC APIs
Apache9 merged pull request #3446: URL: https://github.com/apache/hbase/pull/3446
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373170#comment-17373170 ] Anoop Sam John commented on HBASE-25596: Thanks [~zhangduo], it's clear now. > Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated > data due to EOFException from WAL > --- > > Key: HBASE-25596 > URL: https://issues.apache.org/jira/browse/HBASE-25596 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Critical > Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.2 > > > There seems to be a major issue with how we handle the EOF exception from > WALEntryStream. > Problem: > When we see EOFException, we try to handle it and remove it from the log > queue, but we never try to ship the existing batch of entries. *This is > permanent data loss in replication.* > > Secondly, we do not stop the reader on encountering the EOFException, and thus > if the EOFException was on the last WAL, we still try to process the WALEntry > stream and ship the empty batch with lastWALPath set to null. This is the > reason for the NPE below, which *crashes* the region server. 
> {code:java} > 2021-02-16 15:33:21,293 ERROR [,60020,1613262147968] > regionserver.ReplicationSource - Unexpected exception in > ReplicationSourceWorkerThread, > currentPath=nulljava.lang.NullPointerExceptionat > org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:193)at > > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:831)at > > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:746)at > > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:650)2021-02-16 > 15:33:21,294 INFO [,60020,1613262147968] regionserver.HRegionServer - > STOPPED: Unexpected exception in ReplicationSourceWorkerThread > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
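The fix described in the issue can be sketched as a small simulation. This is not the real ReplicationSource code: WALs are modeled as lists of entries and a null entry stands in for a truncated record that raises EOFException. The invariant the issue calls for is that on EOF the shipper must flush the batch it has already read *before* removing the WAL from the queue, so no entries are silently dropped.

```java
import java.io.EOFException;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class EofBatchShipper {
  // Entries shipped to the peer cluster in this toy model.
  static final List<String> shipped = new ArrayList<>();

  static void drain(Deque<List<String>> walQueue) {
    List<String> batch = new ArrayList<>();
    while (!walQueue.isEmpty()) {
      try {
        for (String entry : walQueue.peek()) {
          if (entry == null) {
            throw new EOFException(); // truncated WAL mid-read
          }
          batch.add(entry);
        }
      } catch (EOFException eof) {
        // Fall through: the partial batch is still shipped below.
      }
      // Ship whatever was read BEFORE dropping the WAL from the queue;
      // the reported bug discarded this batch, losing the entries forever.
      if (!batch.isEmpty()) {
        shipped.addAll(batch);
        batch.clear();
      }
      walQueue.poll(); // only now remove the WAL
    }
  }
}
```

In the buggy ordering, hitting EOF on the first WAL would drop "e1" and "e2"; shipping before dequeueing preserves them.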
[jira] [Updated] (HBASE-26051) Remove reflections used to access HDFS EC APIs
[ https://issues.apache.org/jira/browse/HBASE-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26051: -- Component/s: hadoop3
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373167#comment-17373167 ] Duo Zhang commented on HBASE-25596: --- The patch for HBASE-25596 actually fixed the bug. The only left problem is that it did not follow some ideas which we introduced when refactoring this part of code for 2.x, which made the code a bit hard to read and understand, and also introduced the problem described in HBASE-25985. So I opened HBASE-25992 to polish the code on 2.x, to make it easier to read and understand, and also fix the problem of HBASE-25985 as a side effect. Thanks. > Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated > data due to EOFException from WAL > --- > > Key: HBASE-25596 > URL: https://issues.apache.org/jira/browse/HBASE-25596 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Critical > Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.2 > > > There seems to be a major issue with how we handle the EOF exception from > WALEntryStream. > Problem: > When we see EOFException, we try to handle it and remove it from the log > queue, but we never try to ship the existing batch of entries. *This is a > permanent data loss in replication.* > > Secondly, we do not stop the reader on encountering the EOFException and thus > if EOFException was on the last WAL, we still try to process the WALEntry > stream and ship the empty batch with lastWALPath set to null. This is the > reason of NPE as below which *crash* the region server. 
> {code:java}
> 2021-02-16 15:33:21,293 ERROR [,60020,1613262147968] regionserver.ReplicationSource - Unexpected exception in ReplicationSourceWorkerThread, currentPath=null
> java.lang.NullPointerException
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:193)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:831)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:746)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:650)
> 2021-02-16 15:33:21,294 INFO [,60020,1613262147968] regionserver.HRegionServer - STOPPED: Unexpected exception in ReplicationSourceWorkerThread
> {code}
> 
> -- This message was sent by Atlassian Jira (v8.3.4#803005)
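The ordering constraint described in this issue can be illustrated with a minimal, hypothetical model (the class and field names below are illustrative, not HBase's actual shipper classes): on an EOFException the already-read batch must be shipped before the truncated WAL is dropped from the queue, and the reader must stop when the queue is empty instead of shipping an empty batch with a null path.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical, simplified model of the shipper logic described above.
public class EofHandlingSketch {
  final Deque<String> logQueue = new ArrayDeque<>();
  final List<String> shipped = new ArrayList<>();

  void onEofException(List<String> pendingBatch) {
    // 1. Ship what was already read; skipping this step loses data permanently.
    if (!pendingBatch.isEmpty()) {
      shipped.addAll(pendingBatch);
      pendingBatch.clear();
    }
    // 2. Only now remove the truncated WAL from the queue.
    logQueue.pollFirst();
  }

  boolean shouldContinueReading() {
    // 3. Guard against shipping with lastWALPath == null (the NPE in the trace).
    return !logQueue.isEmpty();
  }

  public static void main(String[] args) {
    EofHandlingSketch s = new EofHandlingSketch();
    s.logQueue.add("wal-1");
    List<String> batch = new ArrayList<>(List.of("edit-a", "edit-b"));
    s.onEofException(batch);
    if (s.shipped.size() != 2 || s.shouldContinueReading()) {
      throw new AssertionError("unexpected shipper state");
    }
    System.out.println("shipped=" + s.shipped);
  }
}
```

With the buggy ordering (dequeue first, never ship), the two edits above would be silently dropped; the sketch only demonstrates the safe ordering.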
[jira] [Commented] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373168#comment-17373168 ] Anoop Sam John commented on HBASE-26036: Basically, internal calls should go down to the Region level for doing ops, like the reads inside a checkAndXXX op. Even when CPs do ops, everything has to be (and ideally will be) at the Region level, not at the RS level, which will try to serialize the results (for over-the-wire transfer). > DBB released too early in HRegion.get() and dirty data for some operations > -- > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found regionserver JVM crash problems on > our production clusters; the core dump info is as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @ 0x7f6a615e8228 [0x7f6a615e79c0+0x868] > J 20563 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1196 bytes) @ > 0x7f6a60711a4c [0x7f6a60711000+0xa4c] > J 19656% C2 > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V > (338 bytes) @ 0x7f6a6039a414 [0x7f6a6039a320+0xf4] > j org.apache.hadoop.hbase.ipc.RpcExecutor$1.run()V+24 > j java.lang.Thread.run()V+11 > v ~StubRoutines::call_stub > {code} > I have made a UT to reproduce this error; it occurs 100% of the time. > After HBASE-25187, the check result of the checkAndMutate will be false, > because it reads wrong/dirty data from the released ByteBuff. -- This message was sent by Atlassian Jira (v8.3.4#803005)
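The buffer-lifecycle bug behind both symptoms (the pre-HBASE-25187 crash and the post-HBASE-25187 dirty read) can be sketched with a toy reference-counting model. This is not HBase's ByteBuffAllocator API, only an illustration: if the pooled buffer backing a cell is released when HRegion.get() returns, the pool can recycle the memory while checkAndMutate is still comparing against it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of a pooled, reference-counted buffer.
public class RefCountSketch {
  static class PooledBuf {
    final AtomicInteger refCnt = new AtomicInteger(1);
    byte[] data = "expected".getBytes();

    void retain() { refCnt.incrementAndGet(); }

    void release() {
      if (refCnt.decrementAndGet() == 0) {
        data = "recycled".getBytes(); // the pool reuses the memory
      }
    }

    String read() { return new String(data); }
  }

  public static void main(String[] args) {
    // Buggy order: release as soon as the get() path finishes.
    PooledBuf buf = new PooledBuf();
    buf.release();                  // released too early
    System.out.println(buf.read()); // dirty data: the buffer was recycled

    // Safe order: the comparing caller holds its own reference.
    PooledBuf buf2 = new PooledBuf();
    buf2.retain();                  // caller's reference across the comparison
    buf2.release();                 // get() path drops its own reference
    String seen = buf2.read();      // still the expected bytes
    buf2.release();                 // caller done; now the pool may recycle
    if (!seen.equals("expected")) throw new AssertionError();
  }
}
```

The design point is that whoever still reads the cell must hold a reference until serialization (or the compare) completes; releasing at the end of the get() call alone is what produces the wrong/dirty comparison result described above.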
[jira] [Commented] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373165#comment-17373165 ] Xiaolin Ha commented on HBASE-26036: Hi, [~stack], the PR #3449 has reproduced the problem at [https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3449/1/testReport/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26041) Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
[ https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26041. --- Hadoop Flags: Reviewed Resolution: Fixed Pushed to master and branch-2. Thanks [~weichiu] and [~stack]. > Replace PrintThreadInfoHelper with HBase's own > ReflectionUtils.printThreadInfo() > > > Key: HBASE-26041 > URL: https://issues.apache.org/jira/browse/HBASE-26041 > Project: HBase > Issue Type: Sub-task > Components: util >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-2 > > > PrintThreadInfoLazyHolder uses reflection to access Hadoop's > ReflectionUtils.printThreadInfo(). Replace it with HBase's > ReflectionUtils.printThreadInfo(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373161#comment-17373161 ] Anoop Sam John commented on HBASE-25596: That polish sub-task is in, right [~zhangduo]? Do you mean to say this PR is not fixing any issue? When you said "polish" in the sub-task, I thought it was only about making the code better. Or do you mean that PR only fixes the actual root cause? Sorry, I got a bit confused. Can you please elaborate? Thanks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-26026) HBase Write may be stuck forever when using CompactingMemStore
[ https://issues.apache.org/jira/browse/HBASE-26026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chenglei updated HBASE-26026: - Priority: Critical (was: Major) > HBase Write may be stuck forever when using CompactingMemStore > -- > > Key: HBASE-26026 > URL: https://issues.apache.org/jira/browse/HBASE-26026 > Project: HBase > Issue Type: Bug > Components: in-memory-compaction >Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0 >Reporter: chenglei >Assignee: chenglei >Priority: Critical > > Sometimes I observed that HBase writes might get stuck in my HBase cluster, which enables {{CompactingMemStore}}. I have simulated the problem with a unit test in my PR. > The problem is caused by {{CompactingMemStore.checkAndAddToActiveSize}}:
> {code:java}
> 425   private boolean checkAndAddToActiveSize(MutableSegment currActive, Cell cellToAdd,
> 426       MemStoreSizing memstoreSizing) {
> 427     if (shouldFlushInMemory(currActive, cellToAdd, memstoreSizing)) {
> 428       if (currActive.setInMemoryFlushed()) {
> 429         flushInMemory(currActive);
> 430         if (setInMemoryCompactionFlag()) {
> 431           // The thread is dispatched to do in-memory compaction in the background
> ..
> }
> {code}
> At line 427, if {{currActive.getDataSize}} plus the size of {{cellToAdd}} exceeds {{CompactingMemStore.inmemoryFlushSize}}, then {{currActive}} should be flushed, and {{MutableSegment.setInMemoryFlushed()}} is invoked at line 428:
> {code:java}
> public boolean setInMemoryFlushed() {
>   return flushed.compareAndSet(false, true);
> }
> {code}
> After setting {{currActive.flushed}} to true, line 429's {{flushInMemory(currActive)}} invokes {{CompactingMemStore.pushActiveToPipeline}}:
> {code:java}
> protected void pushActiveToPipeline(MutableSegment currActive) {
>   if (!currActive.isEmpty()) {
>     pipeline.pushHead(currActive);
>     resetActive();
>   }
> }
> {code}
> In {{CompactingMemStore.pushActiveToPipeline}}, if {{currActive.cellSet}} is empty, nothing is done. Because of concurrent writes, and because we first add the cell size to {{currActive.getDataSize}} and only then actually add the cell to {{currActive.cellSet}}, it is possible that {{currActive.getDataSize}} cannot accommodate {{cellToAdd}} while {{currActive.cellSet}} is still empty, if there are pending writes that have not yet added their cells to {{currActive.cellSet}}. > So if {{currActive.cellSet}} is empty at this point, no new {{ActiveSegment}} is created and new writes continue to target {{currActive}}; but {{currActive.flushed}} is true, so {{currActive}} can never enter {{flushInMemory(currActive)}} again, and a new {{ActiveSegment}} can never be created. In the end all writes get stuck. > In my opinion, once {{currActive.flushed}} is set to true, it should not continue to be used as the {{ActiveSegment}}; and because of concurrent pending writes, only after {{currActive.updatesLock.writeLock()}} is acquired (i.e. {{currActive.waitForUpdates}} is called) in {{CompactingMemStore.inMemoryCompaction}} can we safely say whether {{currActive}} is empty or not. 
> My fix is to remove the {{if (!currActive.isEmpty())}} check here and leave that check to the background {{InMemoryCompactionRunnable}}, after {{currActive.waitForUpdates}} is called. An alternative fix is to use a synchronization mechanism in {{checkAndAddToActiveSize}} to block all writes, wait for all pending writes to complete (i.e. until {{currActive.waitForUpdates}} is called), and, if {{currActive}} is still empty, set {{currActive.flushed}} back to false. But I am not inclined to put such heavy synchronization in the write path; I think we had better keep the lockless implementation of {{CompactingMemStore.add}} as it is now, and {{currActive.waitForUpdates}} is better left in the background {{InMemoryCompactionRunnable}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
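The stuck-write scenario above can be re-enacted single-threadedly with a toy model (illustrative names, not HBase's actual classes): the data size is bumped before the cell lands in the cell set, so the flush path can see "segment full" while the set is still empty; with the {{isEmpty()}} guard, the segment is neither pushed nor replaced, yet its flushed flag stays true, so no new active segment can ever be created.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical model of the CompactingMemStore push/reset step.
public class StuckWriteSketch {
  static class Segment {
    final AtomicBoolean flushed = new AtomicBoolean(false);
    final List<String> cellSet = new ArrayList<>();
    long dataSize;
    boolean setInMemoryFlushed() { return flushed.compareAndSet(false, true); }
    boolean isEmpty() { return cellSet.isEmpty(); }
  }

  Segment active = new Segment();
  final List<Segment> pipeline = new ArrayList<>();
  final boolean keepEmptyCheck; // true = current guard, false = proposed fix

  StuckWriteSketch(boolean keepEmptyCheck) { this.keepEmptyCheck = keepEmptyCheck; }

  void pushActiveToPipeline(Segment curr) {
    if (keepEmptyCheck && curr.isEmpty()) {
      return; // bug: nothing pushed, active never reset
    }
    pipeline.add(curr);
    active = new Segment(); // resetActive()
  }

  public static void main(String[] args) {
    // A writer has accounted the cell size but not yet added the cell.
    StuckWriteSketch buggy = new StuckWriteSketch(true);
    Segment before = buggy.active;
    before.dataSize = 1_000_000; // over the in-memory flush threshold
    if (before.setInMemoryFlushed()) {
      buggy.pushActiveToPipeline(before);
    }
    // flushed == true but the same segment is still active: writes are stuck.
    System.out.println("buggy stuck: " + (buggy.active == before && before.flushed.get()));

    StuckWriteSketch fixed = new StuckWriteSketch(false);
    Segment b2 = fixed.active;
    b2.dataSize = 1_000_000;
    if (b2.setInMemoryFlushed()) {
      fixed.pushActiveToPipeline(b2);
    }
    System.out.println("fixed gets new segment: " + (fixed.active != b2));
  }
}
```

In the real fix the possibly-empty segment pushed to the pipeline would be discarded by the background compaction after {{waitForUpdates}}; the sketch only shows why unconditionally resetting the active segment unsticks the write path.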
[jira] [Updated] (HBASE-26026) HBase Write may be stuck forever when using CompactingMemStore
[ https://issues.apache.org/jira/browse/HBASE-26026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chenglei updated HBASE-26026: - Affects Version/s: 3.0.0-alpha-1 > HBase Write may be stuck forever when using CompactingMemStore > -- > > Key: HBASE-26026 > URL: https://issues.apache.org/jira/browse/HBASE-26026 > Project: HBase > Issue Type: Bug > Components: in-memory-compaction >Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.4.0 >Reporter: chenglei >Assignee: chenglei >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ZhaoBQ commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
ZhaoBQ commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872673609 Thanks @Apache9 for the review. Yes, it's a behavior change. However, setting it to -1 currently makes the cell expire immediately, which doesn't make any sense. I can't think of a scenario where that behavior would be used. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
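The semantics under discussion can be sketched as follows. This is a hypothetical illustration, not HBase's actual TTL code: today a cell-level TTL of -1, treated as a plain duration, is already exceeded by any age and so expires immediately, whereas a column-family TTL of -1 means "forever"; the proposed change makes the cell-level -1 mean "never expire" as well.

```java
// Hypothetical model of the proposed cell TTL check.
public class CellTtlSketch {
  // Sentinel matching the CF-level "never expire" convention.
  static final long FOREVER = -1L;

  static boolean isExpired(long ttlMillis, long ageMillis) {
    if (ttlMillis == FOREVER) {
      return false; // proposed: -1 means never expire, same as CF TTL
    }
    return ageMillis > ttlMillis; // ordinary TTL comparison
  }

  public static void main(String[] args) {
    System.out.println(isExpired(-1L, 10_000L));    // never expires
    System.out.println(isExpired(5_000L, 10_000L)); // past its TTL
  }
}
```

Without the sentinel branch, `ageMillis > -1` is true for any non-negative age, which is the "immediately expires" behavior the comment calls out as useless.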
[jira] [Commented] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373158#comment-17373158 ] Michael Stack commented on HBASE-26036: --- Sweet. Thanks [~Xiaolin Ha] for the explanation. I tried [https://github.com/apache/hbase/pull/3449]. Is it supposed to fail? It doesn't for me on linux/mac hbase-2.3. I hacked the [#3436|https://github.com/apache/hbase/pull/3436] PR down to only the test and the support for an alternate BYTEBUFF_ALLOCATOR_CLASS, and it fails. Nice. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3445: HBASE-26050 Remove the reflection used in FSUtils.isInSafeMode
Apache-HBase commented on pull request #3445: URL: https://github.com/apache/hbase/pull/3445#issuecomment-872669848 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 6s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 21s | master passed | | +1 :green_heart: | compile | 3m 22s | master passed | | +1 :green_heart: | checkstyle | 1m 13s | master passed | | +1 :green_heart: | spotbugs | 2m 15s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 5s | the patch passed | | +1 :green_heart: | compile | 3m 23s | the patch passed | | +1 :green_heart: | javac | 3m 23s | the patch passed | | +1 :green_heart: | checkstyle | 1m 11s | hbase-server: The patch generated 0 new + 53 unchanged - 3 fixed = 53 total (was 56) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 20m 8s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 28s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. 
| | | | 52m 3s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3445 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 43ad77121ebd 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ef639ff083 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/3/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872666760 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 34s | master passed | | +1 :green_heart: | compile | 1m 19s | master passed | | +1 :green_heart: | shadedjars | 8m 12s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 55s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 37s | the patch passed | | +1 :green_heart: | compile | 1m 20s | the patch passed | | +1 :green_heart: | javac | 1m 20s | the patch passed | | +1 :green_heart: | shadedjars | 8m 17s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 1s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 37s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 142m 13s | hbase-server in the patch failed. 
| | | | 174m 28s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 4b02adb6f460 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/testReport/ | | Max. process+thread count | 3146 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872663942 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 9s | master passed | | +1 :green_heart: | compile | 1m 33s | master passed | | +1 :green_heart: | shadedjars | 8m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 13s | the patch passed | | +1 :green_heart: | compile | 1m 30s | the patch passed | | +1 :green_heart: | javac | 1m 30s | the patch passed | | +1 :green_heart: | shadedjars | 8m 11s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 7s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 38s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 132m 10s | hbase-server in the patch failed. 
| | | | 166m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 167b23e883be 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/testReport/ | | Max. process+thread count | 3882 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373145#comment-17373145 ] Zheng Wang edited comment on HBASE-26036 at 7/2/21, 2:01 AM: - HBASE-25187 seems not related to this issue, should be HBASE-25981? [~Xiaolin Ha] was (Author: filtertip): HBASE-25187 seems not related to this issue, should be HBASE-25981? > DBB released too early in HRegion.get() and dirty data for some operations > -- > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found there are regionserver JVM crashing problems on > our production clusters, the coredump infos are as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @ 0x7f6a615e8228 [0x7f6a615e79c0+0x868] > J 20563 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1196 bytes) @ > 0x7f6a60711a4c [0x7f6a60711000+0xa4c] > J 19656% C2 > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V > (338 bytes) @ 0x7f6a6039a414 [0x7f6a6039a320+0xf4] > j org.apache.hadoop.hbase.ipc.RpcExecutor$1.run()V+24 > j java.lang.Thread.run()V+11 > v ~StubRoutines::call_stub > {code} > I have made a UT to reproduce this error; it occurs 100% of the time. > After HBASE-25187, the check result of the checkAndMutate will be false, > because it reads wrong/dirty data from the released ByteBuff. -- This message was sent by Atlassian Jira (v8.3.4#803005)
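The use-after-release pattern behind the crash above can be sketched without any HBase classes. Everything below is illustrative (RefCountedBuf is a stand-in, not HBase's actual ByteBuff/ByteBuffAllocator API): a reference-counted direct buffer is released back to a pool while a raw reference taken from it is still being read.

```java
import java.nio.ByteBuffer;

public class UseAfterRelease {
    // Illustrative stand-in for HBase's pooled, reference-counted ByteBuff.
    static final class RefCountedBuf {
        private final ByteBuffer buf;
        private int refCnt = 1;

        RefCountedBuf(int capacity) { buf = ByteBuffer.allocateDirect(capacity); }

        ByteBuffer get() {
            if (refCnt <= 0) throw new IllegalStateException("buffer already released");
            return buf;
        }

        // "Returning" the buffer to the pool: any Cell still holding the raw
        // ByteBuffer now reads whatever the pool writes next (dirty data).
        void release() { refCnt--; }
    }

    public static void main(String[] args) {
        RefCountedBuf rb = new RefCountedBuf(8);
        rb.get().putLong(0, 42L);      // the "timestamp" a Cell points at
        ByteBuffer leaked = rb.get();  // raw reference escapes the ref-count
        rb.release();                  // the rpc layer releases too early
        // Reading through the leaked reference is exactly the use-after-release
        // in the coredump above: with a real pool the JVM may segfault
        // (ByteBufferKeyValue.getTimestamp) or silently return dirty data.
        System.out.println(leaked.getLong(0));
    }
}
```

In this toy version the read still "works" because nothing reuses the memory; in the real allocator the buffer is immediately handed to another request, which is why the symptom is either a crash or a wrong checkAndMutate result.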
[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures
[ https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373144#comment-17373144 ] Wei-Chiu Chuang commented on HBASE-26047: - In addition, the following tests failed too: TestThreadLocalPoolMap TestHeapSize TestSecureExportSnapshot TestMobSecureExportSnapshot TestVerifyReplicationCrossDiffHdfs > [JDK17] Track JDK17 unit test failures > -- > > Key: HBASE-26047 > URL: https://issues.apache.org/jira/browse/HBASE-26047 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Priority: Major > > As of now, there are still two failed unit tests after exporting JDK internal > modules and the modifier access hack. > {noformat} > [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 > s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize > [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes Time elapsed: > 0.041 s <<< FAILURE! > java.lang.AssertionError: expected:<160> but was:<152> > at > org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335) > [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes Time > elapsed: 0.01 s <<< FAILURE! > java.lang.AssertionError: expected:<72> but was:<64> > at > org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134) > [INFO] Running org.apache.hadoop.hbase.io.Tes > [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 > s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain > [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy Time > elapsed: 0.537 s <<< ERROR! > java.lang.NullPointerException: Cannot enter synchronized block because > "this.closeLock" is null > at > org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119) > {noformat} > It appears that JDK17 makes the heap size estimate different than before. Not > sure why. 
> TestBufferChain.testWithSpy failure might be because of yet another > unexported module. -- This message was sent by Atlassian Jira (v8.3.4#803005)
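For context on why a heap-size estimate can legitimately move from 160 to 152 bytes across JDK versions, here is the kind of alignment arithmetic such tests encode. The constants are assumptions about a 64-bit JVM with compressed oops, not values read from HBase's ClassSize; newer JDKs pack fields into the header gap differently, which shifts the rounded total by one 8-byte slot.

```java
public class AlignDemo {
    static final int OBJECT_HEADER = 12;  // mark word + compressed class pointer
    static final int REFERENCE = 4;       // compressed oop

    // Object sizes are rounded up to the JVM's 8-byte alignment boundary.
    static long align(long size) {
        return (size + 7) & ~7L;
    }

    public static void main(String[] args) {
        // e.g. an object with a header, two references, and one long field:
        // 12 + 2*4 + 8 = 28 raw bytes, aligned up to 32.
        System.out.println(align(OBJECT_HEADER + 2 * REFERENCE + 8)); // 32
    }
}
```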
[jira] [Updated] (HBASE-26041) Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
[ https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26041: -- Component/s: util > Replace PrintThreadInfoHelper with HBase's own > ReflectionUtils.printThreadInfo() > > > Key: HBASE-26041 > URL: https://issues.apache.org/jira/browse/HBASE-26041 > Project: HBase > Issue Type: Sub-task > Components: util >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-2 > > > PrintThreadInfoLazyHolder uses reflection to access Hadoop's > ReflectionUtils.printThreadInfo(). Replace it with HBase's > ReflectionUtils.printThreadInfo(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
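A thread-dump helper needs nothing beyond the standard ThreadMXBean API, which is presumably why the reflective call into Hadoop's ReflectionUtils can be dropped. A hedged sketch (the method shape is an assumption; HBase's actual ReflectionUtils.printThreadInfo signature may differ):

```java
import java.io.PrintWriter;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDump {
    // Dump every live thread's state and stack to the given writer.
    static void printThreadInfo(PrintWriter out, String title) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        out.println("Thread dump: " + title);
        // dumpAllThreads(lockedMonitors=false, lockedSynchronizers=false)
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            out.println("\"" + info.getThreadName() + "\" state=" + info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                out.println("    at " + frame);
            }
        }
        out.flush();
    }

    public static void main(String[] args) {
        printThreadInfo(new PrintWriter(System.out), "demo");
    }
}
```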
[GitHub] [hbase] Apache9 merged pull request #3442: HBASE-26041 Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
Apache9 merged pull request #3442: URL: https://github.com/apache/hbase/pull/3442 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-26060) Put up 3.0.0-alpha-1RC0
Duo Zhang created HBASE-26060: - Summary: Put up 3.0.0-alpha-1RC0 Key: HBASE-26060 URL: https://issues.apache.org/jira/browse/HBASE-26060 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26027) The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
[ https://issues.apache.org/jira/browse/HBASE-26027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373134#comment-17373134 ] Hudson commented on HBASE-26027: Results for branch branch-2.4 [build #155 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone > caused by ArrayStoreException > - > > Key: HBASE-26027 > URL: https://issues.apache.org/jira/browse/HBASE-26027 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 2.2.7, 2.3.5, 2.4.4 >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 2.5.0, 2.3.6, 2.4.5 > > > The batch api of HTable contains a param named results to store result or > exception, its type is Object[]. 
> If the user passes an array of another type, e.g. > org.apache.hadoop.hbase.client.Result, and we need to put an exception > into it for some reason, an ArrayStoreException will occur in > AsyncRequestFutureImpl.updateResult; AsyncRequestFutureImpl.decActionCounter > is then skipped, and in AsyncRequestFutureImpl.waitUntilDone we get stuck > checking actionsInProgress again and again, forever. > It is better to add a cutoff calculated from operationTimeout, instead of > depending only on the value of actionsInProgress. > BTW, this issue is only for 2.x; in 3.x the implementation has been refactored. > How to reproduce: > 1: add a sleep in RSRpcServices.multi to mock a slow response > {code:java} > try { > Thread.sleep(2000); > } catch (InterruptedException e) { > e.printStackTrace(); > } > {code} > 2: set timeouts in the config > {code:java} > conf.set("hbase.rpc.timeout","2000"); > conf.set("hbase.client.operation.timeout","6000"); > {code} > 3: call the batch api > {code:java} > Table table = HbaseUtil.getTable("test"); > byte[] cf = Bytes.toBytes("f"); > byte[] c = Bytes.toBytes("c1"); > List gets = new ArrayList<>(); > for (int i = 0; i < 10; i++) { > byte[] rk = Bytes.toBytes("rk-" + i); > Get get = new Get(rk); > get.addColumn(cf, c); > gets.add(get); > } > Result[] results = new Result[gets.size()]; > table.batch(gets, results); > {code} > The log will look like below: > {code:java} > [ERROR] [2021/06/22 23:23:00,676] hconnection-0x6b927fb-shared-pool3-t1 - > id=1 error for test processing localhost,16020,1624343786295 > java.lang.ArrayStoreException: org.apache.hadoop.hbase.DoNotRetryIOException > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.updateResult(AsyncRequestFutureImpl.java:1242) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.trySetResultSimple(AsyncRequestFutureImpl.java:1087) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.setError(AsyncRequestFutureImpl.java:1021) > at > 
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.manageError(AsyncRequestFutureImpl.java:683) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.receiveGlobalFailure(AsyncRequestFutureImpl.java:716) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.access$1500(AsyncRequestFutureImpl.java:69) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncRequestFutureImpl.java:219) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) > at java.util.concurrent.FutureTask.run(FutureTask.java) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at >
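The root cause is Java's array covariance: an Object[] parameter happily accepts a Result[] at runtime, and only the store fails. A minimal self-contained sketch (the nested classes are stand-ins for the HBase types named above, not the real ones):

```java
public class ArrayStoreDemo {
    static class Result {}                          // stand-in for client.Result
    static class DoNotRetryIOException extends Exception {}

    // Mirrors the shape of AsyncRequestFutureImpl.updateResult: results is
    // declared Object[], so storing an exception into it compiles fine.
    static void updateResult(Object[] results, int index, Object res) {
        results[index] = res;  // throws ArrayStoreException at runtime when
                               // the array is really a Result[] and res is not
    }

    public static void main(String[] args) {
        Object[] results = new Result[1];           // caller handed in Result[]
        try {
            updateResult(results, 0, new DoNotRetryIOException());
        } catch (ArrayStoreException e) {
            // Pre-fix, this unwound past decActionCounter, so waitUntilDone
            // kept polling actionsInProgress forever.
            System.out.println("store failed; count-down was skipped");
        }
    }
}
```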
[jira] [Commented] (HBASE-26035) Redundant null check in the compareTo function
[ https://issues.apache.org/jira/browse/HBASE-26035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373135#comment-17373135 ] Hudson commented on HBASE-26035: Results for branch branch-2.4 [build #155 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Redundant null check in the compareTo function > --- > > Key: HBASE-26035 > URL: https://issues.apache.org/jira/browse/HBASE-26035 > Project: HBase > Issue Type: Bug > Components: metrics, Performance >Reporter: Almog Tavor >Assignee: Almog Tavor >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > Remove a redundant null check in the compareTo function > {code:java} > if (!(source instanceof MetricsRegionSourceImpl)) { > return -1; > } > MetricsRegionSourceImpl impl = (MetricsRegionSourceImpl) source; > if (impl == null) { > return -1; > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
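The reason the second check is redundant: `null instanceof X` always evaluates to false, so the instanceof branch above already handles null input, and the cast that follows can never produce a null reference. A compilable sketch (the interface and field below are simplified stand-ins for the real MetricsRegionSource types):

```java
public class CompareToDemo {
    interface MetricsRegionSource extends Comparable<MetricsRegionSource> {}

    static class MetricsRegionSourceImpl implements MetricsRegionSource {
        private final int id;
        MetricsRegionSourceImpl(int id) { this.id = id; }

        @Override
        public int compareTo(MetricsRegionSource source) {
            // `null instanceof X` is false, so this single branch already
            // covers null input; a later `impl == null` check is dead code.
            if (!(source instanceof MetricsRegionSourceImpl)) {
                return -1;
            }
            MetricsRegionSourceImpl impl = (MetricsRegionSourceImpl) source;
            return Integer.compare(id, impl.id);
        }
    }

    public static void main(String[] args) {
        MetricsRegionSourceImpl a = new MetricsRegionSourceImpl(1);
        System.out.println(a.compareTo(null));  // -1, with no explicit null check
    }
}
```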
[jira] [Commented] (HBASE-22923) hbase:meta is assigned to localhost when we downgrade the hbase version
[ https://issues.apache.org/jira/browse/HBASE-22923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373133#comment-17373133 ] Hudson commented on HBASE-22923: Results for branch branch-2.4 [build #155 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > hbase:meta is assigned to localhost when we downgrade the hbase version > --- > > Key: HBASE-22923 > URL: https://issues.apache.org/jira/browse/HBASE-22923 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.8 >Reporter: wenbang >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 1.7.1, 2.4.5 > > > When we downgrade the hbase version(rsgroup enable), we found that the > hbase:meta table could not be assigned. 
> {code:java} > master.AssignmentManager: Failed assignment of hbase:meta,,1.1588230740 to > localhost,1,1, trying to assign elsewhere instead; try=1 of 10 > java.io.IOException: Call to localhost/127.0.0.1:1 failed on local exception: > org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the > failed servers list: localhost/127.0.0.1:1 > {code} > hbase group list: > HBASE_META group (hbase:meta and other system tables) > default group > 1. Downgrade all servers in HBASE_META first > 2. Higher version servers are in default > 3. hbase:meta is assigned to localhost > For system tables, we assign them to a server with the highest version > (AssignmentManager#getExcludedServersForSystemTable), but this did not > consider the rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25902) 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is earlier than hbase-2.3.0 first
[ https://issues.apache.org/jira/browse/HBASE-25902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373132#comment-17373132 ] Hudson commented on HBASE-25902: Results for branch branch-2.4 [build #155 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is > earlier than hbase-2.3.0 first > - > > Key: HBASE-25902 > URL: https://issues.apache.org/jira/browse/HBASE-25902 > Project: HBase > Issue Type: Bug > Components: meta, Operability >Affects Versions: 2.3.0, 2.4.0 >Reporter: Michael Stack >Assignee: Viraj Jasani >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: NoSuchColumnFamilyException.png > > > Making note of this issue in case others run into it. 
At my place of employ, > we tried to upgrade a cluster that was an hbase-1.2.x version to an > hbase-2.3.5 but it failed because meta didn't have the 'table' column family. > Up to 2.3.0, hbase:meta was hardcoded. HBASE-12035 added the 'table' CF for > hbase-2.0.0. HBASE-23782 (2.3.0) undid hardcoding of the hbase:meta schema; > i.e. the hbase:meta schema is now read from the filesystem. The hbase:meta schema is > only created on initial install. If upgrading over existing data, the > hbase-1 hbase:meta will not be suitable for the hbase-2.3.x context, as it will be > missing column families needed to run (HBASE-23055 made it so hbase:meta could > be altered (2.3.0), but that is probably of no use since the Master won't come up). > It would be a nice-to-have if a user could go from hbase1 to hbase-2.3.0 w/o > having to first install an hbase2 that is earlier than 2.3.0, but there needs to be > demand before we would work on it; meantime, install an intermediate hbase2 > version before going to hbase-2.3.0+ if coming from hbase-1.x. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26028) The view as json page shows exception when using TinyLfuBlockCache
[ https://issues.apache.org/jira/browse/HBASE-26028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373131#comment-17373131 ] Hudson commented on HBASE-26028: Results for branch branch-2.4 [build #155 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/155/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The view as json page shows exception when using TinyLfuBlockCache > -- > > Key: HBASE-26028 > URL: https://issues.apache.org/jira/browse/HBASE-26028 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: HBASE-26028-afterpatch.jpg, HBASE-26028-beforepatch.jpg > > > Some variable in TinyLfuBlockCache should be marked as transient. -- This message was sent by Atlassian Jira (v8.3.4#803005)
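The fix pattern, sketched with plain Java serialization as an analogue (the field names are illustrative, not TinyLfuBlockCache's actual fields): marking a non-serializable helper field transient lets the bean serialize cleanly instead of throwing.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class CacheBean implements Serializable {
    private long size = 42;                   // plain state: serialized normally
    private transient Thread evictionThread;  // Thread is not Serializable:
                                              // transient makes it be skipped

    public static void main(String[] args) throws Exception {
        CacheBean b = new CacheBean();
        b.evictionThread = new Thread();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        // Without `transient` on evictionThread, this line would throw
        // java.io.NotSerializableException: java.lang.Thread.
        new ObjectOutputStream(bos).writeObject(b);
        System.out.println("serialized " + bos.size() + " bytes");
    }
}
```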
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872628166 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 40s | master passed | | +1 :green_heart: | compile | 3m 41s | master passed | | +1 :green_heart: | checkstyle | 1m 23s | master passed | | +1 :green_heart: | spotbugs | 2m 31s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 39s | the patch passed | | +1 :green_heart: | compile | 3m 34s | the patch passed | | -0 :warning: | javac | 0m 30s | hbase-hadoop-compat generated 1 new + 102 unchanged - 1 fixed = 103 total (was 103) | | -0 :warning: | checkstyle | 1m 12s | hbase-server: The patch generated 28 new + 322 unchanged - 0 fixed = 350 total (was 322) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 1s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 55s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. 
| | | | 50m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux e1205c2c6b55 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-general-check/output/diff-compile-javac-hbase-hadoop-compat.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 95 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/17/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25520) Exclude CHANGES and RELEASENOTES files from source control
[ https://issues.apache.org/jira/browse/HBASE-25520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373108#comment-17373108 ] Duo Zhang commented on HBASE-25520: --- After some investigation, I think we could just use downloads.apache.org and archive.apache.org to host the CHANGES.md and RELEASENOTES.md? We will upload CHANGES.md and RELEASENOTES.md to dist.apache.org, so for the current release we could see them on downloads.apache.org. https://downloads.apache.org/hbase/2.4.4/CHANGES.md And we will link to the previous CHANGES.md in the above CHANGES.md, so then we could use archive.apache.org. https://archive.apache.org/dist/hbase/2.3.0/CHANGES.md A possible problem is that, once we want to fix a CHANGES.md for old releases, I do not know if there is a way to change the content on archive.apache.org... Thanks. > Exclude CHANGES and RELEASENOTES files from source control > -- > > Key: HBASE-25520 > URL: https://issues.apache.org/jira/browse/HBASE-25520 > Project: HBase > Issue Type: Task > Components: build, community >Reporter: Nick Dimiduk >Priority: Major > > Over on the thread [Project management (JIRA fix version tracking) is in > crisis > |https://lists.apache.org/thread.html/r8db075fd974de32a90174be2db106ef710cf38cfde9ea9071fda4152%40%3Cdev.hbase.apache.org%3E], > one of the suggested improvements to our process is to remove the CHANGES > and RELEASENOTES files from version control. To accomplish this, we'll need to > * delete existing files from existing branches > * modify the release automation to not commit these files > * find someplace to deliver these files on the web -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26059) Set version as 3.0.0-alpha-1 in master in prep for first RC of 3.0.0-alpha-1
[ https://issues.apache.org/jira/browse/HBASE-26059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26059. --- Hadoop Flags: Reviewed Resolution: Fixed Merged to master. Thanks [~pankajkumar] for reviewing. > Set version as 3.0.0-alpha-1 in master in prep for first RC of 3.0.0-alpha-1 > > > Key: HBASE-26059 > URL: https://issues.apache.org/jira/browse/HBASE-26059 > Project: HBase > Issue Type: Sub-task > Components: build, pom >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26052) Release 3.0.0-alpha-1
[ https://issues.apache.org/jira/browse/HBASE-26052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-26052: - Assignee: Duo Zhang > Release 3.0.0-alpha-1 > - > > Key: HBASE-26052 > URL: https://issues.apache.org/jira/browse/HBASE-26052 > Project: HBase > Issue Type: Umbrella >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 merged pull request #3453: HBASE-26059 Set version as 3.0.0-alpha-1 in master in prep for first …
Apache9 merged pull request #3453: URL: https://github.com/apache/hbase/pull/3453 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26042) WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[ https://issues.apache.org/jira/browse/HBASE-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373103#comment-17373103 ] Michael Stack commented on HBASE-26042: --- Some background notes: On this cluster, the hang is unusual. What we see more often is 'clean' RS aborts like the one below: {code:java} 2021-06-26 12:18:11,725 ERROR [regionserver/XYZ:16020.logRoller] regionserver.HRegionServer: * ABORTING region server XYZ,16020,1622749552385: IOE in log roller * org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer {code} Looking in the DN around this time, it's this: {code:java} 2021-06-26 12:18:18,210 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_588272444_1 at /A.B.C.D:28343 [Receiving block BP-1200783253-A.B.C.D1581116871410:blk_3413027507_2339403003]] datanode.DataNode: A.B.C.D:9866:DataXceiver error processing WRITE_BLOCK operation src: /A.B.C.D:28343 dst: /A.B.C.D:9866 java.lang.NullPointerException at sun.nio.ch.EPollArrayWrapper.isEventsHighKilled(EPollArrayWrapper.java:174) {code} (In this particular case the DN restarted, i.e. 'Connection reset by peer', but it doesn't always cause a DN restart.) Poking around, it seems the NPE in isEventsHighKilled means the DN doesn't have enough fds (and/or a JDK bug). It has a limit of 16k; it runs about 2k fds when idle. Will try upping the fd count. The DN NPE (or DN abort) looks to cause the RS to abort if it comes up in the RS around a log roll. Elsewhere, perhaps, it causes the hang – hard to tell for sure. Here is another example of an abort w/ a corresponding DN NPE on the isEventsHighKilled complaint.
{code:java} 2021-06-25 19:21:12,921 INFO [DataStreamer for file /hbase/data/default/xxx/3ab32c85a5c46f466dc21ecbcff53f6f/.tmp/f/81484a889b0e4c11bfcc71736a023d29] hdfs.DataStreamer: Exception in createBlockOutputStream blk_3406122337_2332497742 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:539) at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1762) at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716) 2021-06-25 19:21:12,922 WARN [DataStreamer for file /hbase/data/default/xxx/3ab32c85a5c46f466dc21ecbcff53f6f/.tmp/f/81484a889b0e4c11bfcc71736a023d29] hdfs.DataStreamer: Abandoning BP-1200783253-A.B.C.240-1581116871410:blk_3406122337_2332497742 2021-06-25 19:21:12,924 WARN [DataStreamer for file /hbase/data/default/xxx/3ab32c85a5c46f466dc21ecbcff53f6f/.tmp/f/81484a889b0e4c11bfcc71736a023d29] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[17.58.122.105:9866,DS-1015be7f-b986-4853-b56f-280ac2e8db4a,DISK] 2021-06-25 19:21:12,924 INFO [DataStreamer for file /hbase/data/default/xxx/3ab32c85a5c46f466dc21ecbcff53f6f/.tmp/f/81484a889b0e4c11bfcc71736a023d29] hdfs.DataStreamer: Removing node DatanodeInfoWithStorage[17.58.122.105:9866,DS-1015be7f-b986-4853-b56f-280ac2e8db4a,DISK] from the excluded nodes list 2021-06-25 19:21:13,493 INFO [regionserver/ps1586:16020.leaseChecker] regionserver.RSRpcServices: Scanner -2032258505885651566 lease expired on region xxx,\x9D\xDC\x00\x06Nl\xC5\xDEH\xCB\xF2\xE9\xC9J\x05-,1615374707984.0857141d2bb68aa1acb5f543f2bb78bd. 
2021-06-25 19:21:14,189 ERROR [regionserver/ps1586:16020.logRoller] regionserver.HRegionServer: * ABORTING region server ps1586.a.b.c.d,16020,1622767149688: IOE in log roller * java.io.IOException: Connection to 17.58.122.105/17.58.122.105:9866 closed at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput$AckHandler.lambda$channelInactive$2(FanOutOneBlockAsyncDFSOutput.java:286) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.failed(FanOutOneBlockAsyncDFSOutput.java:233) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.access$300(FanOutOneBlockAsyncDFSOutput.java:98) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput$AckHandler.channelInactive(FanOutOneBlockAsyncDFSOutput.java:285) {code} > WAL lockup on 'sync failed' > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > > > Key: HBASE-26042 > URL: https://issues.apache.org/jira/browse/HBASE-26042 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.5 >Reporter: Michael Stack >Priority: Major > Attachments: js1, js2 > > > Making note of issue seen in production cluster. >
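The fd-pressure theory above can be checked directly on a running process. As a rough illustration (not HBase or HDFS code, and Linux-only since it reads /proc), a JVM can count its own open descriptors like this:

```java
import java.io.File;

public class FdUsage {
    public static void main(String[] args) {
        // On Linux, /proc/self/fd holds one entry per open file descriptor
        // of the current process. A DataNode creeping toward its ulimit can
        // fail in odd places (e.g. the epoll NPE noted above) before it ever
        // reports a clean "Too many open files".
        String[] fds = new File("/proc/self/fd").list();
        System.out.println("open fds: " + (fds == null ? "unknown (no /proc?)" : fds.length));
    }
}
```

Comparing such a count against the process's limit (`ulimit -n`, or `/proc/<pid>/limits` for a running DataNode) shows how much headroom remains before descriptor exhaustion.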
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872610913 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 39s | master passed | | +1 :green_heart: | compile | 1m 21s | master passed | | +1 :green_heart: | shadedjars | 8m 11s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 54s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 36s | the patch passed | | +1 :green_heart: | compile | 1m 21s | the patch passed | | +1 :green_heart: | javac | 1m 21s | the patch passed | | +1 :green_heart: | shadedjars | 8m 8s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 36s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 141m 46s | hbase-server in the patch failed. 
| | | | 174m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 79728bf34d38 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/testReport/ | | Max. process+thread count | 3087 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872608026 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 12s | master passed | | +1 :green_heart: | compile | 1m 32s | master passed | | +1 :green_heart: | shadedjars | 8m 15s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 15s | the patch passed | | +1 :green_heart: | compile | 1m 33s | the patch passed | | +1 :green_heart: | javac | 1m 33s | the patch passed | | +1 :green_heart: | shadedjars | 8m 10s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 38s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 131m 2s | hbase-server in the patch failed. 
| | | | 165m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0a80df2b60e0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/testReport/ | | Max. process+thread count | 3859 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3453: HBASE-26059 Set version as 3.0.0-alpha-1 in master in prep for first …
Apache-HBase commented on pull request #3453: URL: https://github.com/apache/hbase/pull/3453#issuecomment-872587225 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 18s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 15s | master passed | | +1 :green_heart: | compile | 2m 49s | master passed | | +1 :green_heart: | shadedjars | 8m 55s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 11m 58s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 1s | master passed | | +0 :ok: | mvndep | 4m 19s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 2s | the patch passed | | +1 :green_heart: | compile | 2m 49s | the patch passed | | +1 :green_heart: | javac | 2m 49s | the patch passed | | +1 :green_heart: | shadedjars | 8m 58s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 11m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 385m 26s | root in the patch passed. 
| | | | 454m 26s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3453 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9f92efd78bda 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 84f9900c99 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/testReport/ | | Max. process+thread count | 3908 (vs. ulimit of 3) | | modules | C: hbase-checkstyle hbase-annotations hbase-build-configuration hbase-logging hbase-protocol-shaded hbase-common hbase-metrics-api hbase-metrics hbase-hadoop-compat hbase-client hbase-zookeeper hbase-replication hbase-balancer hbase-resource-bundle hbase-http hbase-asyncfs hbase-procedure hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-shell hbase-endpoint hbase-backup hbase-it hbase-rest hbase-examples hbase-shaded hbase-shaded/hbase-shaded-client hbase-shaded/hbase-shaded-client-byo-hadoop hbase-shaded/hbase-shaded-mapreduce hbase-external-blockcache hbase-hbtop hbase-assembly hbase-shaded/hbase-shaded-testing-util hbase-shaded/hbase-shaded-testing-util-tester hbase-shaded/hbase-shaded-check-invariants hbase-shaded/hbase-shaded-with-hadoop-check-invariants hbase-archetypes hbase-archetypes/hbase-client-project hbase-archetypes/hbase-shaded-client-project hbase-archetypes/hbase-archetype-builder . U: . 
| | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872561490 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 46s | master passed | | +1 :green_heart: | compile | 3m 44s | master passed | | +1 :green_heart: | checkstyle | 1m 24s | master passed | | +1 :green_heart: | spotbugs | 2m 31s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 42s | the patch passed | | +1 :green_heart: | compile | 3m 37s | the patch passed | | -0 :warning: | javac | 0m 29s | hbase-hadoop-compat generated 1 new + 102 unchanged - 1 fixed = 103 total (was 103) | | -0 :warning: | checkstyle | 1m 12s | hbase-server: The patch generated 28 new + 322 unchanged - 0 fixed = 350 total (was 322) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 43s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. 
| | | | 50m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 9a05da28ea97 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-general-check/output/diff-compile-javac-hbase-hadoop-compat.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/16/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872532814 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 42s | master passed | | +1 :green_heart: | compile | 1m 21s | master passed | | +1 :green_heart: | shadedjars | 8m 13s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 55s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 41s | the patch passed | | +1 :green_heart: | compile | 1m 21s | the patch passed | | +1 :green_heart: | javac | 1m 21s | the patch passed | | +1 :green_heart: | shadedjars | 8m 10s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 35s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 141m 23s | hbase-server in the patch failed. 
| | | | 174m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 950607b6b643 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/testReport/ | | Max. process+thread count | 3140 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872527797 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 13s | master passed | | +1 :green_heart: | compile | 1m 33s | master passed | | +1 :green_heart: | shadedjars | 8m 17s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 1s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 17s | the patch passed | | +1 :green_heart: | compile | 1m 32s | the patch passed | | +1 :green_heart: | javac | 1m 32s | the patch passed | | +1 :green_heart: | shadedjars | 8m 10s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 0s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 39s | hbase-hadoop-compat in the patch passed. | | -1 :x: | unit | 131m 14s | hbase-server in the patch failed. 
| | | | 165m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 43e7f846624e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/testReport/ | | Max. process+thread count | 3853 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-26042) WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[ https://issues.apache.org/jira/browse/HBASE-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373028#comment-17373028 ] Michael Stack commented on HBASE-26042: --- [~bharathv] thanks for taking a look. I looked at that stack. I couldn't find in the thread dump what was going to unblock the above park. The last thing I was looking at here was in AsyncFSWAL#syncFailed... I wonder if this is a correct reset on failure: {code:java} highestUnsyncedTxid = highestSyncedTxid.get(); {code} I should probably just work on trying to repro it in the small. My attempt above used TestAsyncFSWAL, but it uses mocks. > WAL lockup on 'sync failed' > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > > > Key: HBASE-26042 > URL: https://issues.apache.org/jira/browse/HBASE-26042 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.5 >Reporter: Michael Stack >Priority: Major > Attachments: js1, js2 > > > Making note of issue seen in production cluster. > Node had been struggling under load for a few days with slow syncs up to 10 > seconds, a few STUCK MVCCs from which it recovered and some java pauses up to > three seconds in length. > Then the below happened: > {code:java} > 2021-06-27 13:41:27,604 WARN [AsyncFSWAL-0-hdfs://:8020/hbase] > wal.AsyncFSWAL: sync > failedorg.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer {code} > ... and the WAL turned dead in the water. Scanners start expiring. RPC prints > text versions of requests complaining requestsTooSlow. Then we start to see > these: > {code:java} > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=552128301, WAL system stuck? {code} > What's supposed to happen when the other side goes away like this is that we will > roll the WAL – go set up a new one. 
You can see it happening if you run > {code:java} > mvn test > -Dtest=org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL#testBrokenWriter > {code} > I tried hacking the test to repro the above hang by throwing the same exception > in the above test (on Linux, because epoll is needed to repro), but it all just worked. > Thread dumps of the hung-up WAL subsystem are a little odd. The log roller is > stuck, with no timeout, trying to write the WAL header: > > {code:java} > Thread 9464: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, > line=175 (Compiled frame) > - java.util.concurrent.CompletableFuture$Signaller.block() @bci=19, > line=1707 (Compiled frame) > - > java.util.concurrent.ForkJoinPool.managedBlock(java.util.concurrent.ForkJoinPool$ManagedBlocker) > @bci=119, line=3323 (Compiled frame) > - java.util.concurrent.CompletableFuture.waitingGet(boolean) @bci=115, > line=1742 (Compiled frame) > - java.util.concurrent.CompletableFuture.get() @bci=11, line=1908 (Compiled > frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(java.util.function.Consumer) > @bci=16, line=189 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(byte[], > org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALHeader) > @bci=9, line=202 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(org.apache.hadoop.fs.FileSystem, > org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration, boolean, > long) @bci=107, line=170 (Compiled frame) > - > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(org.apache.hadoop.conf.Configuration, > org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, boolean, long, > org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup, java.lang.Class) > @bci=61, line=113 
(Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) > @bci=22, line=651 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) > @bci=2, line=128 (Compiled frame) > - org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(boolean) > @bci=101, line=797 (Compiled frame) > - org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(long) > @bci=18, line=263 (Compiled frame) > - org.apache.hadoop.hbase.wal.AbstractWALRoller.run() @bci=198, line=179 > (Compiled frame) {code} > > Other threads are BLOCKED trying
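The stuck frame in the dump above is a `CompletableFuture.get()` with no timeout, so once the write's ack is lost nothing ever unblocks the roller. A minimal sketch (not the HBase code itself) of why a bounded wait behaves differently:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitDemo {
    public static void main(String[] args) throws Exception {
        // Models a write whose ack never arrives because the remote side
        // went away: the future is simply never completed.
        CompletableFuture<Long> ack = new CompletableFuture<>();

        // ack.get() here would park the calling thread forever -- the shape
        // of the stuck log-roller stack above. A timed get() surfaces the
        // lost ack as an exception the caller can turn into a roll or abort.
        try {
            ack.get(100, TimeUnit.MILLISECONDS);
            System.out.println("acked");
        } catch (TimeoutException e) {
            System.out.println("no ack within timeout; fail the writer instead of hanging");
        }
    }
}
```

The trade-off is choosing a timeout long enough to tolerate slow-but-healthy syncs (the cluster above saw syncs of up to 10 seconds) while still bounding how long a dead connection can wedge the roll.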
[GitHub] [hbase] Apache-HBase commented on pull request #3453: HBASE-26059 Set version as 3.0.0-alpha-1 in master in prep for first …
Apache-HBase commented on pull request #3453: URL: https://github.com/apache/hbase/pull/3453#issuecomment-872480319 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 22s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 36s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 40s | master passed | | +1 :green_heart: | compile | 3m 6s | master passed | | +1 :green_heart: | shadedjars | 8m 24s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 14m 28s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 13s | master passed | | +0 :ok: | mvndep | 4m 35s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 18s | the patch passed | | +1 :green_heart: | compile | 3m 4s | the patch passed | | +1 :green_heart: | javac | 3m 4s | the patch passed | | +1 :green_heart: | shadedjars | 8m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 14m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 174m 36s | root in the patch passed. 
| | | | 251m 7s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3453 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ede512184d6e 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 84f9900c99 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/testReport/ | | Max. process+thread count | 6808 (vs. ulimit of 3) | | modules | C: hbase-checkstyle hbase-annotations hbase-build-configuration hbase-logging hbase-protocol-shaded hbase-common hbase-metrics-api hbase-metrics hbase-hadoop-compat hbase-client hbase-zookeeper hbase-replication hbase-balancer hbase-resource-bundle hbase-http hbase-asyncfs hbase-procedure hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-shell hbase-endpoint hbase-backup hbase-it hbase-rest hbase-examples hbase-shaded hbase-shaded/hbase-shaded-client hbase-shaded/hbase-shaded-client-byo-hadoop hbase-shaded/hbase-shaded-mapreduce hbase-external-blockcache hbase-hbtop hbase-assembly hbase-shaded/hbase-shaded-testing-util hbase-shaded/hbase-shaded-testing-util-tester hbase-shaded/hbase-shaded-check-invariants hbase-shaded/hbase-shaded-with-hadoop-check-invariants hbase-archetypes hbase-archetypes/hbase-client-project hbase-archetypes/hbase-shaded-client-project hbase-archetypes/hbase-archetype-builder . U: . 
| | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3360: HBASE-25975 Row Commit Sequencer
Apache-HBase commented on pull request #3360: URL: https://github.com/apache/hbase/pull/3360#issuecomment-872462785 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 9s | master passed | | +1 :green_heart: | compile | 3m 48s | master passed | | +1 :green_heart: | checkstyle | 1m 29s | master passed | | +1 :green_heart: | spotbugs | 2m 44s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 53s | the patch passed | | +1 :green_heart: | compile | 3m 52s | the patch passed | | -0 :warning: | javac | 0m 33s | hbase-hadoop-compat generated 1 new + 102 unchanged - 1 fixed = 103 total (was 103) | | -0 :warning: | checkstyle | 1m 17s | hbase-server: The patch generated 29 new + 322 unchanged - 0 fixed = 351 total (was 322) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 40s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 3m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. 
| | | | 53m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3360 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux b41a89fabd31 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1c28633fba | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-general-check/output/diff-compile-javac-hbase-hadoop-compat.txt | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-hadoop-compat hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3360/15/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26028) The view as json page shows exception when using TinyLfuBlockCache
[ https://issues.apache.org/jira/browse/HBASE-26028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372987#comment-17372987 ] Hudson commented on HBASE-26028: Results for branch branch-2 [build #290 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/290/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/290/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/290/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/290/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/290/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} -- Something went wrong with this stage, [check relevant console output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/290//console]. > The view as json page shows exception when using TinyLfuBlockCache > -- > > Key: HBASE-26028 > URL: https://issues.apache.org/jira/browse/HBASE-26028 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: HBASE-26028-afterpatch.jpg, HBASE-26028-beforepatch.jpg > > > Some variable in TinyLfuBlockCache should be marked as transient. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26028) The view as json page shows exception when using TinyLfuBlockCache
[ https://issues.apache.org/jira/browse/HBASE-26028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372966#comment-17372966 ] Hudson commented on HBASE-26028: Results for branch branch-2.3 [build #248 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/248/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/248/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/248/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/248/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/248/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > The view as json page shows exception when using TinyLfuBlockCache > -- > > Key: HBASE-26028 > URL: https://issues.apache.org/jira/browse/HBASE-26028 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: HBASE-26028-afterpatch.jpg, HBASE-26028-beforepatch.jpg > > > Some variable in TinyLfuBlockCache should be marked as transient. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] rahulLiving commented on pull request #3450: HBASE-25651 NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
rahulLiving commented on pull request #3450: URL: https://github.com/apache/hbase/pull/3450#issuecomment-872407198 @ndimiduk Resolved the test failure. Can you please approve and merge? Thanks.
[jira] [Commented] (HBASE-26042) WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[ https://issues.apache.org/jira/browse/HBASE-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372926#comment-17372926 ] Bharath Vissapragada commented on HBASE-26042: -- Thanks for the jstacks, I think consume is not scheduled further because the sync is broken and that is not clearing up the ring buffer. To me the most suspicious stack is the following hung flush thread. {noformat} Thread 9464: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=175 (Compiled frame) - java.util.concurrent.CompletableFuture$Signaller.block() @bci=19, line=1707 (Compiled frame) - java.util.concurrent.ForkJoinPool.managedBlock(java.util.concurrent.ForkJoinPool$ManagedBlocker) @bci=119, line=3323 (Compiled frame) - java.util.concurrent.CompletableFuture.waitingGet(boolean) @bci=115, line=1742 (Compiled frame) - java.util.concurrent.CompletableFuture.get() @bci=11, line=1908 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(java.util.function.Consumer) @bci=16, line=189 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(byte[], org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALHeader) @bci=9, line=202 (Compiled frame) {noformat} Given the state of this thread, I think the new writer instance (from the roll) is also broken for some reason and some of the callbacks (one of which is the future from the above thread) are not cleaned up correctly. I think there is some racy code in FanOutOneBlockAsyncDFSOutput especially around when a new writer is marked 'BROKEN' and a flush is called resulting in some waitingAckQueue members not being cleaned up correctly. Just a theory at this point, but probably easy to poke around with a heap dump or some related logging around these code paths. 
> WAL lockup on 'sync failed' > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > > > Key: HBASE-26042 > URL: https://issues.apache.org/jira/browse/HBASE-26042 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.5 >Reporter: Michael Stack >Priority: Major > Attachments: js1, js2 > > > Making note of issue seen in production cluster. > Node had been struggling under load for a few days with slow syncs up to 10 > seconds, a few STUCK MVCCs from which it recovered and some java pauses up to > three seconds in length. > Then the below happened: > {code:java} > 2021-06-27 13:41:27,604 WARN [AsyncFSWAL-0-hdfs://:8020/hbase] > wal.AsyncFSWAL: sync > failedorg.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer {code} > ... and WAL turned dead in the water. Scanners start expiring. RPC prints > text versions of requests complaining requestsTooSlow. Then we start to see > these: > {code:java} > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=552128301, WAL system stuck? {code} > Whats supposed to happen when other side goes away like this is that we will > roll the WAL – go set up a new one. You can see it happening if you run > {code:java} > mvn test > -Dtest=org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL#testBrokenWriter > {code} > I tried hacking the test to repro the above hang by throwing same exception > in above test (on linux because need epoll to repro) but all just worked. > Thread dumps of the hungup WAL subsystem are a little odd. 
The log roller is > stuck w/o timeout trying to write a long on the WAL header: > > {code:java} > Thread 9464: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, > line=175 (Compiled frame) > - java.util.concurrent.CompletableFuture$Signaller.block() @bci=19, > line=1707 (Compiled frame) > - > java.util.concurrent.ForkJoinPool.managedBlock(java.util.concurrent.ForkJoinPool$ManagedBlocker) > @bci=119, line=3323 (Compiled frame) > - java.util.concurrent.CompletableFuture.waitingGet(boolean) @bci=115, > line=1742 (Compiled frame) > - java.util.concurrent.CompletableFuture.get() @bci=11, line=1908 (Compiled > frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(java.util.function.Consumer) > @bci=16, line=189 (Compiled frame) > - >
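The hung thread in the stack above is parked inside a `CompletableFuture.get()` with no timeout, so if the write callback is never completed the roller blocks forever. A minimal standalone illustration (not HBase code; the future name is hypothetical) of the difference between an unbounded and a bounded wait:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class UnboundedWaitDemo {
    public static void main(String[] args) throws Exception {
        // Stands in for the future returned by the header write; it is
        // never completed, mimicking a dropped callback on a broken writer.
        CompletableFuture<Long> writeFuture = new CompletableFuture<>();

        // writeFuture.get() here would park indefinitely, exactly like
        // Thread 9464 in the jstack. A bounded wait surfaces the problem:
        try {
            writeFuture.get(100, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("sync not acked in time");
        }
    }
}
```

A timeout alone does not fix the underlying race, but it turns a silent lockup into an actionable error.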
[GitHub] [hbase] nyl3532016 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r662427040 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java ## @@ -19,41 +18,483 @@ package org.apache.hadoop.hbase.compactionserver; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.OptionalLong; +import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.RejectedExecutionHandler; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.ChoreService; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.client.TableDescriptor; +import org.apache.hadoop.hbase.fs.HFileSystem; +import org.apache.hadoop.hbase.monitoring.MonitoredTask; +import org.apache.hadoop.hbase.monitoring.TaskMonitor; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; +import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.regionserver.HStoreFile; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker; +import org.apache.hadoop.hbase.regionserver.throttle.PressureAwareCompactionThroughputController; +import 
org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.CommonFSUtils; +import org.apache.hadoop.hbase.util.FSTableDescriptors; +import org.apache.hadoop.hbase.util.FutureUtils; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.util.StealJobQueue; import org.apache.yetus.audience.InterfaceAudience; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; + +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest.Builder; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionResponse; @InterfaceAudience.Private -public class CompactionThreadManager { +public class CompactionThreadManager implements ThroughputControllerService { private static Logger LOG = LoggerFactory.getLogger(CompactionThreadManager.class); + // Configuration key for the large compaction threads. + private final static String LARGE_COMPACTION_THREADS = + "hbase.compaction.server.thread.compaction.large"; + private final static int LARGE_COMPACTION_THREADS_DEFAULT = 10; + // Configuration key for the small compaction threads. 
+ private final static String SMALL_COMPACTION_THREADS = + "hbase.compaction.server.thread.compaction.small"; + private final static int SMALL_COMPACTION_THREADS_DEFAULT = 50; private final Configuration conf; private final ConcurrentMap rsAdmins = new ConcurrentHashMap<>(); private final HCompactionServer server; + private HFileSystem fs; + private Path rootDir; + private FSTableDescriptors tableDescriptors; + // compaction pools + private volatile ThreadPoolExecutor longCompactions; + private volatile ThreadPoolExecutor shortCompactions; + private ConcurrentHashMap runningCompactionTasks = + new ConcurrentHashMap<>(); + private PressureAwareCompactionThroughputController throughputController; + private CompactionServerStorage storage = new CompactionServerStorage(); - public CompactionThreadManager(final Configuration conf, HCompactionServer server) { + CompactionThreadManager(final Configuration conf, HCompactionServer server) { this.conf = conf; this.server = server; +try { + this.fs = new HFileSystem(this.conf, true); + this.rootDir = CommonFSUtils.getRootDir(this.conf); + this.tableDescriptors = new FSTableDescriptors(conf); + // start compaction resources + this.throughputController = new PressureAwareCompactionThroughputController(); + this.throughputController.setConf(conf); + this.throughputController.setup(this); + startCompactionPool(); +} catch (Throwable t) { +
[GitHub] [hbase] nyl3532016 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r662421646 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java ## @@ -19,41 +18,441 @@ package org.apache.hadoop.hbase.compactionserver; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.OptionalLong; +import java.util.Set; import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.function.BiConsumer; +import java.util.stream.Collectors; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.ChoreService; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.client.TableDescriptor; +import org.apache.hadoop.hbase.fs.HFileSystem; +import org.apache.hadoop.hbase.monitoring.MonitoredTask; +import org.apache.hadoop.hbase.monitoring.TaskMonitor; +import org.apache.hadoop.hbase.regionserver.CompactThreadControl; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; +import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.regionserver.HStoreFile; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker; +import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService; +import org.apache.hadoop.hbase.util.Bytes; 
+import org.apache.hadoop.hbase.util.CommonFSUtils; +import org.apache.hadoop.hbase.util.FSTableDescriptors; +import org.apache.hadoop.hbase.util.FutureUtils; +import org.apache.hadoop.hbase.util.Pair; + import org.apache.yetus.audience.InterfaceAudience; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest.Builder; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionResponse; + +/** + * CompactionThreadManager reuse {@link HStore#selectCompaction}, {@link HStore#throttleCompaction}, + * {@link CompactionContext#compact}, {@link CompactThreadControl}, which are core logic of + * compaction. + */ @InterfaceAudience.Private -public class CompactionThreadManager { +public class CompactionThreadManager implements ThroughputControllerService { private static Logger LOG = LoggerFactory.getLogger(CompactionThreadManager.class); + // Configuration key for the large compaction threads. + private final static String LARGE_COMPACTION_THREADS = + "hbase.compaction.server.thread.compaction.large"; + private final static int LARGE_COMPACTION_THREADS_DEFAULT = 10; + // Configuration key for the small compaction threads. 
+ private final static String SMALL_COMPACTION_THREADS = + "hbase.compaction.server.thread.compaction.small"; + private final static int SMALL_COMPACTION_THREADS_DEFAULT = 50; private final Configuration conf; - private final ConcurrentMap rsAdmins = - new ConcurrentHashMap<>(); private final HCompactionServer server; + private HFileSystem fs; + private Path rootDir; + private FSTableDescriptors tableDescriptors; + private CompactThreadControl compactThreadControl; + private ConcurrentHashMap runningCompactionTasks = + new ConcurrentHashMap<>(); + private static CompactionServerStorage storage = new CompactionServerStorage(); - public CompactionThreadManager(final Configuration conf, HCompactionServer server) { + CompactionThreadManager(final Configuration conf, HCompactionServer server) { this.conf = conf; this.server = server; +try { + this.fs = new HFileSystem(this.conf, true); + this.rootDir = CommonFSUtils.getRootDir(this.conf); + this.tableDescriptors = new FSTableDescriptors(conf); + int largeThreads = + Math.max(1, conf.getInt(LARGE_COMPACTION_THREADS, LARGE_COMPACTION_THREADS_DEFAULT)); + int smallThreads = conf.getInt(SMALL_COMPACTION_THREADS, SMALL_COMPACTION_THREADS_DEFAULT); + compactThreadControl = new CompactThreadControl(this, largeThreads, smallThreads, + COMPACTION_TASK_COMPARATOR, REJECTION); +} catch (Throwable t) { + LOG.error("Failed construction
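The pool sizing in the diff above reads two configuration keys and clamps the large-pool size to at least one thread. A self-contained sketch of that logic (the keys and defaults are copied from the patch; using a plain `Map` in place of Hadoop's `Configuration` and the method names are illustrative):

```java
import java.util.Map;

public class CompactionPoolConfig {
    static final String LARGE_COMPACTION_THREADS =
        "hbase.compaction.server.thread.compaction.large";
    static final int LARGE_COMPACTION_THREADS_DEFAULT = 10;
    static final String SMALL_COMPACTION_THREADS =
        "hbase.compaction.server.thread.compaction.small";
    static final int SMALL_COMPACTION_THREADS_DEFAULT = 50;

    // Math.max(1, ...) mirrors the patch: at least one large-compaction
    // thread even if the key is misconfigured to 0 or a negative value.
    static int largeThreads(Map<String, String> conf) {
        String v = conf.get(LARGE_COMPACTION_THREADS);
        int n = (v == null) ? LARGE_COMPACTION_THREADS_DEFAULT : Integer.parseInt(v);
        return Math.max(1, n);
    }

    // The small pool takes the configured value as-is in the patch.
    static int smallThreads(Map<String, String> conf) {
        String v = conf.get(SMALL_COMPACTION_THREADS);
        return (v == null) ? SMALL_COMPACTION_THREADS_DEFAULT : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        System.out.println("large=" + largeThreads(Map.of())
            + " small=" + smallThreads(Map.of()));
    }
}
```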
[GitHub] [hbase] nyl3532016 commented on pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 commented on pull request #3425: URL: https://github.com/apache/hbase/pull/3425#issuecomment-872368228 > Now I have another question. Seems we maintain the compacting files in compaction server. Keep compacting files is useful if you want to run multiple compactions on the same store. But how do we make sure that we always send the compaction request of a store to the same compaction server? If we can not guarantee, then keeping compacting files in compaction server is useless... We keep the region-to-CS mapping in the Master (CompactionOffloadManager#selectCompactionServer). For now we simply use the hash code of the region start key modulo the compaction server count (_hashcode(region.startkey) % compactionServerList.size()_) as the index to map a region to a compaction server, and it works well; later we may need a dedicated assignment module for compaction offload. The mapping is not strictly required, though. If a region's compaction server changes from CS1 to CS2, a selection on CS2 may pick files already selected on CS1, so one of the two compaction jobs must fail when it reports to the RS (the RS checks whether the selected files still exist). We need to ensure that the cache (the compacting files in CompactionServerStorage) is cleared once a compaction job fails or crashes. Keeping the compacting files in the compaction server therefore simplifies handling a compaction server crash.
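The start-key hashing described in the comment above can be sketched in a few lines. This is a hypothetical standalone version, not the actual CompactionOffloadManager#selectCompactionServer signature; the method and variable names are illustrative:

```java
import java.util.Arrays;
import java.util.List;

public class CompactionServerMapping {
    // Maps a region (by its start key) to one server in the current list:
    // hashcode(region.startkey) % compactionServerList.size().
    static String selectCompactionServer(byte[] regionStartKey, List<String> servers) {
        // Math.floorMod guards against a negative hash code, which the
        // plain % operator would turn into a negative index.
        int index = Math.floorMod(Arrays.hashCode(regionStartKey), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        List<String> servers = List.of("cs1", "cs2", "cs3");
        byte[] startKey = "row-0001".getBytes();
        // Stable while the server list is unchanged; a membership change
        // can remap regions, which is why duplicate selection must be
        // detected when the compaction job reports back to the RS.
        System.out.println(selectCompactionServer(startKey, servers));
    }
}
```

Note the trade-off the commenter raises: this mapping is only best-effort, so correctness still relies on the RS rejecting reports for files that were already compacted elsewhere.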
[GitHub] [hbase] Apache-HBase commented on pull request #3453: HBASE-26059 Set version as 3.0.0-alpha-1 in master in prep for first …
Apache-HBase commented on pull request #3453: URL: https://github.com/apache/hbase/pull/3453#issuecomment-872367700 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 32s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 50s | master passed | | +1 :green_heart: | compile | 8m 15s | master passed | | +1 :green_heart: | checkstyle | 1m 59s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 38s | master passed | | +0 :ok: | mvndep | 4m 2s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 42s | the patch passed | | +1 :green_heart: | compile | 8m 21s | the patch passed | | +1 :green_heart: | javac | 8m 21s | the patch passed | | +1 :green_heart: | checkstyle | 1m 56s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 55s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 18m 1s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 8m 43s | The patch does not generate ASF License warnings. 
| | | | 73m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3453 | | Optional Tests | dupname asflicense javac hadoopcheck xml compile checkstyle | | uname | Linux e69400d33ed6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 84f9900c99 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 141 (vs. ulimit of 3) | | modules | C: hbase-checkstyle hbase-annotations hbase-build-configuration hbase-logging hbase-protocol-shaded hbase-common hbase-metrics-api hbase-metrics hbase-hadoop-compat hbase-client hbase-zookeeper hbase-replication hbase-balancer hbase-resource-bundle hbase-http hbase-asyncfs hbase-procedure hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-shell hbase-endpoint hbase-backup hbase-it hbase-rest hbase-examples hbase-shaded hbase-shaded/hbase-shaded-client hbase-shaded/hbase-shaded-client-byo-hadoop hbase-shaded/hbase-shaded-mapreduce hbase-external-blockcache hbase-hbtop hbase-assembly hbase-shaded/hbase-shaded-testing-util hbase-shaded/hbase-shaded-testing-util-tester hbase-shaded/hbase-shaded-check-invariants hbase-shaded/hbase-shaded-with-hadoop-check-invariants hbase-archetypes hbase-archetypes/hbase-client-project hbase-archetypes/hbase-shaded-client-project hbase-archetypes/hbase-archetype-builder . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3453/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. 
[GitHub] [hbase] Apache-HBase commented on pull request #3452: HBASE-26058 Add TableDescriptor attribute 'COMPACTION_OFFLOAD_ENABLED'
Apache-HBase commented on pull request #3452: URL: https://github.com/apache/hbase/pull/3452#issuecomment-872360902 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 12s | HBASE-25714 passed | | +1 :green_heart: | compile | 1m 49s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 8m 31s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 12s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 0s | the patch passed | | +1 :green_heart: | compile | 1m 50s | the patch passed | | +1 :green_heart: | javac | 1m 50s | the patch passed | | +1 :green_heart: | shadedjars | 8m 28s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 20s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 215m 9s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 7m 5s | hbase-shell in the patch passed. 
| | | | 258m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3452/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3452 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 14c6170c71dc 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / da0fa3000e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3452/1/testReport/ | | Max. process+thread count | 3335 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3452/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3452: HBASE-26058 Add TableDescriptor attribute 'COMPACTION_OFFLOAD_ENABLED'
Apache-HBase commented on pull request #3452: URL: https://github.com/apache/hbase/pull/3452#issuecomment-872358878 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 15s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 57s | HBASE-25714 passed | | +1 :green_heart: | compile | 2m 20s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 8m 38s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 21s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 50s | the patch passed | | +1 :green_heart: | compile | 2m 10s | the patch passed | | +1 :green_heart: | javac | 2m 10s | the patch passed | | +1 :green_heart: | shadedjars | 8m 42s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 19s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 37s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 208m 38s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 7m 4s | hbase-shell in the patch passed. 
| | | | 255m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3452/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3452 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e5b979604832 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / da0fa3000e | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3452/1/testReport/ | | Max. process+thread count | 3037 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3452/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-22923) hbase:meta is assigned to localhost when we downgrade the hbase version
[ https://issues.apache.org/jira/browse/HBASE-22923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372892#comment-17372892 ] Hudson commented on HBASE-22923:

Results for branch branch-1 [build #142 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/142/]: (x) *{color:red}-1 overall{color}*
details (if available):
(x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/142//General_Nightly_Build_Report/]
(/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/142//JDK7_Nightly_Build_Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/142//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.

> hbase:meta is assigned to localhost when we downgrade the hbase version
> ---
>
> Key: HBASE-22923
> URL: https://issues.apache.org/jira/browse/HBASE-22923
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.4.8
> Reporter: wenbang
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 1.7.1, 2.4.5
>
> When we downgraded the HBase version (with rsgroup enabled), we found that
> the hbase:meta table could not be assigned.
> {code:java}
> master.AssignmentManager: Failed assignment of hbase:meta,,1.1588230740 to
> localhost,1,1, trying to assign elsewhere instead; try=1 of 10
> java.io.IOException: Call to localhost/127.0.0.1:1 failed on local exception:
> org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the
> failed servers list: localhost/127.0.0.1:1
> {code}
> hbase group list:
> HBASE_META group (hbase:meta and other system tables)
> default group
> 1. Downgrade all the servers in the HBASE_META group first.
> 2. The higher-version servers are in the default group.
> 3. hbase:meta is assigned to localhost.
> For system tables, we assign them to a server with the highest version
> (AssignmentManager#getExcludedServersForSystemTable), but this does not
> consider the rsgroup.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
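The fix direction described in the report above can be sketched in a few lines: restrict assignment candidates to the system table's RSGroup first, and only then keep the highest-version servers within that group. This is purely an illustrative model with made-up types and names (`serverGroup`, `serverVersion`, `candidates`), not the actual AssignmentManager code:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch (not HBase internals): pick assignment candidates for a
// system table by filtering to the table's RSGroup before applying the
// "highest version" rule, so hbase:meta cannot land on a server outside its group.
public class SystemTableAssignDemo {
  static List<String> candidates(Map<String, String> serverGroup,
                                 Map<String, Integer> serverVersion,
                                 String systemTableGroup) {
    // Step 1: only servers that belong to the system table's RSGroup.
    List<String> inGroup = serverGroup.entrySet().stream()
        .filter(e -> e.getValue().equals(systemTableGroup))
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
    // Step 2: among those, keep only the highest-version servers.
    int highest = inGroup.stream().mapToInt(serverVersion::get).max().orElse(0);
    return inGroup.stream()
        .filter(s -> serverVersion.get(s) == highest)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // Scenario from the report: HBASE_META group already downgraded,
    // higher-version servers remain in the default group.
    Map<String, String> group =
        Map.of("rs1", "HBASE_META", "rs2", "HBASE_META", "rs3", "default");
    Map<String, Integer> version = Map.of("rs1", 148, "rs2", 148, "rs3", 170);
    List<String> picked = candidates(group, version, "HBASE_META");
    // rs3 has the highest version overall but is outside the group, so it is excluded.
    if (!picked.contains("rs1") || !picked.contains("rs2") || picked.contains("rs3")) {
      throw new AssertionError("group-aware selection failed");
    }
    System.out.println("ok");
  }
}
```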
[GitHub] [hbase] nyl3532016 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r662392942

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java

## @@ -19,41 +18,483 @@ package org.apache.hadoop.hbase.compactionserver;
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.OptionalLong;
+import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.RejectedExecutionHandler;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.ChoreService;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.fs.HFileSystem;
+import org.apache.hadoop.hbase.monitoring.MonitoredTask;
+import org.apache.hadoop.hbase.monitoring.TaskMonitor;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
+import org.apache.hadoop.hbase.regionserver.throttle.PressureAwareCompactionThroughputController;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.FSTableDescriptors;
+import org.apache.hadoop.hbase.util.FutureUtils;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.StealJobQueue;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest.Builder;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionResponse;
 @InterfaceAudience.Private
-public class CompactionThreadManager {
+public class CompactionThreadManager implements ThroughputControllerService {
   private static Logger LOG = LoggerFactory.getLogger(CompactionThreadManager.class);
+  // Configuration key for the large compaction threads.
+  private final static String LARGE_COMPACTION_THREADS =
+      "hbase.compaction.server.thread.compaction.large";
+  private final static int LARGE_COMPACTION_THREADS_DEFAULT = 10;
+  // Configuration key for the small compaction threads.
+  private final static String SMALL_COMPACTION_THREADS =
+      "hbase.compaction.server.thread.compaction.small";
+  private final static int SMALL_COMPACTION_THREADS_DEFAULT = 50;
   private final Configuration conf;
   private final ConcurrentMap rsAdmins = new ConcurrentHashMap<>();
   private final HCompactionServer server;
+  private HFileSystem fs;
+  private Path rootDir;
+  private FSTableDescriptors tableDescriptors;
+  // compaction pools
+  private volatile ThreadPoolExecutor longCompactions;
+  private volatile ThreadPoolExecutor shortCompactions;
+  private ConcurrentHashMap runningCompactionTasks =
+      new ConcurrentHashMap<>();
+  private PressureAwareCompactionThroughputController throughputController;
+  private CompactionServerStorage storage = new CompactionServerStorage();
-  public CompactionThreadManager(final Configuration conf, HCompactionServer server) {
+  CompactionThreadManager(final Configuration conf, HCompactionServer server) {
     this.conf = conf;
     this.server = server;
+    try {
+      this.fs = new HFileSystem(this.conf, true);
+      this.rootDir = CommonFSUtils.getRootDir(this.conf);
+      this.tableDescriptors = new FSTableDescriptors(conf);
+      // start compaction resources
+      this.throughputController = new PressureAwareCompactionThroughputController();
+      this.throughputController.setConf(conf);
+      this.throughputController.setup(this);
+      startCompactionPool();
+    } catch (Throwable t) {
+
[GitHub] [hbase] Apache-HBase commented on pull request #3450: HBASE-25651 NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
Apache-HBase commented on pull request #3450: URL: https://github.com/apache/hbase/pull/3450#issuecomment-872339879 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 32s | branch-2 passed | | +1 :green_heart: | compile | 0m 59s | branch-2 passed | | +1 :green_heart: | shadedjars | 7m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 47s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 12s | the patch passed | | +1 :green_heart: | compile | 0m 57s | the patch passed | | +1 :green_heart: | javac | 0m 57s | the patch passed | | +1 :green_heart: | shadedjars | 7m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 43s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 7m 41s | hbase-shell in the patch passed. 
| | | | 39m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3450 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 2590e1b59201 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 1d6eb77ef8 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/3/testReport/ | | Max. process+thread count | 2686 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372879#comment-17372879 ] Xiaolin Ha edited comment on HBASE-26036 at 7/1/21, 3:23 PM: -

Hi [~stack], thanks for this question. The original purpose of BYTEBUFF_ALLOCATOR_CLASS is to simplify rewriting released DBBs; the problem can also be reproduced without it, and I have made another UT in [https://github.com/apache/hbase/pull/3449], please take a look. In the ByteBuffAllocator, DBBs from the pool are used in priority over allocating new ones. Once a DBB is released and deallocated, it is put back into the ByteBuff pool. The test case proceeds as follows:
# checkAndMutate performs a get of the cells; the ByteBuffAllocator allocates a new DBB for the table block being read, but HRegion.get() releases the DBB inside the method, before the cell value is checked;
# after the get, and after a short wait, another thread deallocates the DBB it used and puts it back into the pool;
# some other get/scan calls use the pooled DBBs to read other table blocks;
# checkAndMutate then gets the cell timestamp and checks the cell value, but the content of the DBB the cell refers to has been changed.
I think the customized BYTEBUFF_ALLOCATOR_CLASS is very useful for debugging such problems; it helps reduce rewriting work in the program and simplifies the test code. Thanks.

was (Author: xiaolin ha): Hi, [~stack], thanks for this question. The original purpose of BYTEBUFF_ALLOCATOR_CLASS is to simplify rewrite of released DBBs, without it the problem can also be reproduced, I have made another UT in [https://github.com/apache/hbase/pull/3449,] please take a look. In the ByteBuffAllocator, it uses DBBs in the pool priority over creating new ones. Once a DBB is released and deallocated, it will be put to the ByteBuff pool. 
The test case process is that, # checkAndMutate performs getting cells, the ByteBuffAllocator allocate a new DBB for the read table block, but the HRegion.get() releases the DBB inner the method before the cell value is checked; # after get, wait a little while, another thread will deallocate the DBB it used and put back to the pool; # some other get/scan calls use the pooled DBBs to read other table blocks; # the checkAndMutate get the cell timestamp and check the cell value, but the content of the cell referred DBB was changed; I think the customized BYTEBUFF_ALLOCATOR_CLASS is very useful for debugging such problems, it can help to reduce some rewrite works of the program and simplify the test codes. Thanks. > DBB released too early in HRegion.get() and dirty data for some operations > -- > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found there are regionserver JVM crashing problems on > our production clusters, the coredump infos are as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > 
org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @
[GitHub] [hbase] Apache-HBase commented on pull request #3450: HBASE-25651 NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
Apache-HBase commented on pull request #3450: URL: https://github.com/apache/hbase/pull/3450#issuecomment-872337456 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26036) DBB released too early in HRegion.get() and dirty data for some operations
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372879#comment-17372879 ] Xiaolin Ha commented on HBASE-26036:

Hi [~stack], thanks for this question. The original purpose of BYTEBUFF_ALLOCATOR_CLASS is to simplify rewriting released DBBs; the problem can also be reproduced without it, and I have made another UT in [https://github.com/apache/hbase/pull/3449], please take a look. In the ByteBuffAllocator, DBBs from the pool are used in priority over creating new ones. Once a DBB is released and deallocated, it is put back into the ByteBuff pool. The test case proceeds as follows:
# checkAndMutate performs a get of the cells; the ByteBuffAllocator allocates a new DBB for the table block being read, but HRegion.get() releases the DBB inside the method, before the cell value is checked;
# after the get, and after a short wait, another thread deallocates the DBB it used and puts it back into the pool;
# some other get/scan calls use the pooled DBBs to read other table blocks;
# checkAndMutate then gets the cell timestamp and checks the cell value, but the content of the DBB the cell refers to has been changed.
I think the customized BYTEBUFF_ALLOCATOR_CLASS is very useful for debugging such problems; it helps reduce rewriting work in the program and simplifies the test code. Thanks. 
> DBB released too early in HRegion.get() and dirty data for some operations > -- > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found there are regionserver JVM crashing problems on > our production clusters, the coredump infos are as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > 
org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @ 0x7f6a615e8228 [0x7f6a615e79c0+0x868] > J 20563 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1196 bytes) @ > 0x7f6a60711a4c [0x7f6a60711000+0xa4c] > J 19656% C2 > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V > (338 bytes) @ 0x7f6a6039a414 [0x7f6a6039a320+0xf4] > j org.apache.hadoop.hbase.ipc.RpcExecutor$1.run()V+24 > j java.lang.Thread.run()V+11 > v ~StubRoutines::call_stub > {code} > I have made a UT to reproduce this error, it can occur 100%。 > After HBASE-25187,the check result of the checkAndMutate will be false, > because it read wrong/dirty data from the released ByteBuff. -- This message was sent by Atlassian Jira (v8.3.4#803005)
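The failure mode described in the issue above reduces to a simple pattern: a pooled buffer is released back to its allocator before its contents are consumed, a later borrower gets the same buffer from the pool and overwrites it, and the first reader then observes dirty data. The following toy model (deliberately simplified; this is not HBase's ByteBuffAllocator, and all names here are invented for illustration) reproduces that dirty read deterministically:

```java
import java.util.ArrayDeque;

// Toy buffer pool illustrating the premature-release bug: releasing a buffer
// before its contents are checked lets another borrower overwrite it.
public class PooledBufferDemo {
  static final ArrayDeque<byte[]> pool = new ArrayDeque<>();

  static byte[] allocate() {
    byte[] b = pool.poll();            // prefer a pooled buffer over a new one
    return b != null ? b : new byte[8];
  }

  static void release(byte[] b) {      // "refCount reaches zero" -> back to pool
    pool.push(b);
  }

  public static void main(String[] args) {
    byte[] cell = allocate();
    cell[0] = 42;                      // block holding the cell value is read in
    release(cell);                     // BUG: released before the value is checked

    byte[] other = allocate();         // another get/scan reuses the same buffer...
    other[0] = 7;                      // ...and overwrites its contents

    // The original caller now checks the cell value and sees dirty data.
    if (cell[0] != 7) {
      throw new AssertionError("expected the dirty read in this demo");
    }
    System.out.println("dirty read reproduced: cell[0] = " + cell[0]);
  }
}
```

The fix direction in HBASE-26036 is the reverse ordering: keep the buffer retained until the caller has finished consuming the cell, and only then release it.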
[GitHub] [hbase] Apache9 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
Apache9 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r662375310

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java

## @@ -19,41 +18,441 @@ package org.apache.hadoop.hbase.compactionserver;
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.OptionalLong;
+import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.function.BiConsumer;
+import java.util.stream.Collectors;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.ChoreService;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.fs.HFileSystem;
+import org.apache.hadoop.hbase.monitoring.MonitoredTask;
+import org.apache.hadoop.hbase.monitoring.TaskMonitor;
+import org.apache.hadoop.hbase.regionserver.CompactThreadControl;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
+import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.FSTableDescriptors;
+import org.apache.hadoop.hbase.util.FutureUtils;
+import org.apache.hadoop.hbase.util.Pair;
+
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest.Builder;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionResponse;
+
+/**
+ * CompactionThreadManager reuse {@link HStore#selectCompaction}, {@link HStore#throttleCompaction},
+ * {@link CompactionContext#compact}, {@link CompactThreadControl}, which are core logic of
+ * compaction.
+ */
 @InterfaceAudience.Private
-public class CompactionThreadManager {
+public class CompactionThreadManager implements ThroughputControllerService {
   private static Logger LOG = LoggerFactory.getLogger(CompactionThreadManager.class);
+  // Configuration key for the large compaction threads.
+  private final static String LARGE_COMPACTION_THREADS =
+      "hbase.compaction.server.thread.compaction.large";
+  private final static int LARGE_COMPACTION_THREADS_DEFAULT = 10;
+  // Configuration key for the small compaction threads.
+  private final static String SMALL_COMPACTION_THREADS =
+      "hbase.compaction.server.thread.compaction.small";
+  private final static int SMALL_COMPACTION_THREADS_DEFAULT = 50;
   private final Configuration conf;
-  private final ConcurrentMap rsAdmins =
-      new ConcurrentHashMap<>();
   private final HCompactionServer server;
+  private HFileSystem fs;
+  private Path rootDir;
+  private FSTableDescriptors tableDescriptors;
+  private CompactThreadControl compactThreadControl;
+  private ConcurrentHashMap runningCompactionTasks =
+      new ConcurrentHashMap<>();
+  private static CompactionServerStorage storage = new CompactionServerStorage();
-  public CompactionThreadManager(final Configuration conf, HCompactionServer server) {
+  CompactionThreadManager(final Configuration conf, HCompactionServer server) {
     this.conf = conf;
     this.server = server;
+    try {
+      this.fs = new HFileSystem(this.conf, true);
+      this.rootDir = CommonFSUtils.getRootDir(this.conf);
+      this.tableDescriptors = new FSTableDescriptors(conf);
+      int largeThreads =
+          Math.max(1, conf.getInt(LARGE_COMPACTION_THREADS, LARGE_COMPACTION_THREADS_DEFAULT));
+      int smallThreads = conf.getInt(SMALL_COMPACTION_THREADS, SMALL_COMPACTION_THREADS_DEFAULT);
+      compactThreadControl = new CompactThreadControl(this, largeThreads, smallThreads,
+          COMPACTION_TASK_COMPARATOR, REJECTION);
+    } catch (Throwable t) {
+      LOG.error("Failed construction
[jira] [Commented] (HBASE-26018) Perf improvement in L1 cache
[ https://issues.apache.org/jira/browse/HBASE-26018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372846#comment-17372846 ] Viraj Jasani commented on HBASE-26018: -- [~stack] [~anoop.hbase] Please take a look when you get time.

> Perf improvement in L1 cache
>
> Key: HBASE-26018
> URL: https://issues.apache.org/jira/browse/HBASE-26018
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha-1, 2.3.5, 2.4.4
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5
>
> Attachments: computeIfPresent.png
>
> After HBASE-25698, all L1 caching strategies perform buffer.retain() in
> order to maintain the refCount atomically while retrieving cached blocks
> (CHM#computeIfPresent). Retaining the refCount is turning out to be a bit
> expensive in terms of performance. With the computeIfPresent API, CHM uses
> coarse-grained segment locking, and even though our computation is not
> complex (we just call the block retain API), it blocks other update APIs
> for the keys within the locked bucket. computeIfPresent also keeps showing
> up on flame graphs (one of them is attached). Specifically, when we see
> aggressive cache hits for meta blocks (with most blocks in cache), we want
> to get away from coarse-grained locking.
> One of the suggestions that came up while reviewing PR#3215 is to treat the
> cache read API as an optimistic read and deal with retain-based refCount
> issues by catching the respective exception and treating it as a cache
> miss. This should allow us to go ahead with a lockless get API.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
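The optimistic-read idea described in the issue above can be sketched as follows: use a plain, lock-free `ConcurrentHashMap.get()` instead of `computeIfPresent`, then attempt to retain the block afterwards; if the block was released or evicted in between, the retain fails and the lookup is simply treated as a cache miss. This is an illustrative model only; the `Block`/`tryRetain` names are invented here and are not HBase's real cache API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of an optimistic, lockless cache read: get() without segment locking,
// followed by a CAS-based retain whose failure is mapped to a cache miss.
public class OptimisticCacheReadDemo {
  static class Block {
    final AtomicInteger refCount = new AtomicInteger(1);

    // Only succeed if the block is still live (refCount > 0) at CAS time.
    boolean tryRetain() {
      for (;;) {
        int rc = refCount.get();
        if (rc <= 0) {
          return false;                 // already released -> caller sees a miss
        }
        if (refCount.compareAndSet(rc, rc + 1)) {
          return true;
        }
      }
    }

    void release() { refCount.decrementAndGet(); }
  }

  static final ConcurrentHashMap<String, Block> cache = new ConcurrentHashMap<>();

  static Block getBlock(String key) {
    Block b = cache.get(key);           // lock-free read path, no bucket lock held
    if (b == null || !b.tryRetain()) {
      return null;                      // a failed retain is just a cache miss
    }
    return b;
  }

  public static void main(String[] args) {
    cache.put("k", new Block());
    Block hit = getBlock("k");
    if (hit == null) throw new AssertionError("expected a hit");
    hit.release();

    cache.get("k").release();           // simulate eviction releasing the block
    if (getBlock("k") != null) throw new AssertionError("expected a miss");
    System.out.println("ok");
  }
}
```

The design trade-off matches the discussion: readers pay one CAS loop per hit instead of a segment lock, and the rare race with release degrades gracefully into a miss rather than a dirty read.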
[jira] [Resolved] (HBASE-26035) Redundant null check in the compareTo function
[ https://issues.apache.org/jira/browse/HBASE-26035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26035. --- Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.3+. Thanks [~almogtavor] for contributing. > Redundant null check in the compareTo function > --- > > Key: HBASE-26035 > URL: https://issues.apache.org/jira/browse/HBASE-26035 > Project: HBase > Issue Type: Bug > Components: metrics, Performance >Reporter: Almog Tavor >Assignee: Almog Tavor >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > Remove a redundant null check in the compareTo function > {code:java} > if (!(source instanceof MetricsRegionSourceImpl)) { > return -1; > } > MetricsRegionSourceImpl impl = (MetricsRegionSourceImpl) source; > if (impl == null) { > return -1; > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
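The redundancy in the snippet above follows from the semantics of `instanceof`: it evaluates to false for null, so once the `instanceof` test has passed, the cast result can never be null and the second check is unreachable. A minimal stand-in (not the real MetricsRegionSourceImpl, whose full definition is not shown here) demonstrating the simplified guard:

```java
// Stand-in class showing why the null check after instanceof is redundant:
// instanceof is false for null, so a passing check guarantees non-null.
public class CompareToNullCheckDemo {
  static class Impl {
    final int id;
    Impl(int id) { this.id = id; }

    int compareTo(Object source) {
      if (!(source instanceof Impl)) {
        return -1;                      // also covers source == null
      }
      Impl impl = (Impl) source;        // guaranteed non-null here
      return Integer.compare(this.id, impl.id);
    }
  }

  public static void main(String[] args) {
    Impl a = new Impl(1);
    if (a.compareTo(null) != -1) throw new AssertionError();     // handled by instanceof
    if (a.compareTo("not an Impl") != -1) throw new AssertionError();
    if (a.compareTo(new Impl(2)) >= 0) throw new AssertionError();
    System.out.println("ok");
  }
}
```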
[jira] [Updated] (HBASE-26035) Redundant null check in the compareTo function
[ https://issues.apache.org/jira/browse/HBASE-26035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26035: -- Fix Version/s: 2.4.5 2.3.6 2.5.0 3.0.0-alpha-1 > Redundant null check in the compareTo function > --- > > Key: HBASE-26035 > URL: https://issues.apache.org/jira/browse/HBASE-26035 > Project: HBase > Issue Type: Bug > Components: metrics, Performance >Reporter: Almog Tavor >Assignee: Almog Tavor >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > Remove a redundant null check in the compareTo function > {code:java} > if (!(source instanceof MetricsRegionSourceImpl)) { > return -1; > } > MetricsRegionSourceImpl impl = (MetricsRegionSourceImpl) source; > if (impl == null) { > return -1; > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26035) Redundant null check in the compareTo function
[ https://issues.apache.org/jira/browse/HBASE-26035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-26035: - Assignee: Almog Tavor > Redundant null check in the compareTo function > --- > > Key: HBASE-26035 > URL: https://issues.apache.org/jira/browse/HBASE-26035 > Project: HBase > Issue Type: Bug > Components: metrics, Performance >Reporter: Almog Tavor >Assignee: Almog Tavor >Priority: Minor > > Remove a redundant null check in the compareTo function > {code:java} > if (!(source instanceof MetricsRegionSourceImpl)) { > return -1; > } > MetricsRegionSourceImpl impl = (MetricsRegionSourceImpl) source; > if (impl == null) { > return -1; > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 merged pull request #3433: HBASE-26035 Redundant null check in the compareTo function
Apache9 merged pull request #3433: URL: https://github.com/apache/hbase/pull/3433 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-26035) Redundant null check in the compareTo function
[ https://issues.apache.org/jira/browse/HBASE-26035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26035: -- Component/s: Performance metrics
[GitHub] [hbase] Apache9 commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
Apache9 commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872313679 In general I agree that '-1' should be treated as never expire, but this is a behavior change, right? Some users may already rely on the current behavior, and this change would break their usage, so I'm not sure whether we should change it.
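For context, the semantics under discussion can be sketched with a standalone expiry check. This is not HBase code: `isExpired` and `FOREVER_MS` are hypothetical names illustrating the proposal that a per-cell TTL of -1 behaves like the column-family "forever" TTL:

```java
public class TtlSketch {
    // Hypothetical sentinel mirroring the proposed semantics: -1 means never
    // expire, matching the column-family "forever" TTL. Not a real HBase constant.
    static final long FOREVER_MS = -1L;

    // Returns true when a cell written at writeTimeMs with per-cell TTL ttlMs
    // is expired at nowMs. A TTL of -1 never expires; 0 is treated here as
    // "no per-cell TTL set" and also never expires.
    static boolean isExpired(long writeTimeMs, long ttlMs, long nowMs) {
        if (ttlMs == FOREVER_MS || ttlMs == 0) {
            return false;
        }
        return nowMs - writeTimeMs >= ttlMs;
    }

    public static void main(String[] args) {
        System.out.println(isExpired(0, 1000, 500));                  // prints false
        System.out.println(isExpired(0, 1000, 1500));                 // prints true
        System.out.println(isExpired(0, FOREVER_MS, Long.MAX_VALUE)); // prints false
    }
}
```

The compatibility concern raised above is that, today, a cell TTL of -1 presumably takes a different branch, so any user relying on the current handling would see changed behavior after this patch.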
[GitHub] [hbase] Apache9 opened a new pull request #3453: HBASE-26059 Set version as 3.0.0-alpha-1 in master in prep for first …
Apache9 opened a new pull request #3453: URL: https://github.com/apache/hbase/pull/3453 …RC of 3.0.0-alpha-1
[jira] [Created] (HBASE-26059) Set version as 3.0.0-alpha-1 in master in prep for first RC of 3.0.0-alpha-1
Duo Zhang created HBASE-26059: - Summary: Set version as 3.0.0-alpha-1 in master in prep for first RC of 3.0.0-alpha-1 Key: HBASE-26059 URL: https://issues.apache.org/jira/browse/HBASE-26059 Project: HBase Issue Type: Sub-task Components: build, pom Reporter: Duo Zhang Assignee: Duo Zhang Fix For: 3.0.0-alpha-1
[GitHub] [hbase] Apache-HBase commented on pull request #3450: HBASE-25651 NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
Apache-HBase commented on pull request #3450: URL: https://github.com/apache/hbase/pull/3450#issuecomment-872289349 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 10s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 15s | branch-2 passed | | +1 :green_heart: | compile | 0m 54s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 47s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 46s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 2s | the patch passed | | +1 :green_heart: | compile | 0m 53s | the patch passed | | +1 :green_heart: | javac | 0m 53s | the patch passed | | +1 :green_heart: | shadedjars | 6m 48s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 44s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 41s | hbase-client in the patch passed. | | -1 :x: | unit | 6m 57s | hbase-shell in the patch failed. 
| | | | 38m 13s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3450 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux db8b8a46ca41 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 1d6eb77ef8 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-shell.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/testReport/ | | Max. process+thread count | 2474 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3445: HBASE-26050 Remove the reflection used in FSUtils.isInSafeMode
Apache-HBase commented on pull request #3445: URL: https://github.com/apache/hbase/pull/3445#issuecomment-872289250 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 15s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 31s | master passed | | +1 :green_heart: | compile | 1m 31s | master passed | | +1 :green_heart: | shadedjars | 9m 58s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 50s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 14s | the patch passed | | +1 :green_heart: | compile | 1m 30s | the patch passed | | +1 :green_heart: | javac | 1m 30s | the patch passed | | +1 :green_heart: | shadedjars | 9m 49s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 48s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 198m 26s | hbase-server in the patch failed. 
| | | | 236m 51s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3445 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 6ca68ff214bf 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4c7da496ad | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/testReport/ | | Max. process+thread count | 2453 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3450: HBASE-25651 NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
Apache-HBase commented on pull request #3450: URL: https://github.com/apache/hbase/pull/3450#issuecomment-872287016 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 24s | branch-2 passed | | +1 :green_heart: | compile | 0m 49s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 59s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 39s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 19s | the patch passed | | +1 :green_heart: | compile | 0m 50s | the patch passed | | +1 :green_heart: | javac | 0m 50s | the patch passed | | +1 :green_heart: | shadedjars | 6m 0s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 39s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 38s | hbase-client in the patch passed. | | -1 :x: | unit | 7m 54s | hbase-shell in the patch failed. 
| | | | 35m 2s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3450 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 63f6d43dfef9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 1d6eb77ef8 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-shell.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/testReport/ | | Max. process+thread count | 2298 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3450: HBASE-25651 NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
Apache-HBase commented on pull request #3450: URL: https://github.com/apache/hbase/pull/3450#issuecomment-872286434 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 18s | branch-2 passed | | +1 :green_heart: | compile | 1m 28s | branch-2 passed | | +1 :green_heart: | checkstyle | 0m 45s | branch-2 passed | | +1 :green_heart: | spotbugs | 1m 8s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 17s | the patch passed | | +1 :green_heart: | compile | 1m 26s | the patch passed | | +1 :green_heart: | javac | 1m 26s | the patch passed | | +1 :green_heart: | checkstyle | 0m 44s | the patch passed | | -0 :warning: | rubocop | 0m 15s | The patch generated 7 new + 610 unchanged - 5 fixed = 617 total (was 615) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 29s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 1m 20s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. 
| | | | 34m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3450 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile rubocop | | uname | Linux 07f645cd21cb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 1d6eb77ef8 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | rubocop | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | Max. process+thread count | 95 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3450/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 rubocop=0.80.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25729) Upgrade to latest hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-25729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25729: -- Fix Version/s: (was: 3.0.0-alpha-2) 3.0.0-alpha-1 > Upgrade to latest hbase-thirdparty > -- > > Key: HBASE-25729 > URL: https://issues.apache.org/jira/browse/HBASE-25729 > Project: HBase > Issue Type: Sub-task > Components: build, thirdparty >Affects Versions: 2.4.2 >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25967) The readRequestsCount does not calculate when the outResults is empty
[ https://issues.apache.org/jira/browse/HBASE-25967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25967: -- Fix Version/s: (was: 3.0.0-alpha-2) 3.0.0-alpha-1 > The readRequestsCount does not calculate when the outResults is empty > - > > Key: HBASE-25967 > URL: https://issues.apache.org/jira/browse/HBASE-25967 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > This metric counts requests, so it should not depend on whether the result is empty.
[jira] [Updated] (HBASE-26002) MultiRowMutationEndpoint should return the result of the conditional update
[ https://issues.apache.org/jira/browse/HBASE-26002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26002: -- Fix Version/s: (was: 3.0.0-alpha-2) 3.0.0-alpha-1 > MultiRowMutationEndpoint should return the result of the conditional update > --- > > Key: HBASE-26002 > URL: https://issues.apache.org/jira/browse/HBASE-26002 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5 > > > HBASE-25703 added support for the conditional update in > MultiRowMutationEndpoint, but MultiRowMutationEndpoint doesn't return the > result of the conditional update (whether or not the mutations are executed). > In this Jira, we will make MultiRowMutationEndpoint return the result of the > conditional update. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25902) 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is earlier than hbase-2.3.0 first
[ https://issues.apache.org/jira/browse/HBASE-25902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25902: -- Fix Version/s: (was: 3.0.0-alpha-2) 3.0.0-alpha-1 > 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is > earlier than hbase-2.3.0 first > - > > Key: HBASE-25902 > URL: https://issues.apache.org/jira/browse/HBASE-25902 > Project: HBase > Issue Type: Bug > Components: meta, Operability >Affects Versions: 2.3.0, 2.4.0 >Reporter: Michael Stack >Assignee: Viraj Jasani >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: NoSuchColumnFamilyException.png > > > Making note of this issue in case others run into it. At my place of employ, > we tried to upgrade a cluster that was an hbase-1.2.x version to an > hbase-2.3.5 but it failed because meta didn't have the 'table' column family. > Up to 2.3.0, hbase:meta was hardcoded. HBASE-12035 added the 'table' CF for > hbase-2.0.0. HBASE-23782 (2.3.0) undid hardcoding of the hbase:meta schema; > i.e. reading hbase:meta schema from the filesystem. The hbase:meta schema is > only created on initial install. If an upgrade over existing data, the > hbase-1 hbase:meta will not be suitable for hbase-2.3.x context as it will be > missing columnfamilies needed to run (HBASE-23055 made it so hbase:meta could > be altered (2.3.0) but probably of no use since Master won't come up). > It would be a nice-to-have if a user could go from hbase1 to hbase.2.3.0 w/o > having to first install an hbase2 that is earlier than 2.3.0 but needs to be > demand before we would work on it; meantime, install an intermediate hbase2 > version before going to hbase-2.3.0+ if coming from hbase-1.x -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-26028) The view as json page shows exception when using TinyLfuBlockCache
[ https://issues.apache.org/jira/browse/HBASE-26028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26028: -- Fix Version/s: (was: 3.0.0-alpha-2) 3.0.0-alpha-1 > The view as json page shows exception when using TinyLfuBlockCache > -- > > Key: HBASE-26028 > URL: https://issues.apache.org/jira/browse/HBASE-26028 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: HBASE-26028-afterpatch.jpg, HBASE-26028-beforepatch.jpg > > > Some variable in TinyLfuBlockCache should be marked as transient. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-22923) hbase:meta is assigned to localhost when we downgrade the hbase version
[ https://issues.apache.org/jira/browse/HBASE-22923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22923: -- Fix Version/s: (was: 3.0.0-alpha-2) 3.0.0-alpha-1 > hbase:meta is assigned to localhost when we downgrade the hbase version > --- > > Key: HBASE-22923 > URL: https://issues.apache.org/jira/browse/HBASE-22923 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.8 >Reporter: wenbang >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 1.7.1, 2.4.5 > > > When we downgrade the hbase version(rsgroup enable), we found that the > hbase:meta table could not be assigned. > {code:java} > master.AssignmentManager: Failed assignment of hbase:meta,,1.1588230740 to > localhost,1,1, trying to assign elsewhere instead; try=1 of 10 > java.io.IOException: Call to localhost/127.0.0.1:1 failed on local exception: > org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the > failed servers list: localhost/127.0.0.1:1 > {code} > hbase group list: > HBASE_META group(hbase:meta and other system tables) > default group > 1.Down grade all servers in HBASE_META first > 2.higher version servers is in default > 3.hbase:meta assigned to localhost > For system table, we assign them to a server with highest version. > AssignmentManager#getExcludedServersForSystemTable > But did not consider the rsgroup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-26053) Remove CHANGES and RELEASENOTES
[ https://issues.apache.org/jira/browse/HBASE-26053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26053: -- Fix Version/s: (was: 3.0.0-alpha-1) > Remove CHANGES and RELEASENOTES > --- > > Key: HBASE-26053 > URL: https://issues.apache.org/jira/browse/HBASE-26053 > Project: HBase > Issue Type: Sub-task > Components: build >Reporter: Duo Zhang >Priority: Major > > Let's do this first. > The release script has a flag to skip the generating stage so it could be > done later. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26053) Remove CHANGES and RELEASENOTES
[ https://issues.apache.org/jira/browse/HBASE-26053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-26053: - Assignee: (was: Duo Zhang)
[jira] [Commented] (HBASE-26053) Remove CHANGES and RELEASENOTES
[ https://issues.apache.org/jira/browse/HBASE-26053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372806#comment-17372806 ] Duo Zhang commented on HBASE-26053: --- Oh there is no CHANGES and RELEASENOTES on master branch so not a problem for releasing 3.0.0-alpha-1 then...
[jira] [Assigned] (HBASE-26053) Remove CHANGES and RELEASENOTES
[ https://issues.apache.org/jira/browse/HBASE-26053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-26053: - Assignee: Duo Zhang
[GitHub] [hbase] Apache-HBase commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
Apache-HBase commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872260193 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 8s | master passed | | +1 :green_heart: | compile | 1m 5s | master passed | | +1 :green_heart: | shadedjars | 8m 22s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 56s | the patch passed | | +1 :green_heart: | compile | 1m 5s | the patch passed | | +1 :green_heart: | javac | 1m 5s | the patch passed | | +1 :green_heart: | shadedjars | 8m 54s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 132m 5s | hbase-server in the patch passed. 
| | | | 163m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3451 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 999358c6f369 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 84f9900c99 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/1/testReport/ | | Max. process+thread count | 3232 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3451: HBASE-26056 Cell TTL set to -1 should mean never expire, the same as CF TTL
Apache-HBase commented on pull request #3451: URL: https://github.com/apache/hbase/pull/3451#issuecomment-872256021 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 44s | master passed | | +1 :green_heart: | compile | 1m 24s | master passed | | +1 :green_heart: | shadedjars | 8m 50s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 16s | the patch passed | | +1 :green_heart: | compile | 1m 11s | the patch passed | | +1 :green_heart: | javac | 1m 11s | the patch passed | | +1 :green_heart: | shadedjars | 8m 14s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 40s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 125m 19s | hbase-server in the patch passed. 
| | | | 158m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3451 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 878118e45271 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 84f9900c99 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/1/testReport/ | | Max. process+thread count | 4006 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3451/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3445: HBASE-26050 Remove the reflection used in FSUtils.isInSafeMode
Apache-HBase commented on pull request #3445: URL: https://github.com/apache/hbase/pull/3445#issuecomment-872246533

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 59s | master passed |
| +1 :green_heart: | compile | 1m 1s | master passed |
| +1 :green_heart: | shadedjars | 9m 18s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 49s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 56s | the patch passed |
| +1 :green_heart: | compile | 1m 16s | the patch passed |
| +1 :green_heart: | javac | 1m 16s | the patch passed |
| +1 :green_heart: | shadedjars | 11m 27s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 45s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 147m 47s | hbase-server in the patch failed. |
| | | | 184m 7s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3445 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux ca4e9b6c13b7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 4c7da496ad |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/testReport/ |
| Max. process+thread count | 3592 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3445/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3426: HBASE-26032 Make HRegion.getStores() an O(1) operation
Apache-HBase commented on pull request #3426: URL: https://github.com/apache/hbase/pull/3426#issuecomment-872224769

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 44s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 35s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 47s | master passed |
| +1 :green_heart: | compile | 1m 43s | master passed |
| +1 :green_heart: | shadedjars | 11m 11s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 9s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 52s | the patch passed |
| +1 :green_heart: | compile | 1m 45s | the patch passed |
| +1 :green_heart: | javac | 1m 45s | the patch passed |
| +1 :green_heart: | shadedjars | 10m 50s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 8s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 135m 44s | hbase-server in the patch passed. |
| +1 :green_heart: | unit | 2m 0s | hbase-examples in the patch passed. |
| | | | 179m 26s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3426/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3426 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux f5e3c7d82e18 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 4c7da496ad |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3426/4/testReport/ |
| Max. process+thread count | 3257 (vs. ulimit of 3) |
| modules | C: hbase-server hbase-examples U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3426/4/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3448: HBASE-26057 Remove reflections used to access Hadoop 2 API in FanOutOneBlockAsyncDFSOutputHelper
Apache-HBase commented on pull request #3448: URL: https://github.com/apache/hbase/pull/3448#issuecomment-872221355

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 47s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 32s | master passed |
| +1 :green_heart: | compile | 1m 35s | master passed |
| +1 :green_heart: | shadedjars | 8m 30s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 3s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 25s | the patch passed |
| +1 :green_heart: | compile | 1m 36s | the patch passed |
| +1 :green_heart: | javac | 1m 36s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 29s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 0s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 56s | hbase-asyncfs in the patch passed. |
| +1 :green_heart: | unit | 158m 19s | hbase-server in the patch passed. |
| | | | 195m 43s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3448/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3448 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 3f2e7431039f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 147b030c1f |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3448/2/testReport/ |
| Max. process+thread count | 3365 (vs. ulimit of 3) |
| modules | C: hbase-asyncfs hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3448/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3426: HBASE-26032 Make HRegion.getStores() an O(1) operation
Apache-HBase commented on pull request #3426: URL: https://github.com/apache/hbase/pull/3426#issuecomment-872220599

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 40s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 48s | master passed |
| +1 :green_heart: | compile | 1m 42s | master passed |
| +1 :green_heart: | shadedjars | 8m 17s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 3s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 34s | the patch passed |
| +1 :green_heart: | compile | 1m 51s | the patch passed |
| +1 :green_heart: | javac | 1m 51s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 19s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 3s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 136m 15s | hbase-server in the patch passed. |
| +1 :green_heart: | unit | 2m 10s | hbase-examples in the patch passed. |
| | | | 173m 34s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3426/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3426 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 4cc173c2ff31 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 4c7da496ad |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3426/4/testReport/ |
| Max. process+thread count | 3381 (vs. ulimit of 3) |
| modules | C: hbase-server hbase-examples U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3426/4/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.