[jira] [Created] (HBASE-26045) Master control the global throughput of all compaction servers
Yulin Niu created HBASE-26045:
---------------------------------

Summary: Master control the global throughput of all compaction servers
Key: HBASE-26045
URL: https://issues.apache.org/jira/browse/HBASE-26045
Project: HBase
Issue Type: Sub-task
Reporter: Yulin Niu
Assignee: Yulin Niu

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3436: HBASE-26036 DBB released too early and dirty data for checkAndMu…
Apache-HBase commented on pull request #3436:
URL: https://github.com/apache/hbase/pull/3436#issuecomment-871117922

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 27s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 34s | master passed |
| +1 :green_heart: | compile | 3m 58s | master passed |
| +1 :green_heart: | checkstyle | 1m 30s | master passed |
| +1 :green_heart: | spotbugs | 2m 41s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 36s | the patch passed |
| +1 :green_heart: | compile | 3m 58s | the patch passed |
| +1 :green_heart: | javac | 3m 58s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 29s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 18m 1s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | spotbugs | 3m 13s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | | 51m 41s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3436/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3436 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 74aba352bb67 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 147b030c1f |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 96 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3436/3/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] jojochuang opened a new pull request #3443: HBASE-25516 jdk11 reflective access Field.class.getDeclaredField("modifiers") not supported
jojochuang opened a new pull request #3443:
URL: https://github.com/apache/hbase/pull/3443

The patch is inspired by powermock (https://github.com/powermock/powermock/pull/1010/files) to work around the forbidden access to the `modifiers` field.
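For context on the failure mode the PR works around: the classic `Field.class.getDeclaredField("modifiers")` hack degrades on newer JDKs; illegal-access warnings appear on JDK 9-11, and from JDK 12 (JDK-8210522) the `modifiers` field is filtered out of reflection results entirely, so the lookup throws `NoSuchFieldException`. A minimal probe sketch (hypothetical class name, not HBase or powermock code):

```java
import java.lang.reflect.Field;

// Hypothetical probe: detect whether java.lang.reflect.Field hides its
// own "modifiers" field from reflection (true on JDK 12+, per JDK-8210522).
public class ModifiersProbe {
    static boolean isModifiersFieldFiltered() {
        try {
            // Pre-JDK-12 this returns the Field; later JDKs filter it out.
            Field.class.getDeclaredField("modifiers");
            return false;
        } catch (NoSuchFieldException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("modifiers field filtered: " + isModifiersFieldFiltered());
    }
}
```

The powermock-style workaround the PR borrows avoids this lookup rather than fighting the filter.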
[jira] [Created] (HBASE-26044) Add CompactionServer Web UI
Yulin Niu created HBASE-26044:
---------------------------------

Summary: Add CompactionServer Web UI
Key: HBASE-26044
URL: https://issues.apache.org/jira/browse/HBASE-26044
Project: HBase
Issue Type: Sub-task
Reporter: Yulin Niu
Assignee: Yulin Niu
[jira] [Created] (HBASE-26043) CompactionServer support compact TTL data
Yulin Niu created HBASE-26043:
---------------------------------

Summary: CompactionServer support compact TTL data
Key: HBASE-26043
URL: https://issues.apache.org/jira/browse/HBASE-26043
Project: HBase
Issue Type: Sub-task
Reporter: Yulin Niu
Assignee: Yulin Niu
[GitHub] [hbase] sunhelly commented on pull request #3436: HBASE-26036 DBB released too early and dirty data for checkAndMu…
sunhelly commented on pull request #3436:
URL: https://github.com/apache/hbase/pull/3436#issuecomment-871113549

> Sorry, I do not fully understand the problem here.
>
> Skimmed the code, the change is to use RegionScanner directly instead of the get method. Does this mean the get method is broken as it releases the ByteBuff too early? But I think we use this method everywhere in the HRegion class, we do not need to change them?
>
> Thanks.

Hi, @Apache9, yes, the get method is broken; the other places using this method should also be changed.
[jira] [Commented] (HBASE-26042) WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[ https://issues.apache.org/jira/browse/HBASE-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371814#comment-17371814 ] Duo Zhang commented on HBASE-26042: --- {quote} Interesting is how more than one thread is able to be inside the synchronize block in mvcc#begin seemingly {quote} Different regions? > WAL lockup on 'sync failed' > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > > > Key: HBASE-26042 > URL: https://issues.apache.org/jira/browse/HBASE-26042 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.5 >Reporter: Michael Stack >Priority: Major > > Making note of issue seen in production cluster. > Node had been struggling under load for a few days with slow syncs up to 10 > seconds, a few STUCK MVCCs from which it recovered and some java pauses up to > three seconds in length. > Then the below happened: > {code:java} > 2021-06-27 13:41:27,604 WARN [AsyncFSWAL-0-hdfs://:8020/hbase] > wal.AsyncFSWAL: sync > failedorg.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer {code} > ... and WAL turned dead in the water. Scanners start expiring. RPC prints > text versions of requests complaining requestsTooSlow. Then we start to see > these: > {code:java} > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=552128301, WAL system stuck? {code} > Whats supposed to happen when other side goes away like this is that we will > roll the WAL – go set up a new one. You can see it happening if you run > {code:java} > mvn test > -Dtest=org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL#testBrokenWriter > {code} > I tried hacking the test to repro the above hang by throwing same exception > in above test (on linux because need epoll to repro) but all just worked. > Thread dumps of the hungup WAL subsystem are a little odd. 
The log roller is > stuck w/o timeout trying to write a long on the WAL header: > > {code:java} > Thread 9464: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, > line=175 (Compiled frame) > - java.util.concurrent.CompletableFuture$Signaller.block() @bci=19, > line=1707 (Compiled frame) > - > java.util.concurrent.ForkJoinPool.managedBlock(java.util.concurrent.ForkJoinPool$ManagedBlocker) > @bci=119, line=3323 (Compiled frame) > - java.util.concurrent.CompletableFuture.waitingGet(boolean) @bci=115, > line=1742 (Compiled frame) > - java.util.concurrent.CompletableFuture.get() @bci=11, line=1908 (Compiled > frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(java.util.function.Consumer) > @bci=16, line=189 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(byte[], > org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALHeader) > @bci=9, line=202 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(org.apache.hadoop.fs.FileSystem, > org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration, boolean, > long) @bci=107, line=170 (Compiled frame) > - > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(org.apache.hadoop.conf.Configuration, > org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, boolean, long, > org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup, java.lang.Class) > @bci=61, line=113 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) > @bci=22, line=651 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) > @bci=2, line=128 (Compiled frame) > - 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(boolean) > @bci=101, line=797 (Compiled frame) > - org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(long) > @bci=18, line=263 (Compiled frame) > - org.apache.hadoop.hbase.wal.AbstractWALRoller.run() @bci=198, line=179 > (Compiled frame) {code} > > Other threads are BLOCKED trying to append the WAL w/ flush markers etc. > unable to add the ringbuffer: > > {code:java} > Thread 9465: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 > (Compiled frame) > -
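In the roller stack above, `AsyncProtobufLogWriter.write` parks in `CompletableFuture.get()` with no timeout, which is why the stuck roll never surfaces as an error. A sketch of what a bounded wait could look like (hypothetical helper, not the actual HBase code or fix):

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWalWait {
    // Hypothetical helper: wait for the header write to complete, but give
    // up after a deadline instead of parking forever as the thread dump shows.
    static long awaitWriteLength(CompletableFuture<Long> written, long timeoutMs)
            throws IOException, InterruptedException {
        try {
            return written.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Surfaces the hang so the roller can abort/retry instead of wedging.
            throw new IOException("WAL header write stuck for " + timeoutMs + "ms", e);
        } catch (ExecutionException e) {
            throw new IOException("WAL header write failed", e.getCause());
        }
    }
}
```

With a timeout the roller would fail the roll attempt loudly rather than leaving every appender blocked behind a dead writer.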
[jira] [Commented] (HBASE-25729) Upgrade to latest hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-25729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371813#comment-17371813 ]

Michael Stack commented on HBASE-25729:
---------------------------------------

Thanks [~pankajkumar] ... Then let's leave thirdparty 3.5.1 in for hbase-2.4.5. I'll re-close this issue tomorrow unless there are other comments out there. Thanks.

> Upgrade to latest hbase-thirdparty
> ----------------------------------
>
> Key: HBASE-25729
> URL: https://issues.apache.org/jira/browse/HBASE-25729
> Project: HBase
> Issue Type: Sub-task
> Components: build, thirdparty
> Affects Versions: 2.4.2
> Reporter: Andrew Kyle Purtell
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5
>
[jira] [Resolved] (HBASE-25960) Build includes unshaded netty .so; clashes w/ downstreamers who would use a different version of netty
[ https://issues.apache.org/jira/browse/HBASE-25960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack resolved HBASE-25960.
-----------------------------------
Fix Version/s: thirdparty-4.0.0
               thirdparty-3.5.2
Assignee: Michael Stack
Resolution: Cannot Reproduce

Resolving this issue as "Cannot Reproduce" (it could also be "Implemented"). Whatever was up w/ thirdparty in older versions no longer seems to happen. I tried to add enforcer rules, but there is no amenable plugin for checking a file inside a jar. I could add a bit of ant hackery to the thirdparty netty build to set properties if the file is present and fail the build if so, but it'd be ugly. Closing this out.

> Build includes unshaded netty .so; clashes w/ downstreamers who would use a
> different version of netty
> ---------------------------------------------------------------------------
>
> Key: HBASE-25960
> URL: https://issues.apache.org/jira/browse/HBASE-25960
> Project: HBase
> Issue Type: Bug
> Components: build
> Reporter: Michael Stack
> Assignee: Michael Stack
> Priority: Major
> Fix For: thirdparty-3.5.2, thirdparty-4.0.0
>
>
> A coworker was trying to use the hbase client in a fat application that uses a
> different netty version to what hbase uses internally. Their app would fail
> to launch because it kept bumping into an incompatible netty .so lib. Here
> are the unshaded netty .so's we bundle, looking at hbase-2.4.1...:
> ./lib/hbase-shaded-netty-3.4.1.jar has:
> {code}
> META-INF/native/libnetty_transport_native_epoll_aarch_64.so
> META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64.so
> META-INF/native/libnetty_transport_native_epoll_x86_64.so
> {code}
> (HBASE-25959 should fix the non-relocation of
> libnetty_transport_native_epoll_aarch_64).
> ./lib/shaded-clients/hbase-shaded-client-byo-hadoop-2.4.1.1-apple.jar has the
> same three .sos, as does
> ./lib/shaded-clients/hbase-shaded-mapreduce-2.4.1.1-apple.jar
> and ./lib/shaded-clients/hbase-shaded-client-2.4.1.1-apple.jar
> We even bundle ./lib/netty-all-4.1.17.Final.jar which unsurprisingly has the
> netty .sos in it.
> Looking at published builds of hbase-thirdparty, I see that these too include
> the above trio of .sos... The hbase-shaded-netty includes them in 3.4.1
> https://repo1.maven.org/maven2/org/apache/hbase/thirdparty/hbase-shaded-netty/3.4.1/
> as does 3.5.0.
> I just tried running a build of hbase-thirdparty and it does NOT include the
> extras
> META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_aarch_64.so
> META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64.so
> (it has the fix for aarch included... when I built)
> Here is link to the snapshot I made:
> https://repository.apache.org/content/repositories/orgapachehbase-1451/org/apache/hbase/thirdparty/hbase-shaded-netty/3.5.1-stack4/
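On the wished-for enforcer check ("no amenable plugin to check a file in a jar"): the verification itself needs nothing beyond the JDK's jar API. A small sketch (hypothetical class, not an actual enforcer rule or HBase build code):

```java
import java.io.IOException;
import java.util.jar.JarFile;

public class JarEntryCheck {
    // Hypothetical check: open a jar and ask whether a given entry (e.g. an
    // unshaded native netty .so) is present, the kind of verification the
    // comment above wished an enforcer plugin could do.
    static boolean hasEntry(String jarPath, String entryName) throws IOException {
        try (JarFile jar = new JarFile(jarPath)) {
            return jar.getJarEntry(entryName) != null;
        }
    }
}
```

Wired into a build as a small script or test, failing when `hasEntry(..., "META-INF/native/libnetty_transport_native_epoll_x86_64.so")` returns true would catch a regression like this one early.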
[jira] [Commented] (HBASE-26042) WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[ https://issues.apache.org/jira/browse/HBASE-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371796#comment-17371796 ] Michael Stack commented on HBASE-26042: --- Here is what it looked like when I tried to repro the hang-up in test env: {code:java} 2021-06-29 20:33:35,183 WARN [AsyncFSWAL-0-hdfs://localhost.localdomain:37680/user/mstack/test-data/329a3111-d35d-18ae-9107-065eee2a4e62] wal.AsyncFSWAL(299): sync failed org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: Injected(..) failed: Connection reset by peer 2021-06-29 20:33:35,184 DEBUG [LogRoller] wal.AbstractWALRoller(170): WAL wal:(num 1625024015181) roll requested {code} See how we immediately follow the error with 'WAL wal:(num 1625024015181) roll requested '. Looking at the thread dump, four threads are trying to write a flush marker to the WAL... three have made it inside mvcc#begin and are trying to stamp sequenceid (see thread dump above) while one other is here: {code:java} Thread 9466: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=175 (Compiled frame) - java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() @bci=1, line=836 (Compiled frame) - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node, int) @bci=67, line=870 (Compiled frame) - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, line=1199 (Compiled frame) - java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock() @bci=5, line=943 (Compiled frame) - org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(org.apache.hadoop.hbase.wal.WAL, long, java.util.Collection, org.apache.hadoop.hbase.monitoring.MonitoredTask, boolean, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker) @bci=306, line=2641 
(Compiled frame) - org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(org.apache.hadoop.hbase.wal.WAL, long, java.util.Collection, org.apache.hadoop.hbase.monitoring.MonitoredTask, boolean, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker) @bci=11, line=2573 (Compiled frame) - org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(java.util.Collection, org.apache.hadoop.hbase.monitoring.MonitoredTask, boolean, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker) @bci=13, line=2547 (Compiled frame) - org.apache.hadoop.hbase.regionserver.HRegion.flushcache(boolean, boolean, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker) @bci=558, line=2436 (Compiled frame) - org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(org.apache.hadoop.hbase.regionserver.HRegion, boolean, boolean, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker) @bci=86, line=610 (Compiled frame) - org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(org.apache.hadoop.hbase.regionserver.FlushType) @bci=738, line=288 (Compiled frame) - org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$700(org.apache.hadoop.hbase.regionserver.MemStoreFlusher, org.apache.hadoop.hbase.regionserver.FlushType) @bci=2, line=67 (Compiled frame) - org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run() @bci=128, line=344 (Compiled frame) {code} Then there is the logroller BLOCKED as per above. Looks like a deadlock but I can't see it – yet anyways. > WAL lockup on 'sync failed' > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > > > Key: HBASE-26042 > URL: https://issues.apache.org/jira/browse/HBASE-26042 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.5 >Reporter: Michael Stack >Priority: Major > > Making note of issue seen in production cluster. 
> Node had been struggling under load for a few days with slow syncs up to 10 > seconds, a few STUCK MVCCs from which it recovered and some java pauses up to > three seconds in length. > Then the below happened: > {code:java} > 2021-06-27 13:41:27,604 WARN [AsyncFSWAL-0-hdfs://:8020/hbase] > wal.AsyncFSWAL: sync > failedorg.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer {code} > ... and WAL turned dead in the water. Scanners start expiring. RPC prints > text versions of requests complaining requestsTooSlow. Then we start to see > these: > {code:java} > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result
[GitHub] [hbase] dingwei-2017 commented on pull request #3423: HBASE-26017 fix pe tool totalRows exceed maximum of int
dingwei-2017 commented on pull request #3423:
URL: https://github.com/apache/hbase/pull/3423#issuecomment-871079941

@Apache9 sure. When I run the following command:

bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --size=2048 --nomapred --table=TestTable --presplit=100 sequentialWrite 100

the terminal shows the following messages:

2021-06-23 06:52:17,654 INFO [TestClient-9] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -193273524 for -21474836 rows
2021-06-23 06:52:17,654 INFO [TestClient-8] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -171798688 for -21474836 rows
2021-06-23 06:52:17,654 INFO [TestClient-13] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -279172868 for -21474836 rows
2021-06-23 06:52:17,654 INFO [TestClient-10] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -214748360 for -21474836 rows
2021-06-23 06:52:17,654 INFO [TestClient-5] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -107374180 for -21474836 rows
2021-06-23 06:52:17,655 INFO [TestClient-14] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -300647704 for -21474836 rows
2021-06-23 06:52:17,654 INFO [TestClient-1] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -21474836 for -21474836 rows

This is because opts.totalRows exceeds the maximum value of int. rowsPerGB is 1024 * 1024 by default, so when opts.size is 2048 or larger, opts.size * rowsPerGB reaches 2147483648 and overflows. The code is in the calculateRowsAndSize API.
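The overflow described above is easy to confirm in isolation (a sketch; `rowsForInt`/`rowsForLong` are illustrative names, not the PE tool's API): with `--size=2048` and `rowsPerGB = 1024 * 1024`, the 32-bit product is exactly 2^31, which wraps to `Integer.MIN_VALUE`; split across 100 clients, that yields the `-21474836 rows` seen in the log. Widening to `long` before multiplying fixes it.

```java
public class PeRowsOverflow {
    static final int ROWS_PER_GB = 1024 * 1024;

    // Buggy shape: a 32-bit multiply wraps once size * ROWS_PER_GB >= 2^31.
    static int rowsForInt(int sizeInGB) {
        return sizeInGB * ROWS_PER_GB;
    }

    // Fixed shape: widen to long before multiplying.
    static long rowsForLong(int sizeInGB) {
        return (long) sizeInGB * ROWS_PER_GB;
    }

    public static void main(String[] args) {
        System.out.println(rowsForInt(2048));       // prints -2147483648 (wrapped)
        System.out.println(rowsForInt(2048) / 100); // prints -21474836, matching the log
        System.out.println(rowsForLong(2048));      // prints 2147483648 (correct)
    }
}
```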
[GitHub] [hbase] Apache-HBase commented on pull request #3442: HBASE-26041 Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
Apache-HBase commented on pull request #3442:
URL: https://github.com/apache/hbase/pull/3442#issuecomment-871078621

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 28s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 57s | master passed |
| +1 :green_heart: | compile | 0m 51s | master passed |
| +1 :green_heart: | checkstyle | 0m 26s | master passed |
| +1 :green_heart: | spotbugs | 0m 47s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 38s | the patch passed |
| +1 :green_heart: | compile | 0m 49s | the patch passed |
| +1 :green_heart: | javac | 0m 49s | the patch passed |
| -0 :warning: | checkstyle | 0m 24s | hbase-common: The patch generated 6 new + 8 unchanged - 0 fixed = 14 total (was 8) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 18m 27s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | spotbugs | 0m 53s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | | 38m 39s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3442 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 43fa7ca0ea06 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 51893b9ba3 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt |
| Max. process+thread count | 95 (vs. ulimit of 3) |
| modules | C: hbase-common U: hbase-common |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HBASE-26042) WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[ https://issues.apache.org/jira/browse/HBASE-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-26042: -- Summary: WAL lockup on 'sync failed' org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer (was: WAL lockup on 'sync failed') > WAL lockup on 'sync failed' > org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer > > > Key: HBASE-26042 > URL: https://issues.apache.org/jira/browse/HBASE-26042 > Project: HBase > Issue Type: Bug >Affects Versions: 2.3.5 >Reporter: Michael Stack >Priority: Major > > Making note of issue seen in production cluster. > Node had been struggling under load for a few days with slow syncs up to 10 > seconds, a few STUCK MVCCs from which it recovered and some java pauses up to > three seconds in length. > Then the below happened: > {code:java} > 2021-06-27 13:41:27,604 WARN [AsyncFSWAL-0-hdfs://:8020/hbase] > wal.AsyncFSWAL: sync > failedorg.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: > readAddress(..) failed: Connection reset by peer {code} > ... and WAL turned dead in the water. Scanners start expiring. RPC prints > text versions of requests complaining requestsTooSlow. Then we start to see > these: > {code:java} > org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync > result after 30 ms for txid=552128301, WAL system stuck? {code} > Whats supposed to happen when other side goes away like this is that we will > roll the WAL – go set up a new one. You can see it happening if you run > {code:java} > mvn test > -Dtest=org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL#testBrokenWriter > {code} > I tried hacking the test to repro the above hang by throwing same exception > in above test (on linux because need epoll to repro) but all just worked. 
> Thread dumps of the hungup WAL subsystem are a little odd. The log roller is > stuck w/o timeout trying to write a long on the WAL header: > > {code:java} > Thread 9464: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, > line=175 (Compiled frame) > - java.util.concurrent.CompletableFuture$Signaller.block() @bci=19, > line=1707 (Compiled frame) > - > java.util.concurrent.ForkJoinPool.managedBlock(java.util.concurrent.ForkJoinPool$ManagedBlocker) > @bci=119, line=3323 (Compiled frame) > - java.util.concurrent.CompletableFuture.waitingGet(boolean) @bci=115, > line=1742 (Compiled frame) > - java.util.concurrent.CompletableFuture.get() @bci=11, line=1908 (Compiled > frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(java.util.function.Consumer) > @bci=16, line=189 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(byte[], > org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALHeader) > @bci=9, line=202 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(org.apache.hadoop.fs.FileSystem, > org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration, boolean, > long) @bci=107, line=170 (Compiled frame) > - > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(org.apache.hadoop.conf.Configuration, > org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, boolean, long, > org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup, java.lang.Class) > @bci=61, line=113 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) > @bci=22, line=651 (Compiled frame) > - > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) > @bci=2, line=128 (Compiled frame) > - 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(boolean) > @bci=101, line=797 (Compiled frame) > - org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(long) > @bci=18, line=263 (Compiled frame) > - org.apache.hadoop.hbase.wal.AbstractWALRoller.run() @bci=198, line=179 > (Compiled frame) {code} > > Other threads are BLOCKED trying to append the WAL w/ flush markers etc. > unable to add the ringbuffer: > > {code:java} > Thread 9465: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 > (Compiled frame) > -
[jira] [Created] (HBASE-26042) WAL lockup on 'sync failed'
Michael Stack created HBASE-26042:
-------------------------------------

Summary: WAL lockup on 'sync failed'
Key: HBASE-26042
URL: https://issues.apache.org/jira/browse/HBASE-26042
Project: HBase
Issue Type: Bug
Affects Versions: 2.3.5
Reporter: Michael Stack

Making note of an issue seen in a production cluster.

The node had been struggling under load for a few days, with slow syncs up to 10 seconds, a few STUCK MVCCs from which it recovered, and some Java pauses up to three seconds in length. Then the below happened:

{code:java}
2021-06-27 13:41:27,604 WARN [AsyncFSWAL-0-hdfs://:8020/hbase] wal.AsyncFSWAL: sync failed
org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
{code}

... and the WAL turned dead in the water. Scanners start expiring. RPC prints text versions of requests, complaining requestsTooSlow. Then we start to see these:

{code:java}
org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 30 ms for txid=552128301, WAL system stuck?
{code}

What's supposed to happen when the other side goes away like this is that we will roll the WAL – go set up a new one. You can see it happening if you run:

{code:java}
mvn test -Dtest=org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL#testBrokenWriter
{code}

I tried hacking the test to repro the above hang by throwing the same exception in the above test (on Linux, because epoll is needed to repro), but it all just worked. Thread dumps of the hung-up WAL subsystem are a little odd.
The log roller is stuck w/o timeout trying to write a long on the WAL header: {code:java} Thread 9464: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=175 (Compiled frame) - java.util.concurrent.CompletableFuture$Signaller.block() @bci=19, line=1707 (Compiled frame) - java.util.concurrent.ForkJoinPool.managedBlock(java.util.concurrent.ForkJoinPool$ManagedBlocker) @bci=119, line=3323 (Compiled frame) - java.util.concurrent.CompletableFuture.waitingGet(boolean) @bci=115, line=1742 (Compiled frame) - java.util.concurrent.CompletableFuture.get() @bci=11, line=1908 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(java.util.function.Consumer) @bci=16, line=189 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(byte[], org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALHeader) @bci=9, line=202 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration, boolean, long) @bci=107, line=170 (Compiled frame) - org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, boolean, long, org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup, java.lang.Class) @bci=61, line=113 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) @bci=22, line=651 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(org.apache.hadoop.fs.Path) @bci=2, line=128 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(boolean) @bci=101, line=797 (Compiled frame) - 
org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(long) @bci=18, line=263 (Compiled frame) - org.apache.hadoop.hbase.wal.AbstractWALRoller.run() @bci=198, line=179 (Compiled frame) {code} Other threads are BLOCKED trying to append to the WAL with flush markers etc., unable to add to the ring buffer: {code:java} Thread 9465: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 (Compiled frame) - com.lmax.disruptor.MultiProducerSequencer.next(int) @bci=82, line=136 (Compiled frame) - com.lmax.disruptor.MultiProducerSequencer.next() @bci=2, line=105 (Interpreted frame) - com.lmax.disruptor.RingBuffer.next() @bci=4, line=263 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$1(org.apache.commons.lang3.mutable.MutableLong, com.lmax.disruptor.RingBuffer) @bci=2, line=1031 (Compiled frame) - org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$270.run() @bci=8 (Compiled frame) - org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(java.lang.Runnable) @bci=36, line=140 (Interpreted frame) -
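The roller in the first dump above parks inside CompletableFuture.get(), which waits with no timeout; if the sync future is never completed (the failed channel is already gone), the roll blocks forever. A minimal, non-HBase sketch of the difference between an unbounded and a bounded wait:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class UnboundedGetDemo {
    public static void main(String[] args) throws Exception {
        // A future that is never completed, like the header-write future when
        // the underlying channel has already died.
        CompletableFuture<Long> writeDone = new CompletableFuture<>();
        // writeDone.get() here would park this thread indefinitely, as in the
        // stack trace above. A bounded wait surfaces the hang instead:
        try {
            writeDone.get(100, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("write did not complete in time");
        }
    }
}
```

This is only an illustration of the blocking pattern, not a proposed fix; the real code path is AsyncProtobufLogWriter.write blocking on the header write during a roll.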
[GitHub] [hbase] Apache-HBase commented on pull request #3442: HBASE-26041 Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
Apache-HBase commented on pull request #3442: URL: https://github.com/apache/hbase/pull/3442#issuecomment-871077131 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 6s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 4s | master passed | | +1 :green_heart: | compile | 0m 27s | master passed | | +1 :green_heart: | shadedjars | 9m 9s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | shadedjars | 9m 2s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 12s | hbase-common in the patch passed. 
| | | | 34m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3442 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 991222113ada 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 51893b9ba3 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/testReport/ | | Max. process+thread count | 212 (vs. ulimit of 3) | | modules | C: hbase-common U: hbase-common | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3442: HBASE-26041 Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
Apache-HBase commented on pull request #3442: URL: https://github.com/apache/hbase/pull/3442#issuecomment-871076440 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 18s | master passed | | +1 :green_heart: | compile | 0m 23s | master passed | | +1 :green_heart: | shadedjars | 8m 59s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | shadedjars | 8m 58s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 52s | hbase-common in the patch passed. 
| | | | 32m 3s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3442 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e2dd2ae2bb6a 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 51893b9ba3 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/testReport/ | | Max. process+thread count | 258 (vs. ulimit of 3) | | modules | C: hbase-common U: hbase-common | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3442/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-26029) It is not reliable to use nodeDeleted event to track region server's death
[ https://issues.apache.org/jira/browse/HBASE-26029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26029. --- Fix Version/s: 2.5.0 3.0.0-alpha-1 Hadoop Flags: Reviewed Release Note: Introduce a new step in ServerCrashProcedure to move the replication queues of the dead region server to other live region servers, as this is the only reliable way to get the death event of a region server. The old ReplicationTracker-related code has all been purged as it is no longer used. Resolution: Fixed Pushed to master and branch-2. Thanks all for chiming in and reviewing. > It is not reliable to use nodeDeleted event to track region server's death > -- > > Key: HBASE-26029 > URL: https://issues.apache.org/jira/browse/HBASE-26029 > Project: HBase > Issue Type: Bug > Components: Replication, Zookeeper >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.5.0 > > > When implementing HBASE-26011, [~sunxin] pointed out an interesting scenario, > where a region server goes up and down between two sync requests, so we cannot > learn of the death of the region server. > https://github.com/apache/hbase/pull/3405#discussion_r656720923 > This is a valid point, and when thinking of a solution, I noticed that the > current zk implementation has the same problem. Notice that a watcher on zk > can only be triggered once, so after zk triggers the watcher, and before you > set a new watcher, it is possible that a region server goes up and down, and > you will miss the nodeDeleted event for this region server. > I think the general approach here, which works for both the master-based > and zk-based replication trackers, is that we should not rely on the tracker > to tell you which region server is dead. 
Instead, we just provide the list of > live region servers, and the upper layer should compare this list with the > expected list (for replication, the list can be obtained by listing > replicators) to detect the dead region servers.
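The comparison described above amounts to a set difference. A toy sketch (the server names and the detectDead helper are hypothetical; in replication the expected list would come from listing replicators):

```java
import java.util.Set;
import java.util.TreeSet;

public class DeadServerDetection {
    // Dead servers = servers we expect to be alive, minus servers reported live.
    static Set<String> detectDead(Set<String> expected, Set<String> live) {
        Set<String> dead = new TreeSet<>(expected);
        dead.removeAll(live);
        return dead;
    }

    public static void main(String[] args) {
        Set<String> expected = Set.of("rs1,16020", "rs2,16020", "rs3,16020");
        Set<String> live = Set.of("rs1,16020", "rs3,16020");
        // rs2,16020 is in the expected list but not the live list, so it is
        // treated as dead even if its individual nodeDeleted event was missed.
        System.out.println(detectDead(expected, live));
    }
}
```

The point of the design is that the diff is stateless: it does not matter how many times a server bounced between two syncs, only that it is absent from the current live list.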
[jira] [Resolved] (HBASE-26011) Introduce a new API to sync the live region server list more effectively
[ https://issues.apache.org/jira/browse/HBASE-26011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26011. --- Resolution: Invalid > Introduce a new API to sync the live region server list more effectively > > > Key: HBASE-26011 > URL: https://issues.apache.org/jira/browse/HBASE-26011 > Project: HBase > Issue Type: Sub-task > Components: master, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > In HBASE-25976, we introduced a master-based replication tracker so as not to > depend on zk, but the problem is that we always fetch the whole region > server list, from every region server, which will become a performance issue > when the cluster has lots of region servers. > So we need to introduce a new API to sync the live region server list. As the > region server list will not change much, we should have a way to > detect this and avoid passing the full list.
[GitHub] [hbase] Apache9 closed pull request #3405: HBASE-26011 Introduce a new API to sync the live region server list m…
Apache9 closed pull request #3405: URL: https://github.com/apache/hbase/pull/3405
[GitHub] [hbase] Apache-HBase commented on pull request #3437: HBASE-26037 Implement namespace and table level access control for thrift & thrift2
Apache-HBase commented on pull request #3437: URL: https://github.com/apache/hbase/pull/3437#issuecomment-871072528 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 23s | master passed | | +1 :green_heart: | compile | 1m 0s | master passed | | +1 :green_heart: | checkstyle | 0m 46s | master passed | | +1 :green_heart: | spotbugs | 1m 30s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 7s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed | | +1 :green_heart: | javac | 1m 1s | the patch passed | | -0 :warning: | checkstyle | 0m 43s | hbase-thrift: The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 21m 22s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 1m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. 
| | | | 47m 9s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3437/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3437 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 71ab0509933c 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 51893b9ba3 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3437/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-thrift.txt | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-thrift U: hbase-thrift | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3437/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bsglz merged pull request #3420: HBASE-26028 The view as json page shows exception when using TinyLfuB…
bsglz merged pull request #3420: URL: https://github.com/apache/hbase/pull/3420
[GitHub] [hbase] jojochuang opened a new pull request #3442: HBASE-26041 Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
jojochuang opened a new pull request #3442: URL: https://github.com/apache/hbase/pull/3442 HBASE-13710 copied the ReflectionUtils utility class from Hadoop, so we no longer need to use reflection to access Hadoop's version.
[jira] [Updated] (HBASE-26041) Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()
[ https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-26041: Summary: Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo() (was: Replace PrintThreadInfoLazyHolder's reflection usage) > Replace PrintThreadInfoHelper with HBase's own > ReflectionUtils.printThreadInfo() > > > Key: HBASE-26041 > URL: https://issues.apache.org/jira/browse/HBASE-26041 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > PrintThreadInfoLazyHolder uses reflection to access Hadoop's > ReflectionUtils.printThreadInfo(). Replace it with HBase's > ReflectionUtils.printThreadInfo(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26041) Replace PrintThreadInfoLazyHolder's reflection usage
[ https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-26041: --- Assignee: Wei-Chiu Chuang > Replace PrintThreadInfoLazyHolder's reflection usage > > > Key: HBASE-26041 > URL: https://issues.apache.org/jira/browse/HBASE-26041 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > PrintThreadInfoLazyHolder uses reflection to access Hadoop's > ReflectionUtils.printThreadInfo(). Replace it with HBase's > ReflectionUtils.printThreadInfo(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-26041) Replace PrintThreadInfoLazyHolder's reflection usage
Wei-Chiu Chuang created HBASE-26041: --- Summary: Replace PrintThreadInfoLazyHolder's reflection usage Key: HBASE-26041 URL: https://issues.apache.org/jira/browse/HBASE-26041 Project: HBase Issue Type: Sub-task Reporter: Wei-Chiu Chuang PrintThreadInfoLazyHolder uses reflection to access Hadoop's ReflectionUtils.printThreadInfo(). Replace it with HBase's ReflectionUtils.printThreadInfo(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26040) Replace reflections that are redundant
[ https://issues.apache.org/jira/browse/HBASE-26040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-26040: --- Assignee: Wei-Chiu Chuang > Replace reflections that are redundant > -- > > Key: HBASE-26040 > URL: https://issues.apache.org/jira/browse/HBASE-26040 > Project: HBase > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > A number of reflections were used to access (back in the time) new Hadoop > APIs that were only available in newer Hadoop versions. > Some of them are no longer needed with the default Hadoop dependency 3.1.2, > so they can be removed to avoid the brittle code. Also, makes it possible to > determine compile time dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005)
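The pattern HBASE-26040 removes can be illustrated with a toy example (the reflected method here is a stand-in, not an actual Hadoop API): reflection defers the method lookup to runtime so the code still compiles against versions lacking the API, at the cost of brittleness; once the minimum Hadoop version guarantees the API, the direct call is checked at compile time.

```java
import java.lang.reflect.Method;

public class ReflectionVsDirect {
    // Old pattern: resolve the method at runtime. Compiles even if the
    // target method does not exist, but fails at runtime and hides the
    // dependency from the compiler.
    static Object viaReflection(String s) throws Exception {
        Method m = String.class.getMethod("toUpperCase");
        return m.invoke(s);
    }

    // With a fixed minimum dependency, the direct call is simpler, faster,
    // and visible as a compile-time dependency.
    static String direct(String s) {
        return s.toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(viaReflection("hbase"));
        System.out.println(direct("hbase"));
    }
}
```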
[jira] [Updated] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization
[ https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharath Vissapragada updated HBASE-26021: - Component/s: master > HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization > -- > > Key: HBASE-26021 > URL: https://issues.apache.org/jira/browse/HBASE-26021 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.7.0, 2.4.4 >Reporter: Viraj Jasani >Assignee: Bharath Vissapragada >Priority: Major > Fix For: 1.7.1 > > Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot > 2021-06-22 at 12.54.30 PM.png > > > As of today, if we bring up an HBase cluster using branch-1 and upgrade to > branch-2.4, we face an issue while parsing the namespace from the HDFS fileinfo. > Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil > seems to produce "*\n hbase meta*" and "*\n hbase namespace*" > {code:java} > 2021-06-22 00:05:56,611 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > regionserver.RSRpcServices: Open hbase:meta,,1.1588230740 > 2021-06-22 00:05:56,648 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > regionserver.RSRpcServices: Open > hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a. > 2021-06-22 00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. 
Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R namespace > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > 2021-06-22 
00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R^Dmeta > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937) > at >
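The garbled names in the log above (e.g. {{^Ehbase^R^Dmeta}} and "\n hbase meta") look like raw protobuf wire bytes being read back as text: a length-delimited field's tag byte for field 1 is 0x0A, which is '\n', and that is the "Illegal character at 0" the namespace validator rejects. A minimal sketch of the encoding, assuming field 1 = namespace and field 2 = qualifier as in a TableName-style message:

```java
public class WireBytes {
    // Length-delimited protobuf encoding of a single short string field:
    // tag byte ((fieldNumber << 3) | wireType 2), length byte, then payload.
    static byte[] encode(int field, String s) {
        byte[] p = s.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] out = new byte[2 + p.length];
        out[0] = (byte) ((field << 3) | 2);
        out[1] = (byte) p.length;
        System.arraycopy(p, 0, out, 2, p.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] ns = encode(1, "hbase"); // 0x0A 0x05 'h' 'b' 'a' 's' 'e'
        byte[] q  = encode(2, "meta");  // 0x12 0x04 'm' 'e' 't' 'a'
        // Interpreted as text, the field-1 tag byte is '\n' (0x0A) and the
        // field-2 tag byte is 0x12 (shown as ^R in the stack trace above).
        System.out.println((int) ns[0]);
        System.out.println((int) q[0]);
    }
}
```

This is only an illustration of why the bytes look the way they do when a protobuf-serialized value is fed to a plain-string parser; it is not the HBase serialization code itself.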
[jira] [Updated] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization
[ https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharath Vissapragada updated HBASE-26021: - Labels: compatibility incompatibility serialization upgrade (was: ) > HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization > -- > > Key: HBASE-26021 > URL: https://issues.apache.org/jira/browse/HBASE-26021 > Project: HBase > Issue Type: Bug > Components: master >Affects Versions: 1.7.0, 2.4.4 >Reporter: Viraj Jasani >Assignee: Bharath Vissapragada >Priority: Major > Labels: compatibility, incompatibility, serialization, upgrade > Fix For: 1.7.1 > > Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot > 2021-06-22 at 12.54.30 PM.png > > > As of today, if we bring up HBase cluster using branch-1 and upgrade to > branch-2.4, we are facing issue while parsing namespace from HDFS fileinfo. > Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil > seems to be producing "*\n hbase meta*" and "*\n hbase namespace*" > {code:java} > 2021-06-22 00:05:56,611 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > regionserver.RSRpcServices: Open hbase:meta,,1.1588230740 > 2021-06-22 00:05:56,648 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > regionserver.RSRpcServices: Open > hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a. > 2021-06-22 00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. 
Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R namespace > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > 2021-06-22 
00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R^Dmeta > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292) > at >
[jira] [Resolved] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization
[ https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharath Vissapragada resolved HBASE-26021. -- Fix Version/s: 1.7.1 Resolution: Fixed Merged the change, thanks for the reviews. Spoke with [~vjasani] offline, he mentioned he will confirm whether this patch works for him in his local test setup. I volunteer to do a 1.7.0.1 if the patch works for Viraj. (unless there are any objections). > HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization > -- > > Key: HBASE-26021 > URL: https://issues.apache.org/jira/browse/HBASE-26021 > Project: HBase > Issue Type: Bug >Affects Versions: 1.7.0, 2.4.4 >Reporter: Viraj Jasani >Assignee: Bharath Vissapragada >Priority: Major > Fix For: 1.7.1 > > Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot > 2021-06-22 at 12.54.30 PM.png > > > As of today, if we bring up HBase cluster using branch-1 and upgrade to > branch-2.4, we are facing issue while parsing namespace from HDFS fileinfo. > Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil > seems to be producing "*\n hbase meta*" and "*\n hbase namespace*" > {code:java} > 2021-06-22 00:05:56,611 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > regionserver.RSRpcServices: Open hbase:meta,,1.1588230740 > 2021-06-22 00:05:56,648 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > regionserver.RSRpcServices: Open > hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a. > 2021-06-22 00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. 
Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R namespace > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > 2021-06-22 
00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R^Dmeta > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at >
[GitHub] [hbase] bharathv merged pull request #3435: HBASE-26021: Undo the incompatible serialization change in HBASE-7767
bharathv merged pull request #3435: URL: https://github.com/apache/hbase/pull/3435
[GitHub] [hbase] Apache9 commented on pull request #3436: HBASE-26036 ByteBuff released too early and dirty data for checkAndMu…
Apache9 commented on pull request #3436: URL: https://github.com/apache/hbase/pull/3436#issuecomment-871045818 Sorry, I do not fully understand the problem here. Skimming the code, the change is to use RegionScanner directly instead of the get method. Does this mean the get method is broken because it releases the ByteBuff too early? But I think we use this method everywhere in the HRegion class; do we not need to change those callers too? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
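The concern in this thread is a buffer whose reference count drops to zero while returned cells still point into its memory. A toy Python sketch of that class of bug, with a hypothetical RefCountedBuf standing in for HBase's ByteBuff (assumed semantics, not the actual class):

```python
class RefCountedBuf:
    """Toy reference-counted buffer. If it is released (refcount hits
    zero) before a caller copies the cells out, the backing memory can
    be recycled and the caller reads dirty data."""
    def __init__(self, data: bytearray):
        self.data = data
        self.refcnt = 1

    def retain(self):
        self.refcnt += 1
        return self

    def release(self):
        self.refcnt -= 1
        if self.refcnt == 0:
            # Simulate the allocator recycling the memory for another block
            for i in range(len(self.data)):
                self.data[i] = 0

buf = RefCountedBuf(bytearray(b"row1/cf:q/value1"))
cell_view = buf.data   # a "cell" backed by the buffer, not a copy
buf.release()          # released before the result is copied out
print(bytes(cell_view))  # dirty: the backing bytes were recycled
```

The fix pattern is either to copy cells out before releasing, or to retain the buffer until the caller is done; which of these the PR takes is what the reviewer is asking about.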
[jira] [Created] (HBASE-26040) Replace reflections that are redundant
Wei-Chiu Chuang created HBASE-26040: --- Summary: Replace reflections that are redundant Key: HBASE-26040 URL: https://issues.apache.org/jira/browse/HBASE-26040 Project: HBase Issue Type: Improvement Reporter: Wei-Chiu Chuang A number of reflections were used to access Hadoop APIs that, at the time, were only available in newer Hadoop versions. Some of them are no longer needed now that the default Hadoop dependency is 3.1.2, so they can be removed to avoid brittle code. This also makes the compile-time dependency explicit. -- This message was sent by Atlassian Jira (v8.3.4#803005)
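The trade-off this issue describes can be illustrated outside Java: a reflective lookup defers binding errors to runtime, while a direct call surfaces them when the code is loaded (or, in Java, at compile time). A hedged Python analogy, with getattr standing in for java.lang.reflect; the hashlib example is illustrative only:

```python
import hashlib

def digest_reflective(data: bytes) -> str:
    # Resolved by string at runtime, like Method.invoke: a typo or a
    # removed API only fails when this code path actually executes.
    algo = getattr(hashlib, "sha256")
    return algo(data).hexdigest()

def digest_direct(data: bytes) -> str:
    # Direct reference: checked eagerly, which is the point of the
    # cleanup above once the minimum Hadoop version guarantees the API.
    return hashlib.sha256(data).hexdigest()

assert digest_reflective(b"x") == digest_direct(b"x")
```

Once the baseline dependency guarantees an API exists, the reflective indirection buys nothing and hides breakage until runtime.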
[GitHub] [hbase] Apache-HBase commented on pull request #3435: HBASE-26021: Undo the incompatible serialization change in HBASE-7767
Apache-HBase commented on pull request #3435: URL: https://github.com/apache/hbase/pull/3435#issuecomment-871030080 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 53s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 1s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 26 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +0 :ok: | mvndep | 2m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 8m 5s | branch-1 passed | | +1 :green_heart: | compile | 1m 46s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 1m 59s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 6m 6s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 2s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 29s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 1m 40s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 2m 42s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 7m 19s | branch-1 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 51s | the patch passed | | +1 :green_heart: | compile | 1m 42s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | cc | 1m 42s | the patch passed | | +1 :green_heart: | javac | 1m 42s | the patch passed | | +1 :green_heart: | compile | 2m 1s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | cc | 2m 1s | the patch passed | | +1 :green_heart: | javac | 2m 1s | the patch passed | | -1 :x: | checkstyle | 0m 47s | hbase-client: The patch generated 11 new + 447 unchanged - 1 fixed = 458 total (was 448) | | -1 :x: | checkstyle | 2m 24s | hbase-server: The patch generated 46 new + 1843 unchanged - 92 fixed = 1889 total (was 1935) | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedjars | 2m 52s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 29s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | hbaseprotoc | 2m 14s | the patch passed | | +1 :green_heart: | javadoc | 1m 18s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 1m 39s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 7m 50s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 29s | hbase-protocol in the patch passed. | | +1 :green_heart: | unit | 2m 45s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 133m 34s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 13m 56s | hbase-rsgroup in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 52s | The patch does not generate ASF License warnings. | | | | 232m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3435/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3435 | | JIRA Issue | HBASE-26021 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool | | uname | Linux ed0806b1ec48 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3435/out/precommit/personality/provided.sh | | git revision | branch-1 / 395eb0c | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems,
[GitHub] [hbase] Apache9 merged pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
Apache9 merged pull request #3430: URL: https://github.com/apache/hbase/pull/3430 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-26039) TestReplicationKillRS is useless after HBASE-23956
[ https://issues.apache.org/jira/browse/HBASE-26039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26039. --- Fix Version/s: 2.4.5 2.3.6 2.5.0 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.3+. Thanks [~stack] for reviewing. > TestReplicationKillRS is useless after HBASE-23956 > -- > > Key: HBASE-26039 > URL: https://issues.apache.org/jira/browse/HBASE-26039 > Project: HBase > Issue Type: Bug > Components: Replication, test >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > In TestReplicationKillRS, we assume there are at least 2 regionservers but in > HBASE-23956, we set the region server number to 1. > So in fact, we do not kill any region servers which makes the tests useless... -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25960) Build includes unshaded netty .so; clashes w/ downstreamers who would use a different version of netty
[ https://issues.apache.org/jira/browse/HBASE-25960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371746#comment-17371746 ] Michael Stack commented on HBASE-25960: --- I built a snapshot of hbase-2.4.4 with thirdparty 3.5.1 and gave it to my coworker to test. It was clean of errant .sos... and my coworker reported hbase now works in their context. Let me add a check for errant .sos... so they don't sneak back into our build. > Build includes unshaded netty .so; clashes w/ downstreamers who would use a > different version of netty > -- > > Key: HBASE-25960 > URL: https://issues.apache.org/jira/browse/HBASE-25960 > Project: HBase > Issue Type: Bug > Components: build >Reporter: Michael Stack >Priority: Major > > A coworker was trying to use hbase client in a fat application that uses a > different netty version to what hbase uses internally. Their app would fail > to launch because it kept bumping into an incompatible netty .so lib. Here > are the unshaded netty .so's we bundle looking at hbase-2.4.1...: > ./lib/hbase-shaded-netty-3.4.1.jar has: > {code} > META-INF/native/libnetty_transport_native_epoll_aarch_64.so > META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64.so > META-INF/native/libnetty_transport_native_epoll_x86_64.so > {code} > (HBASE-25959 should fix the non-relocation of > libnetty_transport_native_epoll_aarch_64). > ./lib/shaded-clients/hbase-shaded-client-byo-hadoop-2.4.1.1-apple.jar has the > same three .sos as does > ./lib/shaded-clients/hbase-shaded-mapreduce-2.4.1.1-apple.jar > and ./lib/shaded-clients/hbase-shaded-client-2.4.1.1-apple.jar > We even bundle ./lib/netty-all-4.1.17.Final.jar which unsurprisingly has the > netty .sos in it. > Looking at published builds of hbase-thirdparty, I see that these too include > the above trio of .sos... The hbase-shaded-netty includes them in 3.4.1 > https://repo1.maven.org/maven2/org/apache/hbase/thirdparty/hbase-shaded-netty/3.4.1/ > as does 3.5.0. 
> I just tried running a build of hbase-thirdparty and it does NOT include the > extras > META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_aarch_64.so > META-INF/native/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64.so > (it has the fix for aarch included... when I built) > Here is link to the snapshot I made: > https://repository.apache.org/content/repositories/orgapachehbase-1451/org/apache/hbase/thirdparty/hbase-shaded-netty/3.5.1-stack4/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
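The "check for errant .sos" proposed above can be sketched as a jar scan: list META-INF/native entries and flag any .so whose file name was not relocated with the hbase-thirdparty prefix. A hypothetical helper (the function name and the exact prefix convention are assumptions based on the entry names quoted in the issue, not the actual build check):

```python
import io
import zipfile

def errant_native_sos(jar_bytes: bytes,
                      prefix: str = "liborg_apache_hbase_thirdparty_") -> list:
    """Return META-INF/native/*.so entries whose base name lacks the
    relocation prefix, i.e. unshaded natives that would clash with a
    downstream app's own netty."""
    errant = []
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        for name in jar.namelist():
            if name.startswith("META-INF/native/") and name.endswith(".so"):
                if not name.rsplit("/", 1)[1].startswith(prefix):
                    errant.append(name)
    return errant

# Build a fake jar in memory with one relocated and one unrelocated .so
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/native/libnetty_transport_native_epoll_x86_64.so", b"")
    jar.writestr("META-INF/native/"
                 "liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64.so", b"")
print(errant_native_sos(buf.getvalue()))
# -> ['META-INF/native/libnetty_transport_native_epoll_x86_64.so']
```

Wired into a precommit or release step, a non-empty result would fail the build and keep the unshaded natives from sneaking back in.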
[GitHub] [hbase] Apache9 merged pull request #3440: HBASE-26039 TestReplicationKillRS is useless after HBASE-23956
Apache9 merged pull request #3440: URL: https://github.com/apache/hbase/pull/3440 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv edited a comment on pull request #3438: HBASE-22923 min version of RegionServer to move system table regions
bharathv edited a comment on pull request #3438: URL: https://github.com/apache/hbase/pull/3438#issuecomment-870979844 Also, the "localhost" part is suspicious, why does AM look for a localhost address? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv commented on pull request #3438: HBASE-22923 min version of RegionServer to move system table regions
bharathv commented on pull request #3438: URL: https://github.com/apache/hbase/pull/3438#issuecomment-870979844 Also, the "localhost" part is suspicious, why does AM look for a localhost address? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv commented on a change in pull request #3438: HBASE-22923 min version of RegionServer to move system table regions
bharathv commented on a change in pull request #3438: URL: https://github.com/apache/hbase/pull/3438#discussion_r661023484 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -2564,6 +2588,45 @@ public int compare(Pair o1, Pair o2) { return res; } + /** + * Get a list of servers that this region can not assign to. + * For system table, we must assign them to a server with highest version. + * This method is same as {@link #getExcludedServersForSystemTable()} with + * the only difference is we can disable this exclusion using config: + * "hbase.min.version.move.system.tables". + * + * @return List of Excluded servers for System table regions. + */ + private List getExcludedServersForSystemTableUnlessAllowed() { Review comment: Can we club the logic of both the methods and pass an optional boolean for the version check? The two functions look very much alike except for the tail part. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on a change in pull request #3438: HBASE-22923 min version of RegionServer to move system table regions
apurtell commented on a change in pull request #3438: URL: https://github.com/apache/hbase/pull/3438#discussion_r660994432 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java ## @@ -253,6 +253,28 @@ // are persisted in meta with a state store private final RegionStateStore regionStateStore; + /** + * Min version to consider for moving system tables regions to higher + * versioned RS. If RS has higher version than rest of the cluster but that + * version is less than this value, we should not move system table regions + * to that RS. If RS has higher version than rest of the cluster but that + * version is greater than or equal to this value, we should move system + * table regions to that RS. This is optional config and default value is + * empty string ({@link #DEFAULT_MIN_VERSION_MOVE_SYS_TABLES_CONFIG}). + * For instance, if we do not want meta region to be moved to RS with higher + * version until that version is >= 2.0.0, then we can configure + * "hbase.min.version.move.system.tables" as "2.0.0". + * When operator uses this config, it should be used with care, meaning + * we should be confident that even if user table regions come to RS with + * higher version (that rest of cluster), it would not cause any Review comment: The language here is ambiguous. Better to say something like > When the operator uses this configuration option, any version between the current version and the new value of "hbase.min.version.move.system.tables" does not trigger any region movement. It is assumed the configured range of versions do not require special handling. This should also be committed to all branches, not just branch-1, for consistent functionality across all future releasing versions. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
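The gate described in the javadoc under review can be reduced to a version comparison: a higher-versioned RS only receives system table regions if its version also reaches "hbase.min.version.move.system.tables", and an empty value (the default) disables the gate. A hypothetical Python sketch of that decision, assuming simple dotted version strings; the function name and signature are illustrative, not the patch's API:

```python
def should_move_system_regions(rs_version: str,
                               cluster_max: str,
                               min_move_version: str = "") -> bool:
    """Decide whether system table regions should move to an RS that is
    newer than the rest of the cluster, honoring the optional minimum
    version config described above."""
    def parse(v: str) -> tuple:
        return tuple(int(p) for p in v.split("."))
    if parse(rs_version) <= parse(cluster_max):
        return False   # not the highest-versioned server, nothing to do
    if not min_move_version:
        return True    # gate disabled (default): existing behavior
    return parse(rs_version) >= parse(min_move_version)

assert should_move_system_regions("2.0.0", "1.4.13", "2.0.0") is True
assert should_move_system_regions("1.5.0", "1.4.13", "2.0.0") is False  # below the gate
assert should_move_system_regions("1.5.0", "1.4.13") is True            # gate unset
```

This is the ambiguity apurtell's suggested wording resolves: versions between the cluster's current version and the configured minimum trigger no region movement at all.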
[GitHub] [hbase] bharathv commented on pull request #3435: HBASE-26021: Undo the incompatible serialization change in HBASE-7767
bharathv commented on pull request #3435: URL: https://github.com/apache/hbase/pull/3435#issuecomment-870935942 Most checkstyle issues are from the original implementation, too many to even fix. Leaving them as-is. Will wait for one final jenkins run before merging. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3441: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
Apache-HBase commented on pull request #3441: URL: https://github.com/apache/hbase/pull/3441#issuecomment-870910968 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 6m 3s | master passed | | +1 :green_heart: | compile | 2m 11s | master passed | | +1 :green_heart: | shadedjars | 10m 16s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 22s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 37s | the patch passed | | +1 :green_heart: | compile | 2m 14s | the patch passed | | +1 :green_heart: | javac | 2m 14s | the patch passed | | +1 :green_heart: | shadedjars | 10m 34s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 17s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 0s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 193m 37s | hbase-server in the patch passed. 
| | | | 239m 31s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3441 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 01a14c6f8c14 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/testReport/ | | Max. process+thread count | 2418 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
Apache-HBase commented on pull request #3417: URL: https://github.com/apache/hbase/pull/3417#issuecomment-870875203 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 43s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.4 Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 54s | branch-2.4 passed | | +1 :green_heart: | compile | 1m 26s | branch-2.4 passed | | +1 :green_heart: | shadedjars | 6m 34s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 0s | branch-2.4 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 43s | the patch passed | | +1 :green_heart: | compile | 1m 27s | the patch passed | | +1 :green_heart: | javac | 1m 27s | the patch passed | | +1 :green_heart: | shadedjars | 6m 35s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 0s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 32s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 211m 25s | hbase-server in the patch passed. 
| | | | 249m 1s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3417 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux fd0b84b4ecee 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.4 / 270b3facce | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/testReport/ | | Max. process+thread count | 2776 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
Apache-HBase commented on pull request #3417: URL: https://github.com/apache/hbase/pull/3417#issuecomment-870867614 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 13s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.4 Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 38s | branch-2.4 passed | | +1 :green_heart: | compile | 1m 45s | branch-2.4 passed | | +1 :green_heart: | shadedjars | 7m 30s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 11s | branch-2.4 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | compile | 1m 44s | the patch passed | | +1 :green_heart: | javac | 1m 44s | the patch passed | | +1 :green_heart: | shadedjars | 7m 25s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 38s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 199m 12s | hbase-server in the patch passed. 
| | | | 235m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3417 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 1def96eae5dd 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.4 / 270b3facce | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/testReport/ | | Max. process+thread count | 2448 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3441: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
Apache-HBase commented on pull request #3441: URL: https://github.com/apache/hbase/pull/3441#issuecomment-870863841 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 50s | master passed | | +1 :green_heart: | compile | 1m 27s | master passed | | +1 :green_heart: | shadedjars | 8m 13s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 3s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | +1 :green_heart: | compile | 1m 28s | the patch passed | | +1 :green_heart: | javac | 1m 28s | the patch passed | | +1 :green_heart: | shadedjars | 8m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 17s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 129m 59s | hbase-server in the patch passed. 
| | | | 163m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3441 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 832eccdbf1e3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/testReport/ | | Max. process+thread count | 4123 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
Apache-HBase commented on pull request #3430: URL: https://github.com/apache/hbase/pull/3430#issuecomment-870860962 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 6s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 15s | master passed | | +1 :green_heart: | compile | 2m 10s | master passed | | +1 :green_heart: | shadedjars | 8m 58s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 5s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 4s | the patch passed | | +1 :green_heart: | compile | 2m 8s | the patch passed | | +1 :green_heart: | javac | 2m 8s | the patch passed | | +1 :green_heart: | shadedjars | 9m 1s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 48s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 0m 39s | hbase-replication in the patch passed. | | +1 :green_heart: | unit | 207m 10s | hbase-server in the patch passed. 
| | | | 245m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3430 | | Optional Tests | unit javac javadoc shadedjars compile | | uname | Linux 5ccf9e2af5bf 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/testReport/ | | Max. process+thread count | 2327 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3435: HBASE-26021: Undo the incompatible serialization change in HBASE-7767
Apache-HBase commented on pull request #3435: URL: https://github.com/apache/hbase/pull/3435#issuecomment-870857224 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 25 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +0 :ok: | mvndep | 2m 28s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 8m 4s | branch-1 passed | | +1 :green_heart: | compile | 1m 46s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 2m 1s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 6m 13s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 14s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 32s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 1m 37s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 2m 54s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 7m 45s | branch-1 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 58s | the patch passed | | +1 :green_heart: | compile | 1m 48s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | cc | 1m 48s | the patch passed | | +1 :green_heart: | javac | 1m 48s | the patch passed | | +1 :green_heart: | compile | 2m 1s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | cc | 2m 1s | the patch passed | | +1 :green_heart: | javac | 2m 1s | the patch passed | | -1 :x: | checkstyle | 0m 47s | hbase-client: The patch generated 11 new + 447 unchanged - 1 fixed = 458 total (was 448) | | -1 :x: | checkstyle | 2m 15s | hbase-server: The patch generated 46 new + 1841 unchanged - 92 fixed = 1887 total (was 1933) | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedjars | 2m 58s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 50s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | hbaseprotoc | 2m 20s | the patch passed | | +1 :green_heart: | javadoc | 1m 20s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 1m 43s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 8m 6s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 28s | hbase-protocol in the patch passed. | | +1 :green_heart: | unit | 2m 43s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 140m 3s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 14m 13s | hbase-rsgroup in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 53s | The patch does not generate ASF License warnings. | | | | 234m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3435/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3435 | | JIRA Issue | HBASE-26021 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool | | uname | Linux 0dd0e2178cdc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3435/out/precommit/personality/provided.sh | | git revision | branch-1 / 395eb0c | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems,
[GitHub] [hbase] saintstack commented on a change in pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
saintstack commented on a change in pull request #3417: URL: https://github.com/apache/hbase/pull/3417#discussion_r660901446 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -953,9 +956,26 @@ private void finishActiveMasterInitialization(MonitoredTask status) if (!waitForMetaOnline()) { return; } +TableDescriptor metaDescriptor = tableDescriptors.get( +TableName.META_TABLE_NAME); +final ColumnFamilyDescriptor tableFamilyDesc = metaDescriptor +.getColumnFamily(HConstants.TABLE_FAMILY); +final ColumnFamilyDescriptor replBarrierFamilyDesc = +metaDescriptor.getColumnFamily(HConstants.REPLICATION_BARRIER_FAMILY); + this.assignmentManager.joinCluster(); // The below depends on hbase:meta being online. -this.tableStateManager.start(); +try { + this.tableStateManager.start(); +} catch (NoSuchColumnFamilyException e) { + if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { Review comment: I've run various versions of hbase-2.0-hbase2.2.x upgrades successfully... It was when I tried to go from an hbase1.2 version straight to hbase2.3 that I ran into this issue. Otherwise, I agree w/ your thinking around repl_barrier and table CF. Is there a case where one might be in place but not the other -- I don't know -- and does the code do the right thing? (I've not checked.)
[GitHub] [hbase] Apache-HBase commented on pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
Apache-HBase commented on pull request #3430: URL: https://github.com/apache/hbase/pull/3430#issuecomment-870849409 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 5s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 48s | master passed | | +1 :green_heart: | compile | 2m 44s | master passed | | +1 :green_heart: | shadedjars | 9m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 10s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 44s | the patch passed | | +1 :green_heart: | compile | 2m 41s | the patch passed | | +1 :green_heart: | javac | 2m 41s | the patch passed | | +1 :green_heart: | shadedjars | 9m 3s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 4s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 0m 40s | hbase-replication in the patch passed. | | +1 :green_heart: | unit | 183m 59s | hbase-server in the patch passed. 
| | | | 225m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3430 | | Optional Tests | unit javac javadoc shadedjars compile | | uname | Linux eed67d3ddb81 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/testReport/ | | Max. process+thread count | 2449 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-26031) Validate nightly builds run on new ci workers hbase11-hbase15
[ https://issues.apache.org/jira/browse/HBASE-26031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371570#comment-17371570 ] Michael Stack commented on HBASE-26031: --- Thanks for the help [~busbey] > Validate nightly builds run on new ci workers hbase11-hbase15 > - > > Key: HBASE-26031 > URL: https://issues.apache.org/jira/browse/HBASE-26031 > Project: HBase > Issue Type: Task > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: image-2021-06-24-16-14-03-721.png > > > Per slack, asf infra has finished adding in nodes hbase10-hbase15 to > ci-hadoop. > make sure they can run nightly. > # Set labels for all these node to "hbase-staging" > # Push a feature branch off of current HEAD that updates the agent labels to > use "hbase-staging" > # trigger a bunch of runs. make sure *something* runs on each of the nodes > # Set labels for the nodes to "hbase" > # delete feature branch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25990) Add donated buildbots for jenkins
[ https://issues.apache.org/jira/browse/HBASE-25990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25990. --- Resolution: Incomplete Closing. I was unable to hook up billing. Might be back but this present effort is being aborted. > Add donated buildbots for jenkins > - > > Key: HBASE-25990 > URL: https://issues.apache.org/jira/browse/HBASE-25990 > Project: HBase > Issue Type: Task > Components: build >Reporter: Michael Stack >Priority: Major > Attachments: Screen Shot 2021-06-22 at 1.43.12 PM.png > > > This issue is for keeping notes on how to add a donated buildbot to our > apache build. > My employer donated budget (I badly under-estimated cost but whatever...). > This issue is about adding 5 GCP nodes. > There is this page up on apache on donating machines for build > https://infra.apache.org/hosting-external-agent.html It got me some of the > ways... at least as far as the bit about mailing root@a.o(nada). > At [~zhangduo]'s encouragement -- he has been this route already adding in > the xiaomi donation -- I filed a JIRA after deploying a machine on GCP, > INFRA-21973. > I then reached out on slack and the gentleman Gavin MacDonald picked up the > task. > He told me to run this script on all hosts after making edits (comment out line > #39 where we set hostname -- doesn't work): > https://github.com/apache/cassandra-builds/blob/trunk/jenkins-dsl/agent-install.sh > (For more context on the above script and as a good backgrounder, see the > nice C* page on how to do this setup: > https://github.com/apache/cassandra-builds/blob/trunk/ASF-jenkins-agents.md) > After doing the above, I had to do a visudo on each host to add a line for an > infra account to allow passwordless access.
[GitHub] [hbase] virajjasani commented on a change in pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
virajjasani commented on a change in pull request #3417: URL: https://github.com/apache/hbase/pull/3417#discussion_r660849609 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java ## @@ -139,6 +140,31 @@ public static TableDescriptor tryUpdateAndGetMetaTableDescriptor(Configuration c } } + public static ColumnFamilyDescriptor getTableFamilyDesc( Review comment: Let me fix the name. Regarding which class should keep it, maybe it is fine here because it is primarily used by FSTD only; just for the upgrade case, the master needs it. But I am open to moving it to another util class if there is a better suggestion.
[GitHub] [hbase] virajjasani commented on a change in pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
virajjasani commented on a change in pull request #3417: URL: https://github.com/apache/hbase/pull/3417#discussion_r660847616 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1047,6 +1077,29 @@ private void finishActiveMasterInitialization(MonitoredTask status) // Set master as 'initialized'. setInitialized(true); +if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { Review comment: I was thinking that too, but for an HBase 1 to 2.3+ upgrade both will be null for sure, so with this check we are targeting that specific upgrade case.
[GitHub] [hbase] virajjasani commented on a change in pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
virajjasani commented on a change in pull request #3417: URL: https://github.com/apache/hbase/pull/3417#discussion_r660846981 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -953,9 +956,26 @@ private void finishActiveMasterInitialization(MonitoredTask status) if (!waitForMetaOnline()) { return; } +TableDescriptor metaDescriptor = tableDescriptors.get( +TableName.META_TABLE_NAME); +final ColumnFamilyDescriptor tableFamilyDesc = metaDescriptor +.getColumnFamily(HConstants.TABLE_FAMILY); +final ColumnFamilyDescriptor replBarrierFamilyDesc = +metaDescriptor.getColumnFamily(HConstants.REPLICATION_BARRIER_FAMILY); + this.assignmentManager.joinCluster(); // The below depends on hbase:meta being online. -this.tableStateManager.start(); +try { + this.tableStateManager.start(); +} catch (NoSuchColumnFamilyException e) { + if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { Review comment: This PR is for 2.3+ releases, meaning that if we come from HBase 1 to 2.3+, the transition should be seamless. Hence, for this upgrade case, both table and repl_barrier will be missing. > When its 2.0.x to 2.3.x upgrade, the repBarrier will get auto created? I have not tried this one. If the table CF is handled, I guess repl_barrier would have been handled too. As per @saintstack's testing, the cluster came from 1.2 to a release earlier than 2.3 and from that release went on to 2.3, hence I am assuming an HBase 2 (< 2.3) to 2.3 upgrade should have been smooth. @saintstack, thoughts?
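The upgrade condition discussed in this thread (both meta families absent exactly when coming from HBase 1.x) can be sketched outside HBase with a self-contained toy. The class, method, and family names below are illustrative stand-ins, not the real HBase API:

```java
import java.util.Set;

// Toy sketch of the guard under discussion (illustrative names only, not
// the real HBase API): when hbase:meta was written by HBase 1.x, both the
// "table" and "rep_barrier" families are absent, so a failure to read
// table state is treated as the expected upgrade condition.
public class MetaUpgradeGuardSketch {

  // HBase 1.x meta lacks both families that 2.x table-state tracking needs.
  static boolean isHBase1Meta(Set<String> metaFamilies) {
    return !metaFamilies.contains("table") && !metaFamilies.contains("rep_barrier");
  }

  static String startTableStateManager(Set<String> metaFamilies) {
    try {
      if (!metaFamilies.contains("table")) {
        // Stand-in for the NoSuchColumnFamilyException thrown when the
        // table-state family is missing from meta.
        throw new IllegalStateException("NoSuchColumnFamily: table");
      }
      return "started";
    } catch (IllegalStateException e) {
      if (isHBase1Meta(metaFamilies)) {
        // Expected when upgrading straight from HBase 1.x: the missing
        // families get added to meta before table state is migrated.
        return "upgrade-path";
      }
      throw e; // meta is genuinely broken; rethrow
    }
  }

  public static void main(String[] args) {
    // Meta from a 2.3+ cluster has both families; 1.x meta has neither.
    System.out.println(startTableStateManager(Set.of("info", "table", "rep_barrier")));
    System.out.println(startTableStateManager(Set.of("info")));
  }
}
```

This also shows why the reviewers' question matters: a meta with only one of the two families missing falls through neither branch cleanly, so the exception is rethrown in this sketch.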
[GitHub] [hbase] Apache-HBase commented on pull request #3441: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
Apache-HBase commented on pull request #3441: URL: https://github.com/apache/hbase/pull/3441#issuecomment-870798309 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 6s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 13s | master passed | | +1 :green_heart: | compile | 4m 13s | master passed | | +1 :green_heart: | checkstyle | 1m 42s | master passed | | +1 :green_heart: | spotbugs | 3m 15s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 4m 17s | the patch passed | | +1 :green_heart: | javac | 4m 17s | the patch passed | | +1 :green_heart: | checkstyle | 1m 41s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 19m 58s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 3m 37s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. 
| | | | 57m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3441 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux fa0675fbb78b 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3441/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-25990) Add donated buildbots for jenkins
[ https://issues.apache.org/jira/browse/HBASE-25990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371542#comment-17371542 ] Sean Busbey commented on HBASE-25990: - also filed INFRA-22057 to have them removed from the ci-hadoop jenkins coordinator.
[GitHub] [hbase] anoopsjohn commented on a change in pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
anoopsjohn commented on a change in pull request #3417: URL: https://github.com/apache/hbase/pull/3417#discussion_r660539542 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java ## @@ -139,6 +140,31 @@ public static TableDescriptor tryUpdateAndGetMetaTableDescriptor(Configuration c } } + public static ColumnFamilyDescriptor getTableFamilyDesc( Review comment: This is a META table CF. Do we have any other util class or so (specific for META) where we can include this? At least the name should make it clear. This is actually table state CF in Meta. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1047,6 +1077,29 @@ private void finishActiveMasterInitialization(MonitoredTask status) // Set master as 'initialized'. setInitialized(true); +if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { Review comment: In case one is not there also, need to create? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -953,9 +956,26 @@ private void finishActiveMasterInitialization(MonitoredTask status) if (!waitForMetaOnline()) { return; } +TableDescriptor metaDescriptor = tableDescriptors.get( +TableName.META_TABLE_NAME); +final ColumnFamilyDescriptor tableFamilyDesc = metaDescriptor +.getColumnFamily(HConstants.TABLE_FAMILY); +final ColumnFamilyDescriptor replBarrierFamilyDesc = +metaDescriptor.getColumnFamily(HConstants.REPLICATION_BARRIER_FAMILY); + this.assignmentManager.joinCluster(); // The below depends on hbase:meta being online. -this.tableStateManager.start(); +try { + this.tableStateManager.start(); +} catch (NoSuchColumnFamilyException e) { + if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { Review comment: Seems 2.0.x had extra table state CF only. 2.1.x Had this replication barrier. When table state CF is missing it causes the startup issue right? When its 2.0.x to 2.3.x upgrade, the repBarrier will get auto created? 
[GitHub] [hbase] virajjasani opened a new pull request #3441: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
virajjasani opened a new pull request #3441: URL: https://github.com/apache/hbase/pull/3441 Signed-off-by: Michael Stack Forward-port PR: #3417
[GitHub] [hbase] bharathv commented on pull request #3435: HBASE-26021: Undo the incompatible serialization change in HBASE-7767
bharathv commented on pull request #3435: URL: https://github.com/apache/hbase/pull/3435#issuecomment-870760205 > Is it possible to provide 2 commits in this PR: Unfortunately I clubbed them into a single commit; it would've been better to keep them as separate commits for review. Most of the code is around the TableState stuff in the ConnectionRegistry interface. Let me add some more test coverage for the interface methods before I commit.
[jira] [Commented] (HBASE-25990) Add donated buildbots for jenkins
[ https://issues.apache.org/jira/browse/HBASE-25990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371526#comment-17371526 ] Sean Busbey commented on HBASE-25990: - to remove these hosts from ci-hadoop workload * moved all to label {{hbase-decomm}} to ensure no new schedule * aborted any in-progress builds on nodes * used jenkins UI to disconnect nodes via "temporarily make offline" safe to turn off now [~stack]
[GitHub] [hbase] Apache-HBase commented on pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
Apache-HBase commented on pull request #3417: URL: https://github.com/apache/hbase/pull/3417#issuecomment-870753280 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2.4 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 34s | branch-2.4 passed | | +1 :green_heart: | compile | 4m 15s | branch-2.4 passed | | +1 :green_heart: | checkstyle | 1m 44s | branch-2.4 passed | | +1 :green_heart: | spotbugs | 3m 11s | branch-2.4 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 17s | the patch passed | | +1 :green_heart: | compile | 4m 16s | the patch passed | | +1 :green_heart: | javac | 4m 16s | the patch passed | | +1 :green_heart: | checkstyle | 1m 39s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 13s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 3m 37s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. 
| | | | 52m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3417 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4ede9d6e48af 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.4 / 270b3facce | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 96 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3417/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Comment Edited] (HBASE-26036) DBB released too early and dirty data for checkAndMutate
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371519#comment-17371519 ] Michael Stack edited comment on HBASE-26036 at 6/29/21, 4:34 PM: - Excellent. How did you debug and find the skipped close the leak? Was the BYTEBUFF_ALLOCATOR_CLASS addition how you debugged? was (Author: stack): Excellent. How did you debug and find the skipped close the leak? > DBB released too early and dirty data for checkAndMutate > > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found there are regionserver JVM crashing problems on > our production clusters, the coredump infos are as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @ 0x7f6a615e8228 [0x7f6a615e79c0+0x868] > J 20563 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1196 bytes) @ > 0x7f6a60711a4c [0x7f6a60711000+0xa4c] > J 19656% C2 > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V > (338 bytes) @ 0x7f6a6039a414 [0x7f6a6039a320+0xf4] > j org.apache.hadoop.hbase.ipc.RpcExecutor$1.run()V+24 > j java.lang.Thread.run()V+11 > v ~StubRoutines::call_stub > {code} > I have made a UT to reproduce this error; it can occur 100%. > After HBASE-25187, the check result of the checkAndMutate will be false, > because it reads wrong/dirty data from the released ByteBuff.
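The failure mode described in this issue, a cell backed by a pooled buffer that is released and reused while still referenced, can be illustrated with a self-contained toy pool. This is a sketch of the bug class only, not HBase's actual ByteBuff allocator:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;

// Toy illustration of the HBASE-26036 bug class (not HBase code): a cell
// value is backed by a pooled buffer, the buffer is released back to the
// pool too early, the next request reuses and overwrites it, and the
// still-referenced cell now observes dirty data.
public class EarlyReleaseSketch {

  // Minimal pool: release() makes the buffer immediately reusable.
  static final ArrayDeque<ByteBuffer> POOL = new ArrayDeque<>();

  static ByteBuffer allocate() {
    ByteBuffer b = POOL.poll();
    return b != null ? b : ByteBuffer.allocate(8);
  }

  static void release(ByteBuffer b) {
    b.clear();
    POOL.push(b);
  }

  static String demonstrate() {
    ByteBuffer cellBacking = allocate();
    cellBacking.put("expected".getBytes(StandardCharsets.US_ASCII)).flip();

    // BUG: released while the "cell" above is still referenced.
    release(cellBacking);

    // The next RPC gets the very same buffer and overwrites it.
    ByteBuffer next = allocate();
    next.put("dirty!!!".getBytes(StandardCharsets.US_ASCII)).flip();

    // Reading through the stale reference now yields the new contents,
    // so a checkAndMutate comparison against "expected" would fail.
    byte[] seen = new byte[8];
    cellBacking.duplicate().get(seen);
    return new String(seen, StandardCharsets.US_ASCII);
  }

  public static void main(String[] args) {
    String value = demonstrate();
    System.out.println("cell now reads: " + value);
    System.out.println("matches \"expected\": " + value.equals("expected"));
  }
}
```

In a JVM this shows up as a wrong comparison result; with off-heap direct buffers the same use-after-release pattern can also crash the process, which matches the coredump in the issue description.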
[GitHub] [hbase] Apache-HBase commented on pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
Apache-HBase commented on pull request #3430: URL: https://github.com/apache/hbase/pull/3430#issuecomment-870748067 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | prototool | 0m 1s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 46s | master passed | | +1 :green_heart: | compile | 4m 59s | master passed | | +1 :green_heart: | checkstyle | 1m 31s | master passed | | +1 :green_heart: | spotbugs | 6m 28s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 40s | the patch passed | | +1 :green_heart: | compile | 4m 59s | the patch passed | | +1 :green_heart: | cc | 4m 59s | the patch passed | | +1 :green_heart: | javac | 4m 59s | the patch passed | | +1 :green_heart: | checkstyle | 1m 28s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 17s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | hbaseprotoc | 2m 0s | the patch passed | | +1 :green_heart: | spotbugs | 7m 1s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 38s | The patch does not generate ASF License warnings. 
| | | | 64m 29s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3430 | | Optional Tests | dupname asflicense cc hbaseprotoc prototool javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5df1c0b109b4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/4/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26036) DBB released too early and dirty data for checkAndMutate
[ https://issues.apache.org/jira/browse/HBASE-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371519#comment-17371519 ] Michael Stack commented on HBASE-26036: --- Excellent. How did you debug and find the skipped close the leak? > DBB released too early and dirty data for checkAndMutate > > > Key: HBASE-26036 > URL: https://issues.apache.org/jira/browse/HBASE-26036 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Critical > > Before HBASE-25187, we found there are regionserver JVM crashing problems on > our production clusters, the coredump infos are as follows, > {code:java} > Stack: [0x7f621ba8d000,0x7f621bb8e000], sp=0x7f621bb8c0e0, free > space=1020k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > J 10829 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getTimestamp()J (9 > bytes) @ 0x7f6a5ee11b2d [0x7f6a5ee11ae0+0x4d] > J 22844 C2 > org.apache.hadoop.hbase.regionserver.HRegion.doCheckAndRowMutate([B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/client/RowMutations;Lorg/apache/hadoop/hbase/client/Mutation;Z)Z > (540 bytes) @ 0x7f6a60bed144 [0x7f6a60beb320+0x1e24] > J 17972 C2 > org.apache.hadoop.hbase.regionserver.RSRpcServices.checkAndRowMutate(Lorg/apache/hadoop/hbase/regionserver/Region;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;[B[B[BLorg/apache/hadoop/hbase/filter/CompareFilter$CompareOp;Lorg/apache/hadoop/hbase/filter/ByteArrayComparable;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;)Z > (312 bytes) @ 0x7f6a5f4a7ed0 [0x7f6a5f4a6f40+0xf90] > J 26197 C2 > 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiResponse; > (644 bytes) @ 0x7f6a61538b0c [0x7f6a61537940+0x11cc] > J 26332 C2 > org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHandler;)Lorg/apache/hadoop/hbase/util/Pair; > (566 bytes) @ 0x7f6a615e8228 [0x7f6a615e79c0+0x868] > J 20563 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1196 bytes) @ > 0x7f6a60711a4c [0x7f6a60711000+0xa4c] > J 19656% C2 > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(Ljava/util/concurrent/BlockingQueue;Ljava/util/concurrent/atomic/AtomicInteger;)V > (338 bytes) @ 0x7f6a6039a414 [0x7f6a6039a320+0xf4] > j org.apache.hadoop.hbase.ipc.RpcExecutor$1.run()V+24 > j java.lang.Thread.run()V+11 > v ~StubRoutines::call_stub > {code} > I have made a UT to reproduce this error; it occurs 100% of the time. > After HBASE-25187, the check result of the checkAndMutate will be false, > because it reads wrong/dirty data from the released ByteBuff. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26031) Validate nightly builds run on new ci workers hbase11-hbase15
[ https://issues.apache.org/jira/browse/HBASE-26031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey resolved HBASE-26031. - Resolution: Later After chatting with Stack about donation internals, these hosts are getting decommed for now. > Validate nightly builds run on new ci workers hbase11-hbase15 > - > > Key: HBASE-26031 > URL: https://issues.apache.org/jira/browse/HBASE-26031 > Project: HBase > Issue Type: Task > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: image-2021-06-24-16-14-03-721.png > > > Per slack, asf infra has finished adding in nodes hbase10-hbase15 to > ci-hadoop. > make sure they can run nightly. > # Set labels for all these nodes to "hbase-staging" > # Push a feature branch off of current HEAD that updates the agent labels to > use "hbase-staging" > # trigger a bunch of runs. make sure *something* runs on each of the nodes > # Set labels for the nodes to "hbase" > # delete feature branch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26031) Validate nightly builds run on new ci workers hbase11-hbase15
[ https://issues.apache.org/jira/browse/HBASE-26031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371512#comment-17371512 ] Hudson commented on HBASE-26031: Results for branch HBASE-26031 [build #7 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26031/7/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26031/7/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26031/7/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26031/7/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} -- Something went wrong with this stage, [check relevant console output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26031/7//console]. > Validate nightly builds run on new ci workers hbase11-hbase15 > - > > Key: HBASE-26031 > URL: https://issues.apache.org/jira/browse/HBASE-26031 > Project: HBase > Issue Type: Task > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: image-2021-06-24-16-14-03-721.png > > > Per slack, asf infra has finished adding in nodes hbase10-hbase15 to > ci-hadoop. > make sure they can run nightly. > # Set labels for all these node to "hbase-staging" > # Push a feature branch off of current HEAD that updates the agent labels to > use "hbase-staging" > # trigger a bunch of runs. 
make sure *something* runs on each of the nodes > # Set labels for the nodes to "hbase" > # delete feature branch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3440: HBASE-26039 TestReplicationKillRS is useless after HBASE-23956
Apache-HBase commented on pull request #3440: URL: https://github.com/apache/hbase/pull/3440#issuecomment-870714561 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 22s | master passed | | +1 :green_heart: | compile | 1m 19s | master passed | | +1 :green_heart: | shadedjars | 9m 47s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 48s | the patch passed | | +1 :green_heart: | compile | 1m 14s | the patch passed | | +1 :green_heart: | javac | 1m 14s | the patch passed | | +1 :green_heart: | shadedjars | 9m 50s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 201m 30s | hbase-server in the patch passed. 
| | | | 240m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3440 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux edb16ec5f968 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/testReport/ | | Max. process+thread count | 2330 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3425: HBASE-25991 Do compaction on compaction server
Apache-HBase commented on pull request #3425: URL: https://github.com/apache/hbase/pull/3425#issuecomment-870713341 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 43s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 53s | HBASE-25714 passed | | +1 :green_heart: | compile | 1m 31s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 11m 15s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 59s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 50s | the patch passed | | +1 :green_heart: | compile | 1m 34s | the patch passed | | +1 :green_heart: | javac | 1m 34s | the patch passed | | +1 :green_heart: | shadedjars | 10m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 150m 20s | hbase-server in the patch passed. 
| | | | 191m 40s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3425 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 45b33a07eafa 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / da0fa3000e | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/testReport/ | | Max. process+thread count | 3926 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
saintstack commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r660746321 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionTask.java ## @@ -0,0 +1,173 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.compactionserver; Review comment: Sorry, yeah, meant this o.a.h.h.r.c package. I suppose you argue that you are modelling the layout above w/ a o.a.h.h.cs.compactions That makes sense. Let me resolve. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
saintstack commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r660745257 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionServerStorage.java ## @@ -0,0 +1,139 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.compactionserver; + +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +@InterfaceAudience.Private +/** + * since we do not maintain StoreFileManager in compaction server(can't refresh when flush). we use + * external storage(this class) to record compacting files, and initialize a new HStore in + * {@link CompactionThreadManager#selectCompaction} every time when request compaction Review comment: Good. If you do a new version of this PR, perhaps add some more explanatory text here. 
Otherwise, let me resolve this since you've clarified that RS does file movement still. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
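The design discussed in this review thread — the compaction server cannot keep a live StoreFileManager (it cannot refresh on flush), so an external structure records which files are currently being compacted — can be sketched as a small tracker. This is a hypothetical simplification for illustration, not the actual `CompactionServerStorage` class; `CompactingFileTracker`, `tryReserve`, and `releaseFiles` are invented names, and stores are keyed by an opaque string such as "region,family".

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Tracks store files that are mid-compaction so two compaction requests
// for the same store cannot claim the same file concurrently.
public class CompactingFileTracker {
    private final ConcurrentMap<String, Set<String>> compactingFiles =
        new ConcurrentHashMap<>();

    /** Marks files as compacting; returns false if any is already claimed. */
    public synchronized boolean tryReserve(String store, Set<String> files) {
        Set<String> inFlight =
            compactingFiles.computeIfAbsent(store, k -> ConcurrentHashMap.newKeySet());
        for (String f : files) {
            if (inFlight.contains(f)) {
                return false; // another compaction already owns this file
            }
        }
        inFlight.addAll(files);
        return true;
    }

    /** Called when a compaction finishes (or fails) to free its files. */
    public synchronized void releaseFiles(String store, Set<String> files) {
        Set<String> inFlight = compactingFiles.get(store);
        if (inFlight != null) {
            inFlight.removeAll(files);
        }
    }
}
```

Under this model, a fresh HStore can be instantiated per compaction request while the tracker alone prevents overlapping file selection — consistent with the comment that the regionserver, not the compaction server, still performs the file movement.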
[GitHub] [hbase] saintstack commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
saintstack commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r660744043 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CSRpcServices.java ## @@ -19,11 +19,15 @@ Review comment: It depends on hbase-server now? Given it is in it? But yeah, could be done at a later stage. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CSRpcServices.java ## @@ -88,8 +93,18 @@ void start() { public CompactResponse requestCompaction(RpcController controller, CompactionProtos.CompactRequest request) { requestCount.increment(); +ServerName rsServerName = ProtobufUtil.toServerName(request.getServer()); +RegionInfo regionInfo = ProtobufUtil.toRegionInfo(request.getRegionInfo()); +ColumnFamilyDescriptor cfd = ProtobufUtil.toColumnFamilyDescriptor(request.getFamily()); +boolean major = request.getMajor(); +int priority = request.getPriority(); +List favoredNodes = Collections.singletonList(request.getServer()); LOG.info("Receive compaction request from {}", ProtobufUtil.toString(request)); -compactionServer.compactionThreadManager.requestCompaction(); +CompactionTask compactionTask = + CompactionTask.newBuilder().setRsServerName(rsServerName).setRegionInfo(regionInfo) + .setColumnFamilyDescriptor(cfd).setRequestMajor(major).setPriority(priority) + .setFavoredNodes(favoredNodes).setSubmitTime(System.currentTimeMillis()).build(); +compactionServer.compactionThreadManager.requestCompaction(compactionTask); Review comment: Sweet! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3425: HBASE-25991 Do compaction on compaction server
Apache-HBase commented on pull request #3425: URL: https://github.com/apache/hbase/pull/3425#issuecomment-870708724 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 39s | HBASE-25714 passed | | +1 :green_heart: | compile | 0m 59s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 7m 44s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 40s | the patch passed | | +1 :green_heart: | compile | 1m 2s | the patch passed | | +1 :green_heart: | javac | 1m 2s | the patch passed | | +1 :green_heart: | shadedjars | 7m 42s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 156m 38s | hbase-server in the patch passed. 
| | | | 185m 42s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3425 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 16f9da4d8ae2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / da0fa3000e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/testReport/ | | Max. process+thread count | 4447 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25130) [branch-1] Masters in-memory serverHoldings map is not cleared during hbck repair
[ https://issues.apache.org/jira/browse/HBASE-25130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371488#comment-17371488 ] Hudson commented on HBASE-25130: Results for branch branch-1 [build #140 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/140/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/140//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/140//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/140//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > [branch-1] Masters in-memory serverHoldings map is not cleared during hbck > repair > - > > Key: HBASE-25130 > URL: https://issues.apache.org/jira/browse/HBASE-25130 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Victor Li >Priority: Major > Fix For: 1.7.1 > > > {color:#1d1c1d}Incase of repairing overlaps, hbck essentially calls the > closeRegion RPC on RS followed by offline RPC on Master to offline all the > overlap regions that would be merged into a new region. {color} > {color:#1d1c1d}However the offline RPC doesn’t remove it from the > serverHoldings map unless the new state is MERGED/SPLIT > ([https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java#L719]) > b{color}{color:#1d1c1d}ut the new state in this case is OFFLINE. 
{color} > {color:#1d1c1d}This is actually intended to match with the META entries and > would be removed later when the region is online on a different server. > However, in our case, the region would never be online on a new server, > hence the region info is never cleared from the map that is used by balancer > and SCP for incorrect reassignment.{color} > {color:#1d1c1d}We might need to tackle this by removing the entries from the > map when hbck actually deletes{color}{color:#1d1c1d} the meta entries for > this region, which kind of matches the in-memory map’s expectation with the > META state.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
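The stale-entry behavior described above — offlining a region removes it from the master's per-server holdings map only when the new state is SPLIT or MERGED, so an hbck OFFLINE leaves a stale entry for the balancer/SCP to mis-assign — can be condensed into a few lines. This is a hypothetical sketch of the logic, not the real branch-1 `RegionStates` code; `ServerHoldingsSketch` and its members are invented names.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Condensed model of the per-server holdings bookkeeping: a region is
// dropped from serverHoldings only on SPLIT/MERGED transitions, so an
// OFFLINE transition (as issued during hbck repair) leaves it behind.
public class ServerHoldingsSketch {
    enum State { OPEN, OFFLINE, SPLIT, MERGED }

    final Map<String, Set<String>> serverHoldings = new HashMap<>();

    void open(String server, String region) {
        serverHoldings.computeIfAbsent(server, s -> new HashSet<>()).add(region);
    }

    void transition(String server, String region, State newState) {
        // Only SPLIT/MERGED clear the entry; OFFLINE does not, which is
        // the gap the issue proposes to close by clearing the entry when
        // hbck deletes the region's META rows.
        if (newState == State.SPLIT || newState == State.MERGED) {
            Set<String> held = serverHoldings.get(server);
            if (held != null) {
                held.remove(region);
            }
        }
    }
}
```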
[GitHub] [hbase] Apache-HBase commented on pull request #3440: HBASE-26039 TestReplicationKillRS is useless after HBASE-23956
Apache-HBase commented on pull request #3440: URL: https://github.com/apache/hbase/pull/3440#issuecomment-870696058 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 4s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 56s | master passed | | +1 :green_heart: | compile | 1m 18s | master passed | | +1 :green_heart: | shadedjars | 9m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 44s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 42s | the patch passed | | +1 :green_heart: | compile | 1m 19s | the patch passed | | +1 :green_heart: | javac | 1m 19s | the patch passed | | +1 :green_heart: | shadedjars | 9m 4s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 182m 37s | hbase-server in the patch passed. 
| | | | 217m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3440 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 25d32d4f6dc0 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/testReport/ | | Max. process+thread count | 2398 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26031) Validate nightly builds run on new ci workers hbase11-hbase15
[ https://issues.apache.org/jira/browse/HBASE-26031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371472#comment-17371472 ] Sean Busbey commented on HBASE-26031: - I am of two minds. On the one hand, checking on how builds that touched {{hbase11}} last night fared is basically not doable. The Jenkins per-node view doesn't include DSL pipeline steps. So according to it, nothing ran basically anywhere. The chart of busy/free executors shows that some work _did_ happen on {{hbase11}}. So I'm thinking we should just throw the nodes into the mix and then come up with some other way to monitor ci worker health. On the other hand, of the 4 test stages that need to do a maven build, only 1 didn't get some kind of connection problem talking to maven central. I moved hbase12 over to the general pool and started another branch build on the remaining nodes. > Validate nightly builds run on new ci workers hbase11-hbase15 > - > > Key: HBASE-26031 > URL: https://issues.apache.org/jira/browse/HBASE-26031 > Project: HBase > Issue Type: Task > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: image-2021-06-24-16-14-03-721.png > > > Per slack, asf infra has finished adding in nodes hbase10-hbase15 to > ci-hadoop. > make sure they can run nightly. > # Set labels for all these nodes to "hbase-staging" > # Push a feature branch off of current HEAD that updates the agent labels to > use "hbase-staging" > # trigger a bunch of runs. make sure *something* runs on each of the nodes > # Set labels for the nodes to "hbase" > # delete feature branch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3436: HBASE-26036 ByteBuff released too early and dirty data for checkAndMu…
Apache-HBase commented on pull request #3436: URL: https://github.com/apache/hbase/pull/3436#issuecomment-870643454 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 41s | master passed | | +1 :green_heart: | compile | 1m 25s | master passed | | +1 :green_heart: | shadedjars | 8m 13s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 40s | the patch passed | | +1 :green_heart: | compile | 1m 23s | the patch passed | | +1 :green_heart: | javac | 1m 23s | the patch passed | | -1 :x: | shadedjars | 6m 39s | patch has 10 errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 51s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 140m 27s | hbase-server in the patch passed. 
| | | | 172m 37s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3436/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3436 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f1d492949886 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3436/2/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3436/2/testReport/ | | Max. process+thread count | 3221 (vs. ulimit of 3) | | modules | C: hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3436/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3439: HBASE-22923 min version of RegionServer to move system table regions
Apache-HBase commented on pull request #3439: URL: https://github.com/apache/hbase/pull/3439#issuecomment-870633281 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 44s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 59s | master passed | | +1 :green_heart: | compile | 1m 18s | master passed | | +1 :green_heart: | shadedjars | 9m 8s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | the patch passed | | +1 :green_heart: | compile | 1m 18s | the patch passed | | +1 :green_heart: | javac | 1m 18s | the patch passed | | +1 :green_heart: | shadedjars | 9m 1s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 181m 13s | hbase-server in the patch passed. 
| | | | 222m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3439/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3439 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b7204665e513 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3439/1/testReport/ | | Max. process+thread count | 2495 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3439/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3438: HBASE-22923 min version of RegionServer to move system table regions
Apache-HBase commented on pull request #3438: URL: https://github.com/apache/hbase/pull/3438#issuecomment-870622358 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 11m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-1 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 40s | branch-1 passed | | +1 :green_heart: | compile | 0m 40s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 0m 44s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 1m 49s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 5s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 41s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 3m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 2s | branch-1 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 50s | the patch passed | | +1 :green_heart: | compile | 0m 41s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javac | 0m 41s | the patch passed | | +1 :green_heart: | compile | 0m 44s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | javac | 0m 44s | the patch passed | | -1 :x: | checkstyle | 1m 37s | hbase-server: The patch generated 4 new + 228 unchanged - 0 fixed = 232 total (was 228) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 2m 50s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 26s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 0m 32s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 41s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 2m 55s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 174m 49s | hbase-server in the patch failed. | | +1 :green_heart: | asflicense | 0m 40s | The patch does not generate ASF License warnings. 
| | | | 227m 24s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles | | | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint | | | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3438/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3438 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 629130e8e8bc 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3438/out/precommit/personality/provided.sh | | git revision | branch-1 / 395eb0c | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3438/1/artifact/out/diff-checkstyle-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3438/1/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3438/1/testReport/ | | Max. process+thread count | 4214 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console
[GitHub] [hbase] Apache9 commented on pull request #3421: HBASE-26026 HBase Write may be stuck forever when using CompactingMem…
Apache9 commented on pull request #3421: URL: https://github.com/apache/hbase/pull/3421#issuecomment-870441222 I need some time to understand the code better... A bit busy recently, sorry... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] nyl3532016 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r660495018 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CSRpcServices.java ## @@ -88,8 +93,18 @@ void start() { public CompactResponse requestCompaction(RpcController controller, CompactionProtos.CompactRequest request) { requestCount.increment(); +ServerName rsServerName = ProtobufUtil.toServerName(request.getServer()); +RegionInfo regionInfo = ProtobufUtil.toRegionInfo(request.getRegionInfo()); +ColumnFamilyDescriptor cfd = ProtobufUtil.toColumnFamilyDescriptor(request.getFamily()); +boolean major = request.getMajor(); +int priority = request.getPriority(); +List favoredNodes = Collections.singletonList(request.getServer()); LOG.info("Receive compaction request from {}", ProtobufUtil.toString(request)); -compactionServer.compactionThreadManager.requestCompaction(); +CompactionTask compactionTask = + CompactionTask.newBuilder().setRsServerName(rsServerName).setRegionInfo(regionInfo) + .setColumnFamilyDescriptor(cfd).setRequestMajor(major).setPriority(priority) + .setFavoredNodes(favoredNodes).setSubmitTime(System.currentTimeMillis()).build(); +compactionServer.compactionThreadManager.requestCompaction(compactionTask); Review comment: Yes, in our internal version, We have already implemented macro-level throttling ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionTask.java ## @@ -0,0 +1,173 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.compactionserver; Review comment: I only find `org.apache.hadoop.hbase.regionserver.compactions` package ? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionServerStorage.java ## @@ -0,0 +1,139 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.compactionserver; + +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +@InterfaceAudience.Private +/** + * since we do not maintain StoreFileManager in compaction server(can't refresh when flush). we use + * external storage(this class) to record compacting files, and initialize a new HStore in + * {@link CompactionThreadManager#selectCompaction} every time when request compaction + */ +class CompactionServerStorage { + private static Logger LOG = LoggerFactory.getLogger(CompactionServerStorage.class); + private final ConcurrentMap>> selectedFiles = + new ConcurrentHashMap<>(); + private final ConcurrentMap>> compactedFiles = + new ConcurrentHashMap<>(); + /** + * Mark files as completed, called after CS finished compaction and RS accepted the results of + * this compaction, these compacted files will be deleted by RS if no reader referenced to them. + */ + boolean addCompactedFiles(RegionInfo regionInfo, ColumnFamilyDescriptor cfd, + List compactedFiles)
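The `CompactionServerStorage` class under review records compacting/compacted files outside the store, since the compaction server has no live `StoreFileManager` to refresh. As a rough, self-contained illustration of that bookkeeping pattern only (class, method, and key names here are hypothetical, not the PR's actual API; the real code keys on `RegionInfo` and `ColumnFamilyDescriptor` rather than strings):

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: external, concurrent bookkeeping of compacted files,
 * keyed by a region/family identifier. addAll() makes addCompactedFiles
 * idempotent: re-reporting the same files changes nothing.
 */
public class CompactedFileTracker {
    private final ConcurrentHashMap<String, Set<String>> compactedFiles =
        new ConcurrentHashMap<>();

    /** Record files as compacted; returns true if any file was newly added. */
    public boolean addCompactedFiles(String regionAndFamily, List<String> files) {
        Set<String> set = compactedFiles.computeIfAbsent(regionAndFamily,
            k -> ConcurrentHashMap.newKeySet()); // thread-safe per-key set
        return set.addAll(files);
    }

    /** Files the RS may delete once no reader references them. */
    public Set<String> getCompactedFiles(String regionAndFamily) {
        return compactedFiles.getOrDefault(regionAndFamily, Set.of());
    }
}
```

`computeIfAbsent` plus `ConcurrentHashMap.newKeySet()` avoids a check-then-put race when two compaction threads report for the same store concurrently.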
[GitHub] [hbase] Apache-HBase commented on pull request #3436: HBASE-26036 ByteBuff released too early and dirty data for checkAndMu…
Apache-HBase commented on pull request #3436: URL: https://github.com/apache/hbase/pull/3436#issuecomment-870436602 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3425: HBASE-25991 Do compaction on compaction server
Apache-HBase commented on pull request #3425: URL: https://github.com/apache/hbase/pull/3425#issuecomment-870606945 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 47s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-25714 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 17s | HBASE-25714 passed | | +1 :green_heart: | compile | 3m 43s | HBASE-25714 passed | | +1 :green_heart: | checkstyle | 1m 18s | HBASE-25714 passed | | +1 :green_heart: | spotbugs | 2m 28s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 39s | the patch passed | | +1 :green_heart: | compile | 3m 33s | the patch passed | | +1 :green_heart: | javac | 3m 33s | the patch passed | | -0 :warning: | checkstyle | 1m 17s | hbase-server: The patch generated 1 new + 91 unchanged - 10 fixed = 92 total (was 101) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 25m 4s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 36s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 62m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3425 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 6bfc283f4160 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / da0fa3000e | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/6/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3435: HBASE-26021: Undo the incompatible serialization change in HBASE-7767
Apache-HBase commented on pull request #3435: URL: https://github.com/apache/hbase/pull/3435#issuecomment-870411644 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 7m 2s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 1s | The patch appears to include 25 new or modified test files. | ||| _ branch-1 Compile Tests _ | | +0 :ok: | mvndep | 2m 29s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 8m 9s | branch-1 passed | | +1 :green_heart: | compile | 1m 50s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 2m 2s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 6m 29s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 9s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 33s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 1m 37s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 2m 47s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 7m 28s | branch-1 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 58s | the patch passed | | +1 :green_heart: | compile | 1m 49s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | cc | 1m 49s | the patch passed | | +1 :green_heart: | javac | 1m 49s | the patch passed | | +1 :green_heart: | compile | 2m 3s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | cc | 2m 3s | the patch passed | | +1 :green_heart: | javac | 2m 3s | the patch passed | | -1 :x: | checkstyle | 0m 45s | hbase-client: The patch generated 21 new + 446 unchanged - 2 fixed = 467 total (was 448) | | -1 :x: | checkstyle | 2m 13s | hbase-server: The patch generated 53 new + 1841 unchanged - 92 fixed = 1894 total (was 1933) | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -1 :x: | whitespace | 0m 0s | The patch 1 line(s) with tabs. | | +1 :green_heart: | shadedjars | 3m 2s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 41s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | hbaseprotoc | 2m 14s | the patch passed | | +1 :green_heart: | javadoc | 1m 19s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 1m 40s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 7m 53s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 28s | hbase-protocol in the patch passed. | | +1 :green_heart: | unit | 2m 40s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 143m 13s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 11m 52s | hbase-rsgroup in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 47s | The patch does not generate ASF License warnings. | | | | 241m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3435/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3435 | | JIRA Issue | HBASE-26021 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool | | uname | Linux bf5e358b5983 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3435/out/precommit/personality/provided.sh | | git revision | branch-1 / 395eb0c | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions |
[GitHub] [hbase] Apache-HBase commented on pull request #3439: HBASE-22923 min version of RegionServer to move system table regions
Apache-HBase commented on pull request #3439: URL: https://github.com/apache/hbase/pull/3439#issuecomment-870506544 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] jojochuang merged pull request #3427: HBASE-23817 The message "Please make sure that backup is enabled on the cluster." is shown even when the backup feature is enabled
jojochuang merged pull request #3427: URL: https://github.com/apache/hbase/pull/3427 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #3417: HBASE-25902 Add missing CFs in meta during HBase 1 to 2 Upgrade
saintstack commented on a change in pull request #3417: URL: https://github.com/apache/hbase/pull/3417#discussion_r660162631 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1047,6 +1074,29 @@ private void finishActiveMasterInitialization(MonitoredTask status) // Set master as 'initialized'. setInitialized(true); +if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { + // create missing CFs in meta table after master is set to 'initialized'. + createMissingCFsInMetaDuringUpgrade(metaDescriptor); + + // Throwing this Exception to abort active master is painful but this + // seems the only way to add missing CFs in meta while upgrading from + // HBase 1 to 2 (where HBase 2 has HBASE-23055 & HBASE-23782 checked-in). + // So, why do we abort active master after adding missing CFs in meta? + // When we reach here, we would have already bypassed NoSuchColumnFamilyException + // in initClusterSchemaService(), meaning ClusterSchemaService is not + // correctly initialized but we bypassed it. Similarly, we bypassed + // tableStateManager.start() as well. Hence, we should better abort + // current active master because our main task - adding missing CFs + // in meta table is done (possible only after master state is set as + // initialized) at the expense of bypassing few important tasks as part + // of active master init routine. So now we abort active master so that + // next active master init will not face any issues and all mandatory + // services will be started during master init phase. Review comment: Nice note. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1047,6 +1074,29 @@ private void finishActiveMasterInitialization(MonitoredTask status) // Set master as 'initialized'. setInitialized(true); +if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { + // create missing CFs in meta table after master is set to 'initialized'. 
+ createMissingCFsInMetaDuringUpgrade(metaDescriptor); + + // Throwing this Exception to abort active master is painful but this + // seems the only way to add missing CFs in meta while upgrading from + // HBase 1 to 2 (where HBase 2 has HBASE-23055 & HBASE-23782 checked-in). + // So, why do we abort active master after adding missing CFs in meta? + // When we reach here, we would have already bypassed NoSuchColumnFamilyException + // in initClusterSchemaService(), meaning ClusterSchemaService is not + // correctly initialized but we bypassed it. Similarly, we bypassed + // tableStateManager.start() as well. Hence, we should better abort + // current active master because our main task - adding missing CFs + // in meta table is done (possible only after master state is set as + // initialized) at the expense of bypassing few important tasks as part + // of active master init routine. So now we abort active master so that + // next active master init will not face any issues and all mandatory + // services will be started during master init phase. + throw new IOException("Stopping active master after missing CFs are " Review comment: We are not stopping, we are aborting? Would it be better to throw a PleaseRestartMeException here? (Would have to create it...). ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -953,9 +955,25 @@ private void finishActiveMasterInitialization(MonitoredTask status) if (!waitForMetaOnline()) { return; } +TableDescriptor metaDescriptor = tableDescriptors.get( +TableName.META_TABLE_NAME); +final ColumnFamilyDescriptor tableFamilyDesc = metaDescriptor +.getColumnFamily(HConstants.TABLE_FAMILY); +final ColumnFamilyDescriptor replBarrierFamilyDesc = +metaDescriptor.getColumnFamily(HConstants.REPLICATION_BARRIER_FAMILY); + this.assignmentManager.joinCluster(); // The below depends on hbase:meta being online. 
-this.tableStateManager.start(); +try { + this.tableStateManager.start(); +} catch (NoSuchColumnFamilyException e) { + if (tableFamilyDesc == null && replBarrierFamilyDesc == null) { +LOG.info("For missing CFs in meta, this Exception is expected", e); Review comment: You think operator will know what 'missing CFs in meta' is about? Would it be better to talk about migration from hbase1 to hbase2? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1106,6 +1156,38 @@ private void finishActiveMasterInitialization(MonitoredTask status) } } + private void createMissingCFsInMetaDuringUpgrade( + TableDescriptor metaDescriptor) throws IOException { +TableDescriptor
[GitHub] [hbase] Apache9 commented on a change in pull request #3434: HBASE-21946 Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
Apache9 commented on a change in pull request #3434: URL: https://github.com/apache/hbase/pull/3434#discussion_r660287517 ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/BlockIOUtils.java ## @@ -207,6 +234,17 @@ public static boolean readWithExtra(ByteBuff buf, FSDataInputStream dis, int nec */ public static boolean preadWithExtra(ByteBuff buff, FSDataInputStream dis, long position, int necessaryLen, int extraLen) throws IOException { +boolean preadbytebuffer = dis.hasCapability("in:preadbytebuffer"); Review comment: We need to test it every time? Is it possible to test once when the FSDataInputStream is created? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
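Apache9's suggestion above — probe `hasCapability("in:preadbytebuffer")` once when the stream is created instead of on every pread — can be sketched as follows. This is a hedged illustration of the caching pattern, not the PR's code: a `Predicate<String>` stands in for the real `FSDataInputStream` so the example is self-contained, and the wrapper name is invented.

```java
import java.util.function.Predicate;

/**
 * Hypothetical sketch: evaluate the stream capability exactly once, at
 * construction time, and reuse the cached answer on the hot read path.
 */
public class PreadCapabilityCache {
    private final boolean byteBufferPreadAvailable;

    public PreadCapabilityCache(Predicate<String> hasCapability) {
        // Probed once here; never again per read.
        this.byteBufferPreadAvailable = hasCapability.test("in:preadbytebuffer");
    }

    /** Decide the pread strategy without re-querying the stream. */
    public boolean useByteBufferPread() {
        return byteBufferPreadAvailable;
    }
}
```

The cached flag is safe because stream capabilities don't change over the lifetime of an open stream; the saving is one string-keyed capability lookup per block read.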
[GitHub] [hbase] saintstack commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.
saintstack commented on a change in pull request #3389: URL: https://github.com/apache/hbase/pull/3389#discussion_r660167381 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java ## @@ -291,12 +291,36 @@ public void setCacheDataOnWrite(boolean cacheDataOnWrite) { * cacheIndexesOnWrite * cacheBloomsOnWrite */ - public void enableCacheOnWrite() { + public void enableCacheOnWriteForCompactions() { this.cacheDataOnWrite = true; this.cacheIndexesOnWrite = true; this.cacheBloomsOnWrite = true; } + /** + * If hbase.rs.cachecompactedblocksonwrite configuration is set to true and + * 'totalCompactedFilesSize' is lower than 'cacheCompactedDataOnWriteThreshold', + * enables cache on write for below properties: + * - cacheDataOnWrite + * - cacheIndexesOnWrite + * - cacheBloomsOnWrite + * + * Otherwise, sets 'cacheDataOnWrite' only to false. + * + * @param totalCompactedFilesSize the total size of compacted files. + * @return true if the checks mentioned above pass and the cache is enabled, false otherwise. + */ + public boolean enableCacheOnWriteForCompactions(long totalCompactedFilesSize) { Review comment: Why the threshold check? In description it talks of cacheCompactedDataOnWriteThreshold but the method we call is getCacheCompactedBlocksOnWriteThreshold (and config is cachecompactedblocksonwrite) and we check the return against totalCompactedFilesSize. Its a little confusing on what is being compared here. Any chance of some method renames or method name alignment w/ configs? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java ## @@ -533,4 +550,46 @@ protected InternalScanner createScanner(HStore store, ScanInfo scanInfo, return new StoreScanner(store, scanInfo, scanners, smallestReadPoint, earliestPutTs, dropDeletesFromRow, dropDeletesToRow); } + + /** + * Default implementation for committing store files created after a compaction. 
Assumes new files + * had been created on a temp directory, so it renames those files into the actual store dir, + * then create a reader and cache it into the store. + * @param cr the compaction request. + * @param newFiles the new files created by this compaction under a temp dir. + * @param user the running user. + * @return A list of the resulting store files already placed in the store dir and loaded into the + * store cache. + * @throws IOException if the commit fails. + */ + public List commitCompaction(CompactionRequestImpl cr, List newFiles, User user) + throws IOException { +List sfs = new ArrayList<>(newFiles.size()); +for (Path newFile : newFiles) { + assert newFile != null; + this.store.validateStoreFile(newFile); + // Move the file into the right spot + HStoreFile sf = createFileInStoreDir(newFile); + if (this.store.getCoprocessorHost() != null) { +this.store.getCoprocessorHost().postCompact(this.store, sf, cr.getTracker(), cr, user); + } + assert sf != null; + sfs.add(sf); +} +return sfs; + } + + /** + * Assumes new file was created initially on a temp folder. Review comment: Don't we want the Compactor asking the StoreEngine to do the writing for us? Rather than doing it here? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectStoreCompactor.java ## @@ -0,0 +1,85 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.regionserver.compactions; + +import java.io.IOException; +import java.net.InetSocketAddress; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.io.compress.Compression; +import org.apache.hadoop.hbase.io.hfile.CacheConfig; +import org.apache.hadoop.hbase.io.hfile.HFileContext; +import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.regionserver.HStoreFile; +import org.apache.hadoop.hbase.regionserver.StoreFileWriter; +import org.apache.yetus.audience.InterfaceAudience; +
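As a standalone illustration of the threshold behavior questioned in the first review comment above, here is a hypothetical, self-contained sketch. The flag names and the config key mirror the quoted patch, but the constructor and the getter/threshold naming are assumptions; the naming mismatch the reviewer flags is not resolved here.

```java
// Hypothetical sketch of the threshold check under review; not the actual
// CacheConfig API. Flag names follow the quoted patch.
public class CacheOnWriteSketch {
  private boolean cacheDataOnWrite;
  private boolean cacheIndexesOnWrite;
  private boolean cacheBloomsOnWrite;

  // hbase.rs.cachecompactedblocksonwrite and its size threshold
  private final boolean cacheCompactedBlocksOnWrite;
  private final long cacheCompactedBlocksOnWriteThreshold;

  CacheOnWriteSketch(boolean enabled, long threshold) {
    this.cacheCompactedBlocksOnWrite = enabled;
    this.cacheCompactedBlocksOnWriteThreshold = threshold;
  }

  /** Enables all three cache-on-write flags only when the config is on and the
   *  compacted size is under the threshold; otherwise resets only the data
   *  flag, as the javadoc in the patch describes. */
  boolean enableCacheOnWriteForCompactions(long totalCompactedFilesSize) {
    if (cacheCompactedBlocksOnWrite
        && totalCompactedFilesSize < cacheCompactedBlocksOnWriteThreshold) {
      cacheDataOnWrite = true;
      cacheIndexesOnWrite = true;
      cacheBloomsOnWrite = true;
      return true;
    }
    cacheDataOnWrite = false;
    return false;
  }

  public static void main(String[] args) {
    CacheOnWriteSketch cfg = new CacheOnWriteSketch(true, 1024L);
    System.out.println("small compaction cached: " + cfg.enableCacheOnWriteForCompactions(512L));
    System.out.println("large compaction cached: " + cfg.enableCacheOnWriteForCompactions(4096L));
  }
}
```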
[GitHub] [hbase] Apache-HBase commented on pull request #3434: HBASE-21946 Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
Apache-HBase commented on pull request #3434: URL: https://github.com/apache/hbase/pull/3434#issuecomment-870201695 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3437: HBASE-26037 Implement namespace and table level access control for thrift & thrift2
Apache-HBase commented on pull request #3437: URL: https://github.com/apache/hbase/pull/3437#issuecomment-870461872
[GitHub] [hbase] Apache-HBase commented on pull request #3440: HBASE-26039 TestReplicationKillRS is useless after HBASE-23956
Apache-HBase commented on pull request #3440: URL: https://github.com/apache/hbase/pull/3440#issuecomment-870560755 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 53s | master passed | | +1 :green_heart: | compile | 3m 13s | master passed | | +1 :green_heart: | checkstyle | 1m 5s | master passed | | +1 :green_heart: | spotbugs | 2m 3s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 40s | the patch passed | | +1 :green_heart: | compile | 3m 7s | the patch passed | | +1 :green_heart: | javac | 3m 7s | the patch passed | | +1 :green_heart: | checkstyle | 1m 3s | hbase-server: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 8s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 46m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3440 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 963370e11fce 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 79659d8e66 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 95 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3440/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] jojochuang commented on a change in pull request #3434: HBASE-21946 Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
jojochuang commented on a change in pull request #3434: URL: https://github.com/apache/hbase/pull/3434#discussion_r660361583 ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/BlockIOUtils.java ## @@ -207,6 +234,17 @@ public static boolean readWithExtra(ByteBuff buf, FSDataInputStream dis, int nec */ public static boolean preadWithExtra(ByteBuff buff, FSDataInputStream dis, long position, int necessaryLen, int extraLen) throws IOException { +boolean preadbytebuffer = dis.hasCapability("in:preadbytebuffer"); Review comment: I suppose the overhead is relatively low. I had thought about moving it to where the input stream is created, but that would require pretty extensive refactoring. Internally, hasCapability() uses a switch statement to compare the capability string, so it should be quite efficient. I can explore this suggestion later.
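The comment above argues the capability probe is cheap enough to stay on the read path. A minimal sketch of the resulting dispatch pattern follows; `StreamLike` is a stand-in for Hadoop's `FSDataInputStream`, and only the capability string comes from the diff.

```java
// Sketch of the dispatch pattern under discussion: probe the stream's
// capability, then choose between ByteBuffer pread and byte[] pread.
// StreamLike is a hypothetical stand-in, not the Hadoop API.
public class PreadDispatchSketch {
  interface StreamLike {
    boolean hasCapability(String capability);
  }

  static String choosePread(StreamLike dis) {
    // The probe is a string comparison internally, so doing it per read
    // is the low-overhead option the reviewer reply describes.
    return dis.hasCapability("in:preadbytebuffer")
        ? "ByteBuffer pread" : "byte[] pread";
  }

  public static void main(String[] args) {
    StreamLike modern = cap -> "in:preadbytebuffer".equals(cap);
    StreamLike legacy = cap -> false;
    System.out.println(choosePread(modern));
    System.out.println(choosePread(legacy));
  }
}
```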
[GitHub] [hbase] comnetwork commented on pull request #3421: HBASE-26026 HBase Write may be stuck forever when using CompactingMem…
comnetwork commented on pull request #3421: URL: https://github.com/apache/hbase/pull/3421#issuecomment-870262769
[GitHub] [hbase] ddupg commented on a change in pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
ddupg commented on a change in pull request #3430: URL: https://github.com/apache/hbase/pull/3430#discussion_r660247427 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java ## @@ -838,185 +795,116 @@ public void postLogRoll(Path newLog) throws IOException { } } - @Override - public void regionServerRemoved(ServerName regionserver) { -transferQueues(regionserver); - } - - /** - * Transfer all the queues of the specified to this region server. First it tries to grab a lock - * and if it works it will move the old queues and finally will delete the old queues. - * - * It creates one old source for any type of source of the old rs. - */ - private void transferQueues(ServerName deadRS) { -if (server.getServerName().equals(deadRS)) { - // it's just us, give up + void claimQueue(ServerName deadRS, String queue) { +// Wait a bit before transferring the queues, we may be shutting down. +// This sleep may not be enough in some cases. +try { + Thread.sleep(sleepBeforeFailover + +(long) (ThreadLocalRandom.current().nextFloat() * sleepBeforeFailover)); +} catch (InterruptedException e) { + LOG.warn("Interrupted while waiting before transferring a queue."); + Thread.currentThread().interrupt(); +} +// We try to lock that rs' queue directory +if (server.isStopped()) { + LOG.info("Not transferring queue since we are shutting down"); + return; +} +// After claim the queues from dead region server, wewill skip to start the Review comment: NIT: `wewill` miss a space. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ClaimReplicationQueuesProcedure.java ## @@ -0,0 +1,140 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.master.replication; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv; +import org.apache.hadoop.hbase.master.procedure.ServerProcedureInterface; +import org.apache.hadoop.hbase.procedure2.Procedure; +import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer; +import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException; +import org.apache.hadoop.hbase.procedure2.ProcedureUtil; +import org.apache.hadoop.hbase.procedure2.ProcedureYieldException; +import org.apache.hadoop.hbase.replication.ReplicationException; +import org.apache.hadoop.hbase.replication.ReplicationQueueStorage; +import org.apache.hadoop.hbase.util.RetryCounter; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.ClaimReplicationQueuesStateData; +import org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos; + +/** + * Used to assign the replication queues of a dead server to other region servers. 
+ */ +@InterfaceAudience.Private +public class ClaimReplicationQueuesProcedure extends Procedure + implements ServerProcedureInterface { + + private static final Logger LOG = LoggerFactory.getLogger(ClaimReplicationQueuesProcedure.class); + + private ServerName crashedServer; + + private RetryCounter retryCounter; + + public ClaimReplicationQueuesProcedure() { + } + + public ClaimReplicationQueuesProcedure(ServerName crashedServer) { +this.crashedServer = crashedServer; + } + + @Override + public ServerName getServerName() { +return crashedServer; + } + + @Override + public boolean hasMetaTableRegion() { +return false; + } + + @Override + public ServerOperationType getServerOperationType() { +return ServerOperationType.CLAIM_REPLICATION_QUEUES; + } + + @Override + protected Procedure[] execute(MasterProcedureEnv env) +throws ProcedureYieldException, ProcedureSuspendedException, InterruptedException { +ReplicationQueueStorage storage =
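The `claimQueue` diff quoted earlier in this message waits a jittered delay before transferring queues: the base delay plus a uniform random fraction of it, i.e. a wait uniform in [base, 2×base), which spreads out racing claimers. A standalone sketch (the 30-second base value is an assumption; only the arithmetic comes from the diff):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the jittered pre-failover delay from claimQueue: base plus a
// uniform random fraction of base, so the result lies in [base, 2*base).
public class FailoverJitterSketch {
  static long jitteredDelay(long sleepBeforeFailover) {
    return sleepBeforeFailover
        + (long) (ThreadLocalRandom.current().nextFloat() * sleepBeforeFailover);
  }

  public static void main(String[] args) {
    long base = 30_000L; // hypothetical 30s base, for illustration only
    long d = jitteredDelay(base);
    // nextFloat() is in [0, 1), so the delay is always in [base, 2*base)
    System.out.println("in range: " + (d >= base && d < 2 * base));
  }
}
```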
[GitHub] [hbase] Apache-HBase commented on pull request #3433: HBASE-26035 Redundant null check in the compareTo function
Apache-HBase commented on pull request #3433: URL: https://github.com/apache/hbase/pull/3433#issuecomment-870121257
[GitHub] [hbase] Apache9 commented on a change in pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
Apache9 commented on a change in pull request #3430: URL: https://github.com/apache/hbase/pull/3430#discussion_r660286508 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/ClaimReplicationQueuesProcedure.java ## @@ -0,0 +1,140 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.master.replication; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv; +import org.apache.hadoop.hbase.master.procedure.ServerProcedureInterface; +import org.apache.hadoop.hbase.procedure2.Procedure; +import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer; +import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException; +import org.apache.hadoop.hbase.procedure2.ProcedureUtil; +import org.apache.hadoop.hbase.procedure2.ProcedureYieldException; +import org.apache.hadoop.hbase.replication.ReplicationException; +import org.apache.hadoop.hbase.replication.ReplicationQueueStorage; +import org.apache.hadoop.hbase.util.RetryCounter; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.ClaimReplicationQueuesStateData; +import org.apache.hadoop.hbase.shaded.protobuf.generated.ProcedureProtos; + +/** + * Used to assign the replication queues of a dead server to other region servers. 
+ */ +@InterfaceAudience.Private +public class ClaimReplicationQueuesProcedure extends Procedure + implements ServerProcedureInterface { + + private static final Logger LOG = LoggerFactory.getLogger(ClaimReplicationQueuesProcedure.class); + + private ServerName crashedServer; + + private RetryCounter retryCounter; + + public ClaimReplicationQueuesProcedure() { + } + + public ClaimReplicationQueuesProcedure(ServerName crashedServer) { +this.crashedServer = crashedServer; + } + + @Override + public ServerName getServerName() { +return crashedServer; + } + + @Override + public boolean hasMetaTableRegion() { +return false; + } + + @Override + public ServerOperationType getServerOperationType() { +return ServerOperationType.CLAIM_REPLICATION_QUEUES; + } + + @Override + protected Procedure[] execute(MasterProcedureEnv env) +throws ProcedureYieldException, ProcedureSuspendedException, InterruptedException { +ReplicationQueueStorage storage = env.getReplicationPeerManager().getQueueStorage(); +try { + List queues = storage.getAllQueues(crashedServer); + if (queues.isEmpty()) { +LOG.debug("Finish claiming replication queues for {}", crashedServer); +storage.removeReplicatorIfQueueIsEmpty(crashedServer); +// we are done +return null; + } + LOG.debug("There are {} replication queues need to be claimed for {}", queues.size(), +crashedServer); + List targetServers = +env.getMasterServices().getServerManager().getOnlineServersList(); + if (targetServers.isEmpty()) { +throw new ReplicationException("no region server available"); + } + Collections.shuffle(targetServers); + ClaimReplicationQueueRemoteProcedure[] procs = +new ClaimReplicationQueueRemoteProcedure[Math.min(queues.size(), targetServers.size())]; + for (int i = 0; i < procs.length; i++) { +procs[i] = new ClaimReplicationQueueRemoteProcedure(crashedServer, queues.get(i), Review comment: It will be scheduled next time. The exit condition is there is no replication queue for the given dead region server. 
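The `execute` step quoted above shuffles the online servers and creates at most min(#queues, #servers) remote claim procedures, then, as the reviewer reply notes, reschedules until no queue remains for the dead server. A simplified sketch of that fan-out; the class and method names here are stand-ins, not the HBase procedure API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of the claim fan-out: shuffle targets, then pair one queue with one
// server, capped at min(#queues, #servers). Leftover queues are claimed on a
// later pass when the procedure is rescheduled.
public class ClaimFanOutSketch {
  static Map<String, String> assign(List<String> queues, List<String> servers, Random rnd) {
    List<String> targets = new ArrayList<>(servers);
    Collections.shuffle(targets, rnd); // spread load across region servers
    Map<String, String> claims = new LinkedHashMap<>();
    int n = Math.min(queues.size(), targets.size());
    for (int i = 0; i < n; i++) {
      claims.put(queues.get(i), targets.get(i)); // queue -> claiming server
    }
    return claims;
  }

  public static void main(String[] args) {
    Map<String, String> claims = assign(
        Arrays.asList("q1", "q2", "q3"), Arrays.asList("rs1", "rs2"), new Random(42));
    // Only two servers are online, so one queue waits for the next pass.
    System.out.println("claims issued: " + claims.size());
  }
}
```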
[GitHub] [hbase] apurtell merged pull request #3402: HBASE-25130 - Fix master in-memory server holding map after:
apurtell merged pull request #3402: URL: https://github.com/apache/hbase/pull/3402
[GitHub] [hbase] saintstack commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
saintstack commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r659933917 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CSRpcServices.java ## @@ -88,8 +93,18 @@ void start() { public CompactResponse requestCompaction(RpcController controller, CompactionProtos.CompactRequest request) { requestCount.increment(); +ServerName rsServerName = ProtobufUtil.toServerName(request.getServer()); +RegionInfo regionInfo = ProtobufUtil.toRegionInfo(request.getRegionInfo()); +ColumnFamilyDescriptor cfd = ProtobufUtil.toColumnFamilyDescriptor(request.getFamily()); +boolean major = request.getMajor(); +int priority = request.getPriority(); +List favoredNodes = Collections.singletonList(request.getServer()); LOG.info("Receive compaction request from {}", ProtobufUtil.toString(request)); -compactionServer.compactionThreadManager.requestCompaction(); +CompactionTask compactionTask = + CompactionTask.newBuilder().setRsServerName(rsServerName).setRegionInfo(regionInfo) + .setColumnFamilyDescriptor(cfd).setRequestMajor(major).setPriority(priority) + .setFavoredNodes(favoredNodes).setSubmitTime(System.currentTimeMillis()).build(); +compactionServer.compactionThreadManager.requestCompaction(compactionTask); Review comment: So 'throttling' is done via the current compaction throttling mechanism it seems. Good. Maybe later Master would have an overall view on compactions and do its own macro-level throttling... ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactThreadControl.java ## @@ -0,0 +1,151 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import org.apache.hadoop.hbase.regionserver.throttle.CompactionThroughputControllerFactory; +import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController; +import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService; +import org.apache.hadoop.hbase.util.StealJobQueue; +import org.apache.hbase.thirdparty.com.google.common.base.Preconditions; +import org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Comparator; +import java.util.Iterator; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.RejectedExecutionHandler; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.function.BiConsumer; + +@InterfaceAudience.Private +public class CompactThreadControl { Review comment: What does a CompactThreadControl do? ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionServerStorage.java ## @@ -0,0 +1,139 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.compactionserver; + +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import
[GitHub] [hbase] nyl3532016 edited a comment on pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 edited a comment on pull request #3425: URL: https://github.com/apache/hbase/pull/3425#issuecomment-869608162 > Please add more javadoc for CompactionServer related class on how to share the same logic with HRegionServer? I do not think we want to duplicate the code twice... `CompactionThreadManager` reuses `HStore.selectCompaction`, `HStore.throttleCompaction`, and `CompactionContext.compact` of `HRegionserver`, which are the core logic of compaction. Yes, we have a little duplicate code, such as the compaction thread pool and CompactionRunner. I will try to eliminate the thread pool duplication.