FrankChen021 opened a new pull request #10955:
URL: https://github.com/apache/druid/pull/10955
This PR improves the integration tests (IT) to reduce excessive stack trace messages while waiting for Druid nodes to start up.

### Description

When running the integration tests, Druid nodes are started in a Kubernetes environment, and the test cases wait for the nodes to start up by periodically checking their HTTP endpoint `/status/health`. In some cases, nodes in the Kubernetes environment take a while to start up, which causes the periodic check described above to output many distracting and very long stack traces, as shown below:

```
2021-03-08T04:18:13,937 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[0]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T04:18:18,945 WARN [HttpClient-Netty-Worker-20] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T04:18:18,946 ERROR [main] org.apache.druid.testing.utils.DruidClusterAdminClient - java.util.concurrent.ExecutionException: org.jboss.netty.channel.ChannelException: Channel disconnected
    at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-16.0.1.jar:?]
    at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-16.0.1.jar:?]
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-16.0.1.jar:?]
    at org.apache.druid.testing.utils.DruidClusterAdminClient.lambda$waitUntilInstanceReady$2(DruidClusterAdminClient.java:182) ~[druid-integration-tests-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.apache.druid.testing.utils.ITRetryUtil.retryUntil(ITRetryUtil.java:57) [druid-integration-tests-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.apache.druid.testing.utils.ITRetryUtil.retryUntilTrue(ITRetryUtil.java:39) [druid-integration-tests-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.apache.druid.testing.utils.DruidClusterAdminClient.waitUntilInstanceReady(DruidClusterAdminClient.java:176) [druid-integration-tests-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.apache.druid.testing.utils.DruidClusterAdminClient.waitUntilCoordinatorReady(DruidClusterAdminClient.java:116) [druid-integration-tests-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.apache.druid.testing.utils.SuiteListener.onStart(SuiteListener.java:41) [druid-integration-tests-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.testng.DruidTestRunnerFactory$DruidTestRunner.run(DruidTestRunnerFactory.java:87) [druid-integration-tests-0.22.0-SNAPSHOT.jar:?]
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:455) [testng-6.14.3.jar:?]
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:450) [testng-6.14.3.jar:?]
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:415) [testng-6.14.3.jar:?]
    at org.testng.SuiteRunner.run(SuiteRunner.java:364) [testng-6.14.3.jar:?]
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52) [testng-6.14.3.jar:?]
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:84) [testng-6.14.3.jar:?]
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1208) [testng-6.14.3.jar:?]
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1137) [testng-6.14.3.jar:?]
    at org.testng.TestNG.runSuites(TestNG.java:1049) [testng-6.14.3.jar:?]
    at org.testng.TestNG.run(TestNG.java:1017) [testng-6.14.3.jar:?]
    at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:135) [surefire-testng-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeSingleClass(TestNGDirectoryTestSuite.java:112) [surefire-testng-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:99) [surefire-testng-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:146) [surefire-testng-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383) [surefire-booter-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344) [surefire-booter-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) [surefire-booter-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417) [surefire-booter-2.22.0.jar:2.22.0]
Caused by: org.jboss.netty.channel.ChannelException: Channel disconnected
    at org.apache.druid.java.util.http.client.NettyHttpClient$1.channelDisconnected(NettyHttpClient.java:351) ~[druid-core-0.22.0-SNAPSHOT.jar:0.22.0-SNAPSHOT]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelDisconnected(SimpleChannelUpstreamHandler.java:208) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelDisconnected(SimpleChannelUpstreamHandler.java:208) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.cleanup(ReplayingDecoder.java:570) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.channelDisconnected(FrameDecoder.java:365) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:360) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.10.6.Final.jar:?]
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.10.6.Final.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]
    at java.lang.Thread.run(Thread.java:832) ~[?:?]
2021-03-08T04:18:18,948 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[1]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
```

Exceptions of this kind are expected during this phase, and the stack traces are unhelpful. So this PR improves the exception handling in the periodic check to suppress the stack traces for these specific exceptions. With this PR, the log is much cleaner:

```
2021-03-08T05:16:58,581 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[0]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:03,591 WARN [HttpClient-Netty-Worker-20] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:03,592 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[1]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:08,599 WARN [HttpClient-Netty-Worker-21] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:08,600 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[2]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:13,604 WARN [HttpClient-Netty-Worker-22] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:13,604 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[3]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:18,609 WARN [HttpClient-Netty-Worker-23] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:18,610 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[4]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:23,614 WARN [HttpClient-Netty-Worker-24] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:23,615 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[5]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:28,623 WARN [HttpClient-Netty-Worker-25] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:28,624 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[6]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:33,630 WARN [HttpClient-Netty-Worker-26] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:33,631 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[7]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:38,635 WARN [HttpClient-Netty-Worker-27] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:38,636 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[8]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:43,644 WARN [HttpClient-Netty-Worker-28] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:43,645 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[9]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:48,650 WARN [HttpClient-Netty-Worker-29] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:48,650 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[10]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:53,656 WARN [HttpClient-Netty-Worker-30] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:53,657 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[11]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:17:58,664 WARN [HttpClient-Netty-Worker-31] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:17:58,665 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[12]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:18:03,675 WARN [HttpClient-Netty-Worker-32] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:18:03,675 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[13]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:18:08,681 WARN [HttpClient-Netty-Worker-33] org.apache.druid.java.util.http.client.NettyHttpClient - [GET http://127.0.0.1:8081/status/health] Channel disconnected before response complete
2021-03-08T05:18:08,682 INFO [main] org.apache.druid.testing.utils.ITRetryUtil - Attempt[14]: Task Waiting for instance to be ready: [http://127.0.0.1:8081] still not complete. Next retry in 5000 ms
2021-03-08T05:18:15,106 INFO [main] org.apache.druid.testing.utils.DruidClusterAdminClient - 200 OK true
2021-03-08T05:18:16,266 INFO [main] org.apache.druid.testing.utils.DruidClusterAdminClient - 200 OK
```

<hr>

<!-- Check the items by putting "x" in the brackets for the done things. Not all of these items apply to every PR. Remove the items which are not done or not relevant to the PR. None of the items from the checklist below are strictly necessary, but it would be very helpful if you at least self-review the PR. -->

This PR has:
- [X] been self-reviewed.
   - [ ] using the [concurrency checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md) (Remove this item if the PR doesn't have any relation to concurrency.)
- [ ] added documentation for new or modified features or behaviors.
- [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
- [ ] added or updated version, license, or notice information in [licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
- [X] added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
- [ ] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
- [ ] added integration tests.
- [ ] been tested in a test Druid cluster.
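For illustration, the suppression idea can be sketched roughly as below. This is a hypothetical, self-contained example, not the actual change: the class name `HealthCheckRetry`, the method `waitUntilReady`, and the use of the JDK's `java.net.http.HttpClient` are all assumptions made for the sketch, while the real integration tests go through Druid's Netty-based HTTP client and `ITRetryUtil`. The point is only the catch block: connection failures are expected while a node is starting, so they produce a one-line message rather than a full stack trace.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: poll a health endpoint until it answers 200 OK,
// logging expected connection errors without their stack traces.
public class HealthCheckRetry
{
    public static boolean waitUntilReady(String healthUrl, int maxRetries, long retryMillis)
    {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(healthUrl)).GET().build();

        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try {
                HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    return true;
                }
            } catch (IOException e) {
                // Expected while the node is still starting up (connection refused,
                // channel disconnected, ...): log one line, suppress the stack trace.
                System.out.printf("Attempt[%d]: %s still not ready (%s)%n",
                                  attempt, healthUrl, e);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            try {
                Thread.sleep(retryMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

A failure that is *not* in the expected set would still surface normally; only the known startup-phase connection errors are condensed to a single log line.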
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
