jiajunwang commented on a change in pull request #1753:
URL: https://github.com/apache/helix/pull/1753#discussion_r639285735



##########
File path: helix-core/src/test/java/org/apache/helix/monitoring/TestClusterStatusMonitorLifecycle.java
##########
@@ -288,16 +288,13 @@ public void testClusterStatusMonitorLifecycle() throws Exception {
     cleanupControllers();
     // Check if any MBeans leftover.
     // Note that MessageQueueStatus is not bound with controller only. So it will still exist.
-    final QueryExp exp2 = Query.and(
-        Query.not(Query.match(Query.attr("SensorName"), Query.value("MessageQueueStatus.*"))),
-        exp1);
+    final QueryExp exp2 = Query
+        .and(Query.not(Query.match(Query.attr("SensorName"), Query.value("MessageQueueStatus.*"))),
+            exp1);
 
-    // Note, the _asyncTasksThreadPool shutting down logic in GenericHelixController is best effort
-    // there is not guarantee that all threads in the pool is gone. Mossstly they will, but not always.
-    // see https://github.com/apache/helix/issues/1280
     boolean result = TestHelper.verify(() -> ManagementFactory.getPlatformMBeanServer()
         .queryMBeans(new ObjectName("ClusterStatus:*"), exp2).isEmpty(), TestHelper.WAIT_DURATION);
-    Assert.assertTrue(result,
-        "A small chance this may fail due to _asyncThread pool in controller may not shutdown in time. Please check issue 1280 to verify if this is the case.");
+    Assert.assertTrue(result, "Remaining MBeans: " + ManagementFactory.getPlatformMBeanServer()

Review comment:
       That is not clear to me. I would prefer to validate this by keeping track of the unstable tests.
   
   Moreover, the reason I removed the reference to issue 1280 is that the comment is out of date: there is no clear relationship between that issue and the problem here.
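   For reference, the filtering pattern the diff relies on (`Query.not(Query.match(...))` evaluated against an MBean attribute via `MBeanServer.queryMBeans`) can be exercised in isolation. This is a minimal, self-contained sketch, not Helix code: the `Demo` MBean, its `SensorName` values, and the `ObjectName`s are made up for illustration; only the `javax.management` API usage mirrors the test.

   ```java
   import java.lang.management.ManagementFactory;
   import java.util.Set;
   import javax.management.MBeanServer;
   import javax.management.ObjectInstance;
   import javax.management.ObjectName;
   import javax.management.Query;
   import javax.management.QueryExp;

   public class MBeanQueryDemo {
     // Minimal standard MBean exposing a SensorName attribute, standing in
     // for the Helix monitoring MBeans (hypothetical, for illustration only).
     public interface DemoMBean {
       String getSensorName();
     }

     public static class Demo implements DemoMBean {
       private final String _sensorName;
       public Demo(String sensorName) { _sensorName = sensorName; }
       @Override public String getSensorName() { return _sensorName; }
     }

     public static void main(String[] args) throws Exception {
       MBeanServer server = ManagementFactory.getPlatformMBeanServer();
       server.registerMBean(new Demo("MessageQueueStatus.cluster1"),
           new ObjectName("ClusterStatus:name=queue"));
       server.registerMBean(new Demo("ClusterStatus.cluster1"),
           new ObjectName("ClusterStatus:name=status"));

       // Keep only MBeans whose SensorName does NOT match "MessageQueueStatus.*",
       // the same shape as exp2 in the test (minus the extra exp1 conjunct).
       QueryExp notQueueStatus = Query.not(
           Query.match(Query.attr("SensorName"), Query.value("MessageQueueStatus.*")));
       Set<ObjectInstance> remaining =
           server.queryMBeans(new ObjectName("ClusterStatus:*"), notQueueStatus);
       for (ObjectInstance oi : remaining) {
         System.out.println(oi.getObjectName());
       }
     }
   }
   ```

   Note that `Query.match` wildcards (`*`, `?`) apply to the attribute value, so `MessageQueueStatus.*` excludes every sensor under that prefix while leaving the rest of the `ClusterStatus` domain visible; this is why the test must explicitly exclude `MessageQueueStatus` beans, which are not torn down with the controller.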



