This is an automated email from the ASF dual-hosted git repository.

chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kyuubi.git


The following commit(s) were added to refs/heads/master by this push:
     new 2c55a1fda [KYUUBI #4749] Fix flaky test issues in SchedulerPoolSuite
2c55a1fda is described below

commit 2c55a1fdafc4ca1bae2a131bfb84d7d736166585
Author: huangzhir <[email protected]>
AuthorDate: Fri Apr 21 20:44:10 2023 +0800

    [KYUUBI #4749] Fix flaky test issues in SchedulerPoolSuite
    
    ### _Why are the changes needed?_
    
    To fix issue https://github.com/apache/kyuubi/issues/4713, PR
    https://github.com/apache/kyuubi/pull/4714 was submitted, but the test it
    added was flaky: in 50 local runs it passed 38 times and failed 12 times.
    This PR fixes that flakiness.
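
    The race being fixed is the usual one with SparkListener-based assertions:
    listener callbacks are delivered asynchronously on the Spark listener bus,
    so the test thread can reach its assertions before the job events have been
    processed. The sketch below is illustrative only (it is not the suite's
    code; JobEndRecorder and its names are made up) and assumes a plain
    Spark 3.x SparkSession:

    ```scala
    import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}
    import org.apache.spark.sql.SparkSession

    // Records finished job ids; onJobEnd runs on the listener-bus thread,
    // not on the thread that triggered the job.
    class JobEndRecorder extends SparkListener {
      @volatile var finishedJobs: Set[Int] = Set.empty
      override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
        finishedJobs += jobEnd.jobId
      }
    }

    object ListenerRaceSketch {
      def demo(spark: SparkSession): Unit = {
        val recorder = new JobEndRecorder
        spark.sparkContext.addSparkListener(recorder)
        spark.range(0, 10).count() // runs a Spark job and blocks until it finishes
        // The job is done, but its SparkListenerJobEnd event may still be queued
        // on the bus, so this set can intermittently be empty:
        println(recorder.finishedJobs)
        // Draining the bus first (what the waitListenerBus call added by this
        // PR does) removes that race before asserting on listener state.
      }
    }
    ```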
    
    ### _How was this patch tested?_
    - [ ] Add some test cases that check the changes thoroughly including 
negative and positive cases if possible
    
    - [ ] Add screenshots for manual tests if appropriate
    
    - [x] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests)
      locally before making a pull request
    
    Closes #4749 from huangzhir/fixtest-schedulerpool.
    
    Closes #4749
    
    2d2e14069 [huangzhir] call KyuubiSparkContextHelper.waitListenerBus() to 
make sure there are no more events in the Spark event queue
    52a34d287 [fwang12] [KYUUBI #4746] Do not recreate async request executor 
if it has been shut down
    d4558ea82 [huangzhir] Merge branch 'master' into fixtest-schedulerpool
    44c4cefff [huangzhir] make sure the SparkListener has received the finished 
events for job1 and job2.
    8a753e924 [huangzhir] make sure job1 started before job2
    e66ede214 [huangzhir] fix a bug where the SchedulerPoolSuite test could 
report a false positive result
    
    Lead-authored-by: huangzhir <[email protected]>
    Co-authored-by: fwang12 <[email protected]>
    Signed-off-by: Cheng Pan <[email protected]>
---
 .../test/scala/org/apache/kyuubi/engine/spark/SchedulerPoolSuite.scala | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/SchedulerPoolSuite.scala b/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/SchedulerPoolSuite.scala
index 43bd3f4db..a07f7d783 100644
--- a/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/SchedulerPoolSuite.scala
+++ b/externals/kyuubi-spark-sql-engine/src/test/scala/org/apache/kyuubi/engine/spark/SchedulerPoolSuite.scala
@@ -21,6 +21,7 @@ import java.util.concurrent.Executors
 
 import scala.concurrent.duration.SECONDS
 
+import org.apache.spark.KyuubiSparkContextHelper
 import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}
 import org.scalatest.concurrent.PatienceConfiguration.Timeout
 import org.scalatest.time.SpanSugar.convertIntToGrainOfTime
@@ -101,6 +102,8 @@ class SchedulerPoolSuite extends WithSparkSQLEngine with HiveJDBCTestHelper {
       })
       threads.shutdown()
       threads.awaitTermination(20, SECONDS)
+      // make sure the SparkListener has received the finished events for job1 and job2.
+      KyuubiSparkContextHelper.waitListenerBus(spark)
       // job1 should be started before job2
       assert(job1StartTime < job2StartTime)
      // job2 minShare is 2(total resource) so that job1 should be allocated tasks after
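
For readers unfamiliar with the helper: SparkContext.listenerBus and
LiveListenerBus.waitUntilEmpty() are private[spark], which is why Kyuubi keeps a
small bridge object in the org.apache.spark package. A minimal sketch of what
such a helper can look like is below; the actual implementation in the Kyuubi
repository may differ.

```scala
// Sketch only: assumes Spark 3.x, where LiveListenerBus.waitUntilEmpty()
// exists and is reachable from code in the org.apache.spark package.
package org.apache.spark

import org.apache.spark.sql.SparkSession

object KyuubiSparkContextHelper {

  /** Block until the Spark listener bus has delivered all queued events. */
  def waitListenerBus(spark: SparkSession): Unit = {
    // waitUntilEmpty() throws a TimeoutException if the queues do not drain
    // within Spark's default timeout.
    spark.sparkContext.listenerBus.waitUntilEmpty()
  }
}
```

With such a helper in place, the test can call
KyuubiSparkContextHelper.waitListenerBus(spark) after the jobs complete and be
sure the SparkListener has observed their start and end events before any
assertion runs.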
