This is an automated email from the ASF dual-hosted git repository.

rexxiong pushed a commit to branch branch-0.5
in repository https://gitbox.apache.org/repos/asf/celeborn.git


The following commit(s) were added to refs/heads/branch-0.5 by this push:
     new e56f84b7e [CELEBORN-1727][FOLLOWUP] Fix CelebornHashCheckDiskSuite flaky test
e56f84b7e is described below

commit e56f84b7e90473b13c70781ed052f4f8cad6698d
Author: onebox-li <[email protected]>
AuthorDate: Fri Nov 22 17:13:54 2024 +0800

    [CELEBORN-1727][FOLLOWUP] Fix CelebornHashCheckDiskSuite flaky test
    
    ### What changes were proposed in this pull request?
    Fix CelebornHashCheckDiskSuite flaky test.
    
    ### Why are the changes needed?
    Ditto.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    UT.
    
    Closes #2937 from onebox-li/fix-flaky-test.
    
    Authored-by: onebox-li <[email protected]>
    Signed-off-by: Shuang <[email protected]>
    (cherry picked from commit 05ccd96905e90cc3847a733a9520e7de8cfc3ed7)
    Signed-off-by: Shuang <[email protected]>
---
 .../org/apache/celeborn/tests/spark/CelebornHashCheckDiskSuite.scala   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tests/spark-it/src/test/scala/org/apache/celeborn/tests/spark/CelebornHashCheckDiskSuite.scala b/tests/spark-it/src/test/scala/org/apache/celeborn/tests/spark/CelebornHashCheckDiskSuite.scala
index eafabbbe6..7ac2ac48c 100644
--- a/tests/spark-it/src/test/scala/org/apache/celeborn/tests/spark/CelebornHashCheckDiskSuite.scala
+++ b/tests/spark-it/src/test/scala/org/apache/celeborn/tests/spark/CelebornHashCheckDiskSuite.scala
@@ -38,9 +38,8 @@ class CelebornHashCheckDiskSuite extends SparkTestBase {
       CelebornConf.APPLICATION_HEARTBEAT_TIMEOUT.key -> "10s")
     val workerConf = Map(
       CelebornConf.WORKER_STORAGE_DIRS.key -> "/tmp:capacity=1000",
-      CelebornConf.WORKER_HEARTBEAT_TIMEOUT.key -> "10s",
       CelebornConf.WORKER_DISK_RESERVE_SIZE.key -> "0G")
-    workers = setupMiniClusterWithRandomPorts(masterConf, workerConf)._2
+    workers = setupMiniClusterWithRandomPorts(masterConf, workerConf, 2)._2
   }
 
   override def beforeEach(): Unit = {

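Editor's note: the change above pins the mini cluster to exactly 2 workers instead of relying on the helper's default, and drops the short 10s worker heartbeat timeout. The sketch below is a hypothetical, self-contained illustration (the names `setupMiniCluster` and `Worker` are invented for this example, not the actual Celeborn test utility) of why passing the worker count explicitly makes such a suite deterministic: assertions that depend on cluster topology no longer break if the helper's default count changes.

```scala
// Hypothetical sketch: a test helper whose default worker count is an
// environment-dependent assumption. Tests that implicitly rely on the
// default become flaky; passing the count explicitly pins the topology.
object MiniClusterSketch {
  final case class Worker(id: Int)

  // Default of 3 workers is an illustrative assumption only.
  def setupMiniCluster(workerNum: Int = 3): (String, Seq[Worker]) =
    ("master", (1 to workerNum).map(Worker(_)))

  def main(args: Array[String]): Unit = {
    // As in the fix: request the exact number of workers the test asserts on.
    val (_, workers) = setupMiniCluster(2)
    assert(workers.size == 2, s"expected 2 workers, got ${workers.size}")
    println(workers.size)
  }
}
```

Making the dependency explicit at the call site also documents the test's assumption, so a future change to the helper's default cannot silently alter the suite's behavior.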