This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
     new f9feddfbc9de [SPARK-46799][CORE][TESTS] Improve `MasterSuite` to use nanoTime-based appIDs and workerIDs
f9feddfbc9de is described below

commit f9feddfbc9de8e87f7a2e9d8abade7e687335b84
Author: Dongjoon Hyun <dh...@apple.com>
AuthorDate: Mon Jan 22 16:34:26 2024 -0800

    [SPARK-46799][CORE][TESTS] Improve `MasterSuite` to use nanoTime-based appIDs and workerIDs

### What changes were proposed in this pull request?

This PR aims to improve `MasterSuite` to use nanoTime-based appIDs and workerIDs.

### Why are the changes needed?

During testing, I hit a case where two workers had the same ID. This PR prevents duplicated IDs for apps and workers in `MasterSuite`.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs.

```
$ build/sbt "core/testOnly *.MasterSuite"
[info] MasterSuite:
[info] - can use a custom recovery mode factory (443 milliseconds)
[info] - SPARK-46664: master should recover quickly in case of zero workers and apps (38 milliseconds)
[info] - master correctly recover the application (41 milliseconds)
[info] - SPARK-46205: Recovery with Kryo Serializer (27 milliseconds)
[info] - SPARK-46216: Recovery without compression (19 milliseconds)
[info] - SPARK-46216: Recovery with compression (20 milliseconds)
[info] - SPARK-46258: Recovery with RocksDB (306 milliseconds)
[info] - master/worker web ui available (197 milliseconds)
[info] - master/worker web ui available with reverseProxy (30 seconds, 123 milliseconds)
[info] - master/worker web ui available behind front-end reverseProxy (30 seconds, 113 milliseconds)
[info] - basic scheduling - spread out (23 milliseconds)
[info] - basic scheduling - no spread out (14 milliseconds)
[info] - basic scheduling with more memory - spread out (10 milliseconds)
[info] - basic scheduling with more memory - no spread out (10 milliseconds)
[info] - scheduling with max cores - spread out (9 milliseconds)
[info] - scheduling with max cores - no spread out (9 milliseconds)
[info] - scheduling with cores per executor - spread out (9 milliseconds)
[info] - scheduling with cores per executor - no spread out (8 milliseconds)
[info] - scheduling with cores per executor AND max cores - spread out (8 milliseconds)
[info] - scheduling with cores per executor AND max cores - no spread out (7 milliseconds)
[info] - scheduling with executor limit - spread out (8 milliseconds)
[info] - scheduling with executor limit - no spread out (7 milliseconds)
[info] - scheduling with executor limit AND max cores - spread out (8 milliseconds)
[info] - scheduling with executor limit AND max cores - no spread out (9 milliseconds)
[info] - scheduling with executor limit AND cores per executor - spread out (8 milliseconds)
[info] - scheduling with executor limit AND cores per executor - no spread out (13 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - spread out (8 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - no spread out (7 milliseconds)
[info] - scheduling for app with multiple resource profiles (44 milliseconds)
[info] - scheduling for app with multiple resource profiles with max cores (37 milliseconds)
[info] - SPARK-45174: scheduling with max drivers (9 milliseconds)
[info] - SPARK-13604: Master should ask Worker kill unknown executors and drivers (15 milliseconds)
[info] - SPARK-20529: Master should reply the address received from worker (20 milliseconds)
[info] - SPARK-27510: Master should avoid dead loop while launching executor failed in Worker (34 milliseconds)
[info] - All workers on a host should be decommissioned (28 milliseconds)
[info] - No workers should be decommissioned with invalid host (25 milliseconds)
[info] - Only worker on host should be decommissioned (19 milliseconds)
[info] - SPARK-19900: there should be a corresponding driver for the app after relaunching driver (2 seconds, 60 milliseconds)
[info] - assign/recycle resources to/from driver (33 milliseconds)
[info] - assign/recycle resources to/from executor (27 milliseconds)
[info] - resource description with multiple resource profiles (1 millisecond)
[info] - SPARK-45753: Support driver id pattern (7 milliseconds)
[info] - SPARK-45753: Prevent invalid driver id patterns (6 milliseconds)
[info] - SPARK-45754: Support app id pattern (7 milliseconds)
[info] - SPARK-45754: Prevent invalid app id patterns (7 milliseconds)
[info] - SPARK-45785: Rotate app num with modulo operation (114 milliseconds)
[info] - SPARK-45756: Use appName for appId (7 milliseconds)
[info] - SPARK-46353: handleRegisterWorker in STANDBY mode (10 milliseconds)
[info] - SPARK-46353: handleRegisterWorker in RECOVERING mode without workers (7 milliseconds)
[info] - SPARK-46353: handleRegisterWorker in RECOVERING mode with a unknown worker (33 milliseconds)
[info] Run completed in 1 minute, 5 seconds.
[info] Total number of tests run: 50
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 50, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #44839 from dongjoon-hyun/SPARK-46799.
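The rationale above — two workers created within the same millisecond getting the same `currentTimeMillis`-based ID — can be illustrated outside Spark. The following is a minimal, hypothetical sketch in plain Java (the `IdCollisionDemo` class and `distinctIds` helper are mine, not Spark code): it generates IDs from each clock in a tight loop and counts how many are distinct.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

public class IdCollisionDemo {

    // Generate `n` IDs in a tight loop and count how many are distinct.
    static int distinctIds(int n, Supplier<String> idGen) {
        Set<String> ids = new HashSet<>();
        for (int i = 0; i < n; i++) {
            ids.add(idGen.get());
        }
        return ids.size();
    }

    public static void main(String[] args) {
        int n = 1000;
        // Millisecond wall clock: many loop iterations fall into the same
        // millisecond, so most IDs collide.
        int millisDistinct = distinctIds(n, () -> Long.toString(System.currentTimeMillis()));
        // Monotonic nanosecond clock: far finer resolution, so consecutive
        // reads almost never repeat.
        int nanoDistinct = distinctIds(n, () -> Long.toString(System.nanoTime()));
        System.out.println("currentTimeMillis distinct: " + millisDistinct + " / " + n);
        System.out.println("nanoTime distinct: " + nanoDistinct + " / " + n);
    }
}
```

On a typical JVM the millisecond-based run yields only a handful of distinct IDs out of 1000, while the nanosecond-based run yields nearly all distinct values. Note that `nanoTime` still does not guarantee uniqueness; it merely makes collisions unlikely enough for test fixtures like these.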
Authored-by: Dongjoon Hyun <dh...@apple.com>
Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala b/core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala
index e7146747b843..aefbe908b50b 100644
--- a/core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala
+++ b/core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala
@@ -957,7 +957,7 @@ class MasterSuite extends SparkFunSuite
     val desc = new ApplicationDescription(
       "test", maxCores, null, "", rp, None, None, initialExecutorLimit)
-    val appId = System.currentTimeMillis.toString
+    val appId = System.nanoTime().toString
     val endpointRef = mock(classOf[RpcEndpointRef])
     val mockAddress = mock(classOf[RpcAddress])
     when(endpointRef.address).thenReturn(mockAddress)
@@ -966,7 +966,7 @@ class MasterSuite extends SparkFunSuite
   }

   private def makeWorkerInfo(memoryMb: Int, cores: Int): WorkerInfo = {
-    val workerId = System.currentTimeMillis.toString
+    val workerId = System.nanoTime().toString
     val endpointRef = mock(classOf[RpcEndpointRef])
     val mockAddress = mock(classOf[RpcAddress])
     when(endpointRef.address).thenReturn(mockAddress)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org