attilapiros commented on pull request #31513: URL: https://github.com/apache/spark/pull/31513#issuecomment-789621726
@holdenk I found [a similar case](https://github.com/attilapiros/spark/blob/546d2eb5d46813a14c7bd30113fb6bb038cdd2fc/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocatorSuite.scala#L332-L335), which expects resource requests to be created in a specific order for multiple resource profiles, in an earlier test, `SPARK-33288: multiple resource profiles`:

```scala
podsAllocatorUnderTest.setTotalExpectedExecutors(Map(defaultProfile -> 1, rp -> 2))
verify(podOperations).create(podWithAttachedContainerForId(1, defaultProfile.id))
verify(podOperations).create(podWithAttachedContainerForId(2, rp.id))
verify(podOperations).create(podWithAttachedContainerForId(3, rp.id))
```

Nobody has complained about flakiness in this old test, so I started to wonder whether these cases can really cause any flakiness. If the Map implementation backing these small Maps uses a fixed ordering, we are fine here. I only ran into the problem after making many code changes to make upscaling more progressive (a possible next PR). But to be on the safe side, it would be better to use a deterministic ordering [here](https://github.com/attilapiros/spark/blob/bb578d296e77af20b4c3a006ac0349a84630fdad/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala#L201-L202):

```scala
// The order we request executors for each ResourceProfile is not guaranteed.
totalExpectedExecutorsPerResourceProfileId.asScala.foreach { case (rpId, targetNum) =>
```

WDYT?
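As an aside, one way to make the iteration deterministic is to sort the entries by resource profile id before iterating. This is a minimal sketch, not the actual Spark code; the map contents here are made up for illustration:

```scala
object DeterministicOrderSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical map of resource-profile id -> target executor count.
    // Small immutable Scala Maps (up to 4 entries) use dedicated classes
    // with a fixed iteration order, but larger ones do not guarantee it.
    val targets = Map(3 -> 2, 1 -> 1, 7 -> 4, 0 -> 1, 5 -> 3)

    // Sorting by the profile id yields the same order on every run.
    val deterministic = targets.toSeq.sortBy { case (rpId, _) => rpId }

    deterministic.foreach { case (rpId, targetNum) =>
      println(s"requesting $targetNum executors for profile $rpId")
    }
  }
}
```

With the sort in place, tests asserting a specific `create(...)` order would no longer depend on the Map implementation's internal ordering.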
