Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/22298#discussion_r216113001
--- Diff: resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/PythonTestsSuite.scala ---
@@ -72,12 +74,33 @@ private[spark] trait PythonTestsSuite { k8sSuite: KubernetesSuite =>
isJVM = false,
pyFiles = Some(PYSPARK_CONTAINER_TESTS))
}
+
+ test("Run PySpark with memory customization", k8sTestTag) {
+ sparkAppConf
+ .set("spark.kubernetes.container.image", pySparkDockerImage)
+ .set("spark.kubernetes.pyspark.pythonVersion", "3")
+ .set("spark.kubernetes.memoryOverheadFactor", s"$memOverheadConstant")
+ .set("spark.executor.pyspark.memory", s"${additionalMemory}m")
+ .set("spark.python.worker.reuse", "false")
--- End diff --
I don't believe this should be set. Worker reuse is on by default in most
systems, so I'm not sure this test should depend on worker reuse being false.
Per @rdblue's investigation, this shouldn't impact this code path (and if it
does, we need to re-open that investigation).
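
As a sketch of what the comment suggests (assuming `sparkAppConf`,
`pySparkDockerImage`, `memOverheadConstant`, and `additionalMemory` are defined
by the surrounding `PythonTestsSuite` as in the PR), the test configuration
would simply drop the explicit `spark.python.worker.reuse` override and keep
the memory settings:

```scala
// Sketch only: identifiers below come from the PR's PythonTestsSuite, not
// from a self-contained program.
sparkAppConf
  .set("spark.kubernetes.container.image", pySparkDockerImage)
  .set("spark.kubernetes.pyspark.pythonVersion", "3")
  .set("spark.kubernetes.memoryOverheadFactor", s"$memOverheadConstant")
  .set("spark.executor.pyspark.memory", s"${additionalMemory}m")
// spark.python.worker.reuse is left at its default (true), as suggested above.
```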
---