Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/22298#discussion_r214409455
--- Diff: resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/PythonTestsSuite.scala ---
@@ -72,12 +72,33 @@ private[spark] trait PythonTestsSuite { k8sSuite: KubernetesSuite =>
      isJVM = false,
      pyFiles = Some(PYSPARK_CONTAINER_TESTS))
  }
+
+ test("Run PySpark with memory customization", k8sTestTag) {
+ sparkAppConf
+ .set("spark.kubernetes.container.image",
s"${getTestImageRepo}/spark-py:${getTestImageTag}")
+ .set("spark.kubernetes.pyspark.pythonVersion", "3")
+ .set("spark.kubernetes.memoryOverheadFactor",
s"$memOverheadConstant")
--- End diff --
Do we expect people who configure the rlimit advanced feature to also set
`spark.kubernetes.memoryOverheadFactor` to a different value? If so, we should
call it out in the docs. (Note: I think it would make sense for folks to set
this to a lower value, so I _think_ that would be the expected behaviour and we
should document it, but I'm open to suggestions.)
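
Roughly the kind of docs example I have in mind (just a sketch on my part,
assuming the rlimit feature here is `spark.executor.pyspark.memory`, and the
0.1 below is purely illustrative rather than a recommendation):

```scala
// Sketch only: pairing the Python worker memory limit (rlimit) with a
// lowered k8s memory overhead factor. Values are illustrative.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Cap each executor's Python worker memory (the rlimit feature).
  .set("spark.executor.pyspark.memory", "2g")
  // With Python memory accounted for explicitly, a user might lower the
  // non-JVM overhead factor (0.4 by default, if I remember right) back
  // toward the JVM default of 0.1.
  .set("spark.kubernetes.memoryOverheadFactor", "0.1")
```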
---