Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23055#discussion_r234081475
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
@@ -74,8 +74,13 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
private val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
// each python worker gets an equal part of the allocation. the worker pool will grow to the
// number of concurrent tasks, which is determined by the number of cores in this executor.
- private val memoryMb = conf.get(PYSPARK_EXECUTOR_MEMORY)
+ private val memoryMb = if (Utils.isWindows) {
--- End diff ---
I see. I think we're looking at this from slightly different points of view. What I was trying to do is this: we declare this configuration unsupported on Windows, meaning we disable it on Windows from the start, on the JVM side - because it's the JVM that launches the Python workers. So I was trying to leave the control to the JVM.
> It seems brittle to disable this on the JVM side and rely on it here. Can we also set a flag in the ImportError case and also check that here?
However, in a way, it's a bit odd to call this brittle, because we're already relying on that JVM-side behavior.
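
For context, here is a minimal sketch of the JVM-side guard being discussed. It is not the PR's exact diff (which is truncated above): the helper name `resolvePythonWorkerMemoryMb` is hypothetical, and the import paths assume Spark's internal config package as it stood around this PR, while `Utils.isWindows` and `PYSPARK_EXECUTOR_MEMORY` come from the hunk itself.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.internal.config._
import org.apache.spark.util.Utils

// Hypothetical helper illustrating the approach: resolve the per-worker
// memory limit only on platforms where Python's `resource` module exists.
def resolvePythonWorkerMemoryMb(conf: SparkConf): Option[Long] = {
  if (Utils.isWindows) {
    // `resource` is unavailable on Windows, so the Python worker cannot
    // enforce the limit; treat the configuration as disabled from the
    // JVM side, which is the process that launches the workers.
    None
  } else {
    // PYSPARK_EXECUTOR_MEMORY is an optional config entry, so this
    // yields Option[Long].
    conf.get(PYSPARK_EXECUTOR_MEMORY)
  }
}
```

Keeping the decision here, rather than in the worker's ImportError handler, is the "leave the control to the JVM" point above: the JVM decides once whether the limit applies, instead of each worker signalling back.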
---