Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23055#discussion_r234080578
  
    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
    @@ -74,8 +74,13 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
       private val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
       // each python worker gets an equal part of the allocation. the worker pool will grow to the
       // number of concurrent tasks, which is determined by the number of cores in this executor.
    -  private val memoryMb = conf.get(PYSPARK_EXECUTOR_MEMORY)
    +  private val memoryMb = if (Utils.isWindows) {
    --- End diff ---
    
    I don't think this is necessary. If `resource` can't be imported for any
    reason, then memory simply won't be limited on the Python side. But the JVM
    side shouldn't be what determines whether that happens. The JVM should do
    everything the same way -- including requesting memory from schedulers like
    YARN -- because that space should still be allocated as Python memory, even
    if Python can't self-limit.
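
    To make the point concrete, here is a minimal sketch (not the PR's code and
    not the actual pyspark worker) of what "memory will not be limited in
    python" looks like on the worker side: the `resource` import is guarded, and
    the limit is silently skipped when the module is unavailable, while the
    JVM-side config read and the scheduler request stay the same on every
    platform. The helper name and the use of `RLIMIT_AS` are illustrative
    assumptions.

    ```python
    # Sketch only: a hypothetical worker-side guard, not Spark's actual code.
    try:
        import resource  # Unix-only module; unavailable e.g. on Windows
    except ImportError:
        resource = None

    def maybe_set_memory_limit(limit_mb):
        """Best-effort address-space limit; a no-op when `resource` is missing."""
        if resource is None or limit_mb <= 0:
            # Can't self-limit here, but the JVM still accounts for the memory.
            return
        new_soft = limit_mb * 1024 * 1024
        soft, hard = resource.getrlimit(resource.RLIMIT_AS)
        if soft == resource.RLIM_INFINITY or new_soft < soft:
            resource.setrlimit(resource.RLIMIT_AS, (new_soft, hard))
    ```

    Because the guard lives entirely on the Python side, the JVM can keep
    reading `PYSPARK_EXECUTOR_MEMORY` unconditionally and the space is still
    reserved from the scheduler.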


---
