Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23055#discussion_r236485186
  
    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
    @@ -74,8 +74,13 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
       private val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
       // each python worker gets an equal part of the allocation. the worker pool will grow to the
       // number of concurrent tasks, which is determined by the number of cores in this executor.
    -  private val memoryMb = conf.get(PYSPARK_EXECUTOR_MEMORY)
    +  private val memoryMb = if (Utils.isWindows) {
    --- End diff ---
    
    I don't understand what you're getting at here.
    
    The point Ryan and I are making is that there's no point in having this
    check on the JVM side, because the Python side already handles it: the
    memory limit doesn't work on Windows, and the updated Python code accounts
    for that.
    
    The only shortcoming here is passing an env variable to Python that won't
    be used if you're on Windows. That's trivial and not a big deal.
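    
    For reference, here's a minimal sketch of the kind of guard the Python
    worker side can use (the `has_resource_module` flag and the
    `PYSPARK_EXECUTOR_MEMORY_MB` variable name follow PySpark's pattern, but
    treat this as illustrative rather than the exact code in this PR):
    
    ```python
    import os
    
    # The 'resource' module is Unix-only, so the import itself is the
    # platform check: on Windows it fails and the flag stays False.
    try:
        import resource
        has_resource_module = True
    except ImportError:
        has_resource_module = False
    
    # The JVM can pass the limit unconditionally; if the resource module
    # is unavailable, the env variable is simply read and never acted on.
    memory_limit_mb = int(os.environ.get("PYSPARK_EXECUTOR_MEMORY_MB", "-1"))
    if memory_limit_mb > 0 and has_resource_module:
        new_limit = memory_limit_mb * 1024 * 1024  # MiB -> bytes
        (soft_limit, _hard_limit) = resource.getrlimit(resource.RLIMIT_AS)
        if soft_limit == resource.RLIM_INFINITY or new_limit < soft_limit:
            resource.setrlimit(resource.RLIMIT_AS, (new_limit, new_limit))
    ```
    
    That's why the `Utils.isWindows` check in the JVM code is redundant: on
    Windows the Python side already degrades to a clean no-op.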

