Github user kokes commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21092#discussion_r193805391
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala ---
    @@ -154,6 +176,24 @@ private[spark] object Config extends Logging {
          .checkValue(interval => interval > 0, s"Logging interval must be a positive time value.")
           .createWithDefaultString("1s")
     
    +  val MEMORY_OVERHEAD_FACTOR =
    +    ConfigBuilder("spark.kubernetes.memoryOverheadFactor")
    +      .doc("This sets the Memory Overhead Factor that will allocate memory 
to non-JVM jobs " +
    +        "which in the case of JVM tasks will default to 0.10 and 0.40 for 
non-JVM jobs")
    +      .doubleConf
    +      .checkValue(mem_overhead => mem_overhead >= 0 && mem_overhead < 1,
    +        "Ensure that memory overhead is a double between 0 --> 1.0")
    +      .createWithDefault(0.1)
    +
    +  val PYSPARK_MAJOR_PYTHON_VERSION =
    +    ConfigBuilder("spark.kubernetes.pyspark.pythonversion")
    +      .doc("This sets the python version. Either 2 or 3. (Python2 or 
Python3)")
    +      .stringConf
    +      .checkValue(pv => List("2", "3").contains(pv),
    +        "Ensure that Python Version is either Python2 or Python3")
    +      .createWithDefault("2")
    --- End diff --
    
    There is only ~18 months of support left for Python 2. Python 3 has been 
around for 10 years and unless there’s a good reason, I think it should be 
the default. 
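
    For illustration, here is a minimal user-side sketch (not part of this PR) of opting in to Python 3 explicitly while "2" stays the built-in default, assuming the two config keys land as written in the diff above:

        import org.apache.spark.SparkConf

        // Hypothetical job configuration; both keys are the ones added in this diff.
        val conf = new SparkConf()
          // Opt in to Python 3 explicitly while "2" remains the default.
          .set("spark.kubernetes.pyspark.pythonversion", "3")
          // Non-JVM (PySpark) job, so use the larger overhead factor described in the doc string.
          .set("spark.kubernetes.memoryOverheadFactor", "0.4")

    If 3 were the default, the first .set call would be unnecessary for most new jobs, which is the point of the comment above.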

