Github user ifilonenko commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22298#discussion_r214411102
  
    --- Diff: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala
 ---
    @@ -51,6 +51,7 @@ private[spark] class BasicDriverFeatureStep(
         .get(DRIVER_MEMORY_OVERHEAD)
         .getOrElse(math.max((conf.get(MEMORY_OVERHEAD_FACTOR) * 
driverMemoryMiB).toInt,
           MEMORY_OVERHEAD_MIN_MIB))
    +  // TODO: Have memory limit checks on driverMemory
    --- End diff ---
    
    I wanted to get an opinion from people (@mccheah) on whether we want to 
let the K8s API handle memory limits (via ResourceQuota limit errors), or 
whether we want to catch this in a Spark exception (if we were to include a 
configuration for memory limits).
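
    To make the trade-off concrete, here is a minimal sketch (hypothetical, 
not Spark's actual API) of what the second option could look like: it mirrors 
the overhead computation from the diff and fails fast with an exception 
instead of waiting for the K8s API server to reject the pod. The object name, 
the `limitMiB` parameter, and the exception type are all assumptions for 
illustration only.

    ```scala
    // Hypothetical sketch of a Spark-side memory limit check.
    object DriverMemoryCheck {
      // Defaults matching the values used in BasicDriverFeatureStep.
      val MEMORY_OVERHEAD_MIN_MIB = 384L
      val MEMORY_OVERHEAD_FACTOR = 0.10

      // Mirrors the computation in the diff: overhead is the larger of
      // factor * driver memory and the fixed minimum.
      def overheadMiB(driverMemoryMiB: Long): Long =
        math.max((MEMORY_OVERHEAD_FACTOR * driverMemoryMiB).toLong,
          MEMORY_OVERHEAD_MIN_MIB)

      // Hypothetical check: fail fast in Spark rather than letting the
      // K8s API surface a ResourceQuota limit error at pod creation.
      def checkLimit(driverMemoryMiB: Long, limitMiB: Long): Unit = {
        val total = driverMemoryMiB + overheadMiB(driverMemoryMiB)
        if (total > limitMiB) {
          throw new IllegalArgumentException(
            s"Driver memory $total MiB (heap + overhead) exceeds " +
              s"configured limit $limitMiB MiB")
        }
      }
    }
    ```

    The upside of this approach is an early, readable error before any pod 
is submitted; the downside is duplicating a check the API server already 
enforces.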

