Alexander Yerenkow created SPARK-43496:
------------------------------------------

             Summary: Have a separate config for Memory limits for kubernetes 
pods
                 Key: SPARK-43496
                 URL: https://issues.apache.org/jira/browse/SPARK-43496
             Project: Spark
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 3.4.0
            Reporter: Alexander Yerenkow


The whole memory allocated to the JVM is set in the pod resources as both the 
request and the limit.

This means there is no way to use more memory for burst-like jobs in a shared 
environment.

For example, if a Spark job uses an external process (outside the JVM) to 
access data, a bit of extra memory is required for that, and being able to 
configure a higher memory limit would be useful.

Another thought: a way to configure the pod memory request independently of 
the JVM memory could also be a valid use case.
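To illustrate the distinction, Kubernetes itself already allows a pod's memory limit to exceed its request; the sketch below shows the kind of executor pod resources block the proposal would enable (the specific values are illustrative, and today Spark emits request and limit with the same value):

```yaml
# Hypothetical executor pod resources under the proposed config.
resources:
  requests:
    memory: "4Gi"   # what the scheduler reserves for the pod
  limits:
    memory: "6Gi"   # burst headroom; exceeding this gets the pod OOM-killed
```

With a spec like this, the pod is scheduled against the 4Gi request but may burst up to 6Gi on nodes with spare memory, which is exactly the behavior a separate limit config would unlock.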


Github PR: [https://github.com/apache/spark/pull/41067]

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
