beliefer commented on a change in pull request #25309: [SPARK-28577][YARN]Resource capability requested for each executor add offHeapMemorySize
URL: https://github.com/apache/spark/pull/25309#discussion_r319339008
 
 

 ##########
 File path: docs/configuration.md
 ##########
 @@ -250,17 +250,17 @@ of the most common options to set are:
  <td><code>spark.executor.memoryOverhead</code></td>
   <td>executorMemory * 0.10, with minimum of 384 </td>
   <td>
-    Amount of non-heap memory to be allocated per executor process in cluster mode, in MiB unless
+    Amount of additional memory to be allocated per executor process in cluster mode, in MiB unless
     otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
     other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
     This option is currently supported on YARN and Kubernetes.
     <br/>
-    <em>Note:</em> Non-heap memory includes off-heap memory
-    (when <code>spark.memory.offHeap.enabled=true</code>) and memory used by other executor processes
-    (e.g. python process that goes with a PySpark executor) and memory used by other non-executor
-    processes running in the same container. The maximum memory size of container to running executor
-    is determined by the sum of <code>spark.executor.memoryOverhead</code> and
-    <code>spark.executor.memory</code>.
+    <em>Note:</em> Additional memory includes PySpark executor memory
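
For reference, here is an illustrative set of the properties this note refers to. The values are invented, and using spark.memory.offHeap.size and spark.executor.pyspark.memory for the off-heap and PySpark components is an assumption made for the example, not part of this change:

    # Illustrative values only, in spark-defaults.conf style.
    spark.executor.memory            4g
    # If unset, the overhead defaults to executorMemory * 0.10, with a minimum of 384 MiB.
    spark.executor.memoryOverhead    512m
    spark.memory.offHeap.enabled     true
    spark.memory.offHeap.size        1g
    spark.executor.pyspark.memory    1g

Whether the off-heap size is expected to fit inside the overhead or is requested as its own component of the executor container is the behavior this documentation change and the PR discuss.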
 
 Review comment:
   I understand the intent of this PR, and the new idea may be one way to do it. As far as I know, though, the original decision was to unify all the different parts.
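
For illustration, here is a minimal sketch of the "sum of the different parts" accounting, assuming MiB units; this is not the actual YarnAllocator code, and all names in it are hypothetical:

    object ExecutorContainerMemory {
      // Documented default: executorMemory * 0.10, with a minimum of 384 MiB.
      def defaultOverheadMiB(executorMemoryMiB: Long): Long =
        math.max((executorMemoryMiB * 0.10).toLong, 384L)

      // Sum of the per-executor memory parts a cluster manager could be asked for.
      // Which parts are included (off-heap in particular) is what the PR changes.
      def containerMemoryMiB(
          executorMemoryMiB: Long,
          overheadMiB: Long,
          pysparkMemoryMiB: Long,
          offHeapMemoryMiB: Long): Long =
        executorMemoryMiB + overheadMiB + pysparkMemoryMiB + offHeapMemoryMiB

      def main(args: Array[String]): Unit = {
        val executor = 4096L                        // spark.executor.memory
        val overhead = defaultOverheadMiB(executor) // 409 MiB for this executor size
        val pyspark  = 1024L                        // spark.executor.pyspark.memory
        val offHeap  = 1024L                        // spark.memory.offHeap.size
        println(containerMemoryMiB(executor, overhead, pyspark, offHeap)) // prints 6553
      }
    }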
