beliefer commented on a change in pull request #24671: [SPARK-27811][Core][Docs] Improve docs about spark.driver.memoryOverhead and spark.executor.memoryOverhead.
URL: https://github.com/apache/spark/pull/24671#discussion_r287909588
 
 

 ##########
 File path: docs/configuration.md
 ##########
 @@ -181,10 +181,16 @@ of the most common options to set are:
   <td><code>spark.driver.memoryOverhead</code></td>
   <td>driverMemory * 0.10, with minimum of 384 </td>
   <td>
-    The amount of off-heap memory to be allocated per driver in cluster mode, in MiB unless
-    otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
-    other native overheads, etc. This tends to grow with the container size (typically 6-10%).
-    This option is currently supported on YARN, Mesos and Kubernetes.
+    Amount of non-heap memory to be allocated per driver process in cluster mode
+    (e.g. YARN, Mesos, and Kubernetes), in MiB unless otherwise specified. This is memory that
+    accounts for things like VM overheads, interned strings, other native overheads, etc.
+    This tends to grow with the container size (typically 6-10%).
+    <em>Note:</em> Non-heap memory includes off-heap memory.
 
 Review comment:
   OK. I changed.
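For reference, the default value documented in the table above (driverMemory * 0.10, with a minimum of 384 MiB) can be sketched as below. The function name and constants are illustrative only, not Spark's internal API:

```python
# Illustrative sketch of the documented default for spark.driver.memoryOverhead:
# 10% of driver memory, floored at 384 MiB. Names here are hypothetical,
# chosen for readability; they are not identifiers from the Spark codebase.
MEMORY_OVERHEAD_FACTOR = 0.10
MEMORY_OVERHEAD_MIN_MIB = 384

def default_memory_overhead_mib(driver_memory_mib: int) -> int:
    """Return the default memoryOverhead in MiB for a given driver memory size."""
    return max(int(driver_memory_mib * MEMORY_OVERHEAD_FACTOR), MEMORY_OVERHEAD_MIN_MIB)

print(default_memory_overhead_mib(1024))  # 1 GiB driver -> 384 (minimum applies)
print(default_memory_overhead_mib(8192))  # 8 GiB driver -> 819
```

This makes concrete why the overhead "tends to grow with the container size": above roughly 3.75 GiB of driver memory, the 10% factor dominates the 384 MiB floor.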

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

