beliefer commented on a change in pull request #24671:
[SPARK-27811][Core][Docs]Improve docs about spark.driver.memoryOverhead and
spark.executor.memoryOverhead.
URL: https://github.com/apache/spark/pull/24671#discussion_r288371937
##########
File path: docs/configuration.md
##########
@@ -1233,6 +1246,9 @@ Apart from these, the following properties are also available, and may be useful
<td>
If true, Spark will attempt to use off-heap memory for certain operations.
If off-heap memory use is enabled, then <code>spark.memory.offHeap.size</code> must be positive.
+ <em>Note:</em> If off-heap memory use is enabled or off-heap memory size is increased,
Review comment:
OK, I have adjusted the documentation according to your reminder.
When Spark runs in cluster mode (e.g. on YARN), `spark.memory.offHeap.size`
is not automatically accounted for in the container's total memory request.
So as the off-heap memory size grows, users must increase the non-heap
memory (i.e. the memory overhead setting) manually.
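The sizing arithmetic behind this comment can be sketched as follows. This is an illustrative helper, not Spark source code; the `max(10% of heap, 384 MiB)` default overhead is from Spark's documentation, and YARN's rounding of allocations to its scheduler increment is ignored for simplicity:

```python
# Illustrative sketch (not Spark code): how a YARN container request is
# composed, and why off-heap memory must be folded into memoryOverhead
# by hand in the Spark versions this PR documents.

def container_request_mb(executor_memory_mb, memory_overhead_mb=None):
    """Rough YARN container size for one executor.

    spark.memory.offHeap.size is NOT added automatically, so the user
    must raise spark.executor.memoryOverhead to cover it.
    """
    if memory_overhead_mb is None:
        # Spark's documented default: max(10% of executor memory, 384 MiB).
        memory_overhead_mb = max(int(executor_memory_mb * 0.10), 384)
    return executor_memory_mb + memory_overhead_mb

# With a 4 GiB heap and 1 GiB of off-heap memory, the overhead should be
# raised manually so the off-heap usage fits inside the container:
heap_mb = 4096
offheap_mb = 1024
default_overhead = max(int(heap_mb * 0.10), 384)   # 409 MiB
manual_overhead = default_overhead + offheap_mb    # 1433 MiB
print(container_request_mb(heap_mb, manual_overhead))
```

With the default overhead the container would be 4505 MiB and the 1 GiB of off-heap use could push the process past the container limit; raising the overhead manually requests 5529 MiB instead.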
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services