This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 0d77d57  [MINOR][DOCS] Add a note that 'spark.executor.pyspark.memory' is dependent on 'resource'
0d77d57 is described below

commit 0d77d575e14e535fbe29b42e5612f3ddc64d42f4
Author: Hyukjin Kwon <gurwls...@apache.org>
AuthorDate: Thu Jan 31 15:51:40 2019 +0800

    [MINOR][DOCS] Add a note that 'spark.executor.pyspark.memory' is dependent on 'resource'
    
    ## What changes were proposed in this pull request?
    
    This PR adds a note stating explicitly that `spark.executor.pyspark.memory` depends on the `resource` module's behaviour for Python memory usage.
    
    For instance, I at least see some difference at https://github.com/apache/spark/pull/21977#discussion_r251220966
    
    ## How was this patch tested?
    
    Manually built the doc.
    
    Closes #23664 from HyukjinKwon/note-resource-dependent.
    
    Authored-by: Hyukjin Kwon <gurwls...@apache.org>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 docs/configuration.md | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index 806e16a..5b5891e 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -190,8 +190,10 @@ of the most common options to set are:
    and it is up to the application to avoid exceeding the overhead memory space
    shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory
     is added to executor resource requests.
-
-    NOTE: Python memory usage may not be limited on platforms that do not support resource limiting, such as Windows.
+    <br/>
+    <em>Note:</em> This feature is dependent on Python's `resource` module; therefore, the behaviors and
+    limitations are inherited. For instance, Windows does not support resource limiting and actual
+    resource is not limited on MacOS.
   </td>
 </tr>
 <tr>
@@ -223,7 +225,8 @@ of the most common options to set are:
    stored on disk. This should be on a fast, local disk in your system. It can also be a
     comma-separated list of multiple directories on different disks.
 
-    NOTE: In Spark 1.0 and later this will be overridden by SPARK_LOCAL_DIRS (Standalone), MESOS_SANDBOX (Mesos) or
+    <br/>
+    <em>Note:</em> This will be overridden by SPARK_LOCAL_DIRS (Standalone), 
MESOS_SANDBOX (Mesos) or
     LOCAL_DIRS (YARN) environment variables set by the cluster manager.
   </td>
 </tr>

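For context on the behaviour the new note describes, here is a minimal, hypothetical sketch (not PySpark's actual worker code) of how a Python process can cap its own memory via the `resource` module; the function name and return convention are illustrative assumptions.

```python
def set_worker_memory_limit(limit_bytes):
    """Hypothetical sketch: cap this process's address space, roughly the
    way a PySpark worker could when spark.executor.pyspark.memory is set.

    Returns True if a limit was applied, False when the platform cannot
    apply one (e.g. Windows, where the resource module does not exist).
    """
    try:
        import resource  # POSIX-only module; ImportError on Windows
    except ImportError:
        return False
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    # Lower only the soft limit; the hard limit is left untouched.
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))
    return True
```

Even where `setrlimit` succeeds, enforcement is platform-dependent: on MacOS the call may be accepted without the limit actually being enforced, which is the kind of inherited limitation the doc note warns about.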

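The second hunk's note on precedence can be illustrated with a hedged spark-submit sketch (the paths and application name below are made up for illustration): when the cluster manager exports SPARK_LOCAL_DIRS, the spark.local.dir setting is ignored.

```shell
# Hypothetical example: the environment variable set by the cluster
# manager takes precedence over spark.local.dir.
export SPARK_LOCAL_DIRS=/mnt/fast-disk1,/mnt/fast-disk2

# The --conf below is silently overridden by SPARK_LOCAL_DIRS above;
# scratch files land under /mnt/fast-disk1 and /mnt/fast-disk2 instead.
spark-submit \
  --conf spark.local.dir=/tmp/spark-scratch \
  my_app.py
```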
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
