HyukjinKwon commented on a change in pull request #23664: [MINOR][DOCS] Add a note that 'spark.executor.pyspark.memory' is dependent on 'resource'
URL: https://github.com/apache/spark/pull/23664#discussion_r251273045
 
 

 ##########
 File path: docs/configuration.md
 ##########
 @@ -190,8 +190,10 @@ of the most common options to set are:
     and it is up to the application to avoid exceeding the overhead memory space
     shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory
     is added to executor resource requests.
 -
 -    NOTE: Python memory usage may not be limited on platforms that do not support resource limiting, such as Windows.
 +    <br/>
 +    <em>Note:</em> Python memory usage may not be limited on platforms that do not support resource limiting, such as Windows.
 +    <br/>
 +    <em>Note:</em> Python memory usage is dependent on Python's 'resource' module; therefore, the behaviors and limitations are inherited.
 
 Review comment:
   One difference between Windows and the current case is that the log output says the memory limit is set:
   
   ```
   Current mem limits: 9223372036854775807 of max 9223372036854775807
   
   Setting mem limits to 1048576 of max 1048576
   ```
   
   but the limit is not actually set.
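   
   As a minimal sketch of that mismatch (not the actual PySpark worker code; the function name and the 1 GiB value are illustrative), assuming a POSIX platform where the `resource` module is available:
   
   ```python
   import resource  # POSIX-only; unavailable on Windows, hence the doc note
   
   def try_set_memory_limit(limit_bytes):
       # Report the current address-space limit, similar to the worker's log.
       soft, hard = resource.getrlimit(resource.RLIMIT_AS)
       print("Current mem limits: %d of max %d" % (soft, hard))
   
       # Lowering the limit succeeds on most POSIX systems, and getrlimit
       # reflects the new values afterwards...
       resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
       soft, hard = resource.getrlimit(resource.RLIMIT_AS)
       print("Setting mem limits to %d of max %d" % (soft, hard))
   
       # ...but on some platforms the kernel does not enforce RLIMIT_AS,
       # so allocations beyond the limit can still succeed: the log claims
       # a limit that is never applied in practice.
   
   try_set_memory_limit(1 << 30)  # 1 GiB, illustrative only
   ```
   
   In other words, a successful `setrlimit` call (and a matching `getrlimit` readback) is not proof that the platform enforces the limit, which is why the quoted log lines can be misleading.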
