Repository: spark
Updated Branches:
  refs/heads/master e3e2b5da3 -> 960298ee6


[SPARK-20858][DOC][MINOR] Document ListenerBus event queue size

## What changes were proposed in this pull request?

This change adds the new configuration option 
`spark.scheduler.listenerbus.eventqueue.capacity` to the configuration docs to 
specify the capacity of the Spark listener bus event queue. The default value 
is 10000.

This is doc PR for 
[SPARK-15703](https://issues.apache.org/jira/browse/SPARK-15703).

I added the option to the `Scheduling` section; however, it might be better 
suited to the `Spark UI` section.
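For context (this is not part of the patch), the documented option is set like any other Spark property; a minimal sketch, assuming a standard `spark-submit` invocation (the application class and jar names below are hypothetical):

```shell
# Raise the listener bus event queue capacity when "Dropped events" warnings
# appear in the driver logs; larger values cost more driver memory.
spark-submit \
  --conf spark.scheduler.listenerbus.eventqueue.capacity=20000 \
  --class com.example.MyApp \
  my-app.jar

# Alternatively, persist it in conf/spark-defaults.conf:
# spark.scheduler.listenerbus.eventqueue.capacity  20000
```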

## How was this patch tested?

Manually verified correct rendering of configuration option.

Author: sadikovi <[email protected]>
Author: Ivan Sadikov <[email protected]>

Closes #18476 from sadikovi/SPARK-20858.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/960298ee
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/960298ee
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/960298ee

Branch: refs/heads/master
Commit: 960298ee66b9b8a80f84df679ce5b4b3846267f4
Parents: e3e2b5d
Author: sadikovi <[email protected]>
Authored: Wed Jul 5 14:40:44 2017 +0100
Committer: Sean Owen <[email protected]>
Committed: Wed Jul 5 14:40:44 2017 +0100

----------------------------------------------------------------------
 docs/configuration.md | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/960298ee/docs/configuration.md
----------------------------------------------------------------------
diff --git a/docs/configuration.md b/docs/configuration.md
index bd6a1f9..c785a66 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -725,7 +725,7 @@ Apart from these, the following properties are also available, and may be useful
  <td><code>spark.ui.retainedJobs</code></td>
  <td>1000</td>
  <td>
-    How many jobs the Spark UI and status APIs remember before garbage collecting. 
+    How many jobs the Spark UI and status APIs remember before garbage collecting.
    This is a target maximum, and fewer elements may be retained in some circumstances.
  </td>
</tr>
@@ -733,7 +733,7 @@ Apart from these, the following properties are also available, and may be useful
  <td><code>spark.ui.retainedStages</code></td>
  <td>1000</td>
  <td>
-    How many stages the Spark UI and status APIs remember before garbage collecting. 
+    How many stages the Spark UI and status APIs remember before garbage collecting.
    This is a target maximum, and fewer elements may be retained in some circumstances.
  </td>
</tr>
@@ -741,7 +741,7 @@ Apart from these, the following properties are also available, and may be useful
  <td><code>spark.ui.retainedTasks</code></td>
  <td>100000</td>
  <td>
-    How many tasks the Spark UI and status APIs remember before garbage collecting. 
+    How many tasks the Spark UI and status APIs remember before garbage collecting.
    This is a target maximum, and fewer elements may be retained in some circumstances.
  </td>
</tr>
@@ -1390,6 +1390,15 @@ Apart from these, the following properties are also available, and may be useful
  </td>
</tr>
<tr>
+  <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
+  <td>10000</td>
+  <td>
+    Capacity for event queue in Spark listener bus, must be greater than 0. Consider increasing
+    value (e.g. 20000) if listener events are dropped. Increasing this value may result in the
+    driver using more memory.
+  </td>
+</tr>
+<tr>
  <td><code>spark.blacklist.enabled</code></td>
  <td>
    false
@@ -1475,8 +1484,8 @@ Apart from these, the following properties are also available, and may be useful
  <td><code>spark.blacklist.application.fetchFailure.enabled</code></td>
  <td>false</td>
  <td>
-    (Experimental) If set to "true", Spark will blacklist the executor immediately when a fetch 
-    failure happenes. If external shuffle service is enabled, then the whole node will be 
+    (Experimental) If set to "true", Spark will blacklist the executor immediately when a fetch
+    failure happenes. If external shuffle service is enabled, then the whole node will be
    blacklisted.
  </td>
</tr>

