srowen commented on a change in pull request #26529:
[SPARK-29902][Doc][Minor] Add listener event queue capacity configuration to documentation
URL: https://github.com/apache/spark/pull/26529#discussion_r346545164
##########
File path: docs/configuration.md
##########
@@ -1857,6 +1857,61 @@ Apart from these, the following properties are also available, and may be useful
driver using more memory.
</td>
</tr>
+<tr>
+  <td><code>spark.scheduler.listenerbus.eventqueue.shared.capacity</code></td>
+  <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
+  <td>
+    Capacity for the shared event queue in the Spark listener bus, which must be greater than 0.
+    Unless otherwise specified, it uses <code>spark.scheduler.listenerbus.eventqueue.capacity</code>.
+    The shared event queue holds events for external listeners that register to the listener bus.
+    Consider increasing this value if listener events corresponding to the shared queue are dropped.
+    Increasing this value may result in the driver using more memory.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.scheduler.listenerbus.eventqueue.appStatus.capacity</code></td>
+  <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
+  <td>
+    Capacity for appStatus event queue in Spark listener bus, must be greater than 0.
Review comment:
"appStatus event queue hold events for internal application status
listeners" is useful; "Capacity for appStatus event queue in Spark listener
bus" seems sort of redundant. I'd maybe lead with "Capacity of the appStatus
event queue, which holds events ..." just once. Here and elsewhere.
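
For context (not part of the patch): a minimal sketch in Scala of how the queue capacities documented above could be set. These are static configurations, so they must take effect before the SparkContext starts. The property names come from the diff; the numeric values and app name are illustrative assumptions, not recommendations.

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Set before the SparkContext is created (equivalently, pass each
// property via spark-submit --conf). Values here are illustrative;
// tune them to the observed event drop rate.
val conf = new SparkConf()
  // Fallback capacity used by any queue without its own setting.
  .set("spark.scheduler.listenerbus.eventqueue.capacity", "20000")
  // Queue for external listeners registered on the listener bus.
  .set("spark.scheduler.listenerbus.eventqueue.shared.capacity", "30000")
  // Queue for internal application status listeners.
  .set("spark.scheduler.listenerbus.eventqueue.appStatus.capacity", "30000")

val spark = SparkSession.builder()
  .appName("listener-queue-capacity-sketch") // hypothetical app name
  .config(conf)
  .getOrCreate()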