Github user liuxianjiao commented on a diff in the pull request:
    --- Diff: docs/ ---
    @@ -2288,6 +2288,13 @@ showDF(properties, numRows = 200, truncate = FALSE)
         on the receivers.
    +  <td><code>spark.streaming.concurrentJobs</code></td>
    +  <td>1</td>
    +  <td>
    +    The number of concurrent jobs. This parameter directly affects the number of threads in the jobExecutor thread pool.
    --- End diff --
    @srowen I see, but this more or less helps us understand this configuration. 
We shouldn't give up eating for fear of choking: to better explain its behavior, 
maybe we could make the description more specific rather than dropping it 
altogether.
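
For illustration, one way users typically set this parameter is through
`SparkConf` before creating the streaming context. This is only a sketch: the
application name and the value `4` are arbitrary examples, not recommendations
from this PR.

```scala
import org.apache.spark.SparkConf

// Hypothetical example: allow up to 4 streaming jobs to run concurrently.
// spark.streaming.concurrentJobs sizes the jobExecutor thread pool;
// the default is 1 (jobs run one at a time, in order).
val conf = new SparkConf()
  .setAppName("ExampleStreamingApp") // placeholder app name
  .set("spark.streaming.concurrentJobs", "4")
```

The same setting can be passed on the command line with
`--conf spark.streaming.concurrentJobs=4`; values above 1 allow batches to be
processed in parallel, which can change ordering guarantees between batches.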

