Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10838#discussion_r50269536
  
    --- Diff: docs/configuration.md ---
    @@ -392,6 +392,16 @@ Apart from these, the following properties are also available, and may be useful
       </td>
     </tr>
     <tr>
    +  <td>Int.MaxValue</td>
    +  <td>
    +    "spark.reducer.maxMbInFlight" puts a bound on the in flight data in 
terms of size.
    +    But this is not always sufficient when the number of hosts in the 
cluster increase,
    --- End diff --
    
    maxMbInFlight should be maxSizeInFlight (the property is spark.reducer.maxSizeInFlight).
    
    change "to fetches" to "to fetch"
    
    Wrap the actual config variables with <code></code> tags.
    I would prefer to see the description of spark.reducer.maxReqsInFlight first. Perhaps after that, put something close to what you have about maxMbInFlight. Something like (feel free to change the wording):
    
    maxReqsInFlight limits the number ..... This is sometimes needed when you have a large number of hosts, which can lead to a very large number of in-bound connections...
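    
    As a rough illustration only (assuming configuration.md's usual property name / default / meaning columns; the wording is just a placeholder, not the final text), the row might end up looking something like:
    
        <tr>
          <td><code>spark.reducer.maxReqsInFlight</code></td>
          <td>Int.MaxValue</td>
          <td>
            This configuration limits the number of remote requests to fetch blocks at any given point.
            <code>spark.reducer.maxSizeInFlight</code> puts a bound on the in-flight data in terms of size,
            but that is not always sufficient when the number of hosts in the cluster increases, since it
            can lead to a very large number of in-bound connections to one or more nodes.
          </td>
        </tr>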

