GitHub user koeninger commented on the issue:

    https://github.com/apache/spark/pull/14361
  
    - This is testing RateEstimator, not maxRatePerPartition.  I didn't write 
the rate estimator code, but my understanding is that the rate it expresses is 
on a per-stream basis, not a per-partition basis.  So your explanation of why 
partitions need to be reduced to 1 doesn't make sense to me.
    
    - Even if that is the case, it seems like a better idea to fix the expected 
sizes rather than limit the test to 1 partition, since people will be using 
backpressure with multi-partition topics.
    
    - This test exists in both 0.8 and 0.10, but this patch only applies to 0.10.
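    
    To illustrate the per-stream vs. per-partition point above: if the estimated 
rate really is stream-wide, a multi-partition test would need its expected sizes 
derived by splitting that one rate across partitions, rather than dropping to a 
single partition. The sketch below is a hypothetical model of that split (names 
like `expectedPerPartition` and the proportional-to-lag rule are illustrative 
assumptions, not Spark's actual RateEstimator internals):

```scala
// Illustrative sketch: split a stream-level rate estimate into
// per-partition expected record counts for one batch.
// All names here are hypothetical, not Spark APIs.
object RateSplit {
  // streamRate:   estimated records/sec for the whole stream
  // batchSeconds: batch interval in seconds
  // lags:         unconsumed records available per partition
  def expectedPerPartition(streamRate: Double,
                           batchSeconds: Double,
                           lags: Seq[Long]): Seq[Long] = {
    val totalLag = lags.sum.toDouble
    // total records the stream-wide rate allows this batch
    val budget = streamRate * batchSeconds
    if (totalLag == 0) lags.map(_ => 0L)
    // give each partition a share proportional to its lag,
    // never more than it actually has available
    else lags.map(l => math.min(l, math.round(budget * (l / totalLag))))
  }

  def main(args: Array[String]): Unit = {
    // e.g. 100 rec/s stream rate, 2s batch, three partitions with unequal lag:
    // budget = 200, split 120/40/40 by lag share
    println(expectedPerPartition(100.0, 2.0, Seq(300L, 100L, 100L)))
  }
}
```

    Under a model like this, the expected sizes in the multi-partition test 
would just be the per-partition shares of the single estimated rate.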
    
    As a meta-comment, I'm not sure what the point of asking for feedback is 
if the patch is going to be merged to master within 5 hours regardless.  I was 
asleep during that entire time.  I understand the rush for 2.0, and I'm not 
trying to play the "Apache Process" card or get in your face... I'd just ask 
you to consider the reasoning involved.

