HeartSaVioR commented on pull request #32434:
URL: https://github.com/apache/spark/pull/32434#issuecomment-834190258


   Thanks for the contribution! The use case is interesting, especially where the 
sink is sensitive to the number of batches and performs sub-optimally with lots of 
small batches.
   
   One thing we need to deal with is admission control - since Spark 3.0, the Spark 
community has generalized the max offsets/max files per trigger requirement into 
`SupportsAdmissionControl`. That was mainly done to make sure Trigger.Once is not 
affected by max offsets/max files per trigger; but given that max offsets per 
trigger has been generalized to `ReadMaxRows`, this is another thing we may want 
to generalize.
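   To make the idea concrete, here is a minimal sketch of how a source honors a 
row-based read limit per trigger. This is an illustrative stand-in only: the real 
contract is `SupportsAdmissionControl.latestOffset(start, limit)` with `ReadLimit` 
implementations such as `ReadMaxRows` in 
`org.apache.spark.sql.connector.read.streaming`; the simplified types and `long` 
offsets below are assumptions so the snippet runs standalone, not Spark's actual API.

   ```java
   public class ReadLimitSketch {
       // Stand-in for Spark's ReadLimit marker interface.
       interface ReadLimit {}

       // Stand-in for ReadLimit.maxRows(n): cap rows consumed per trigger.
       static final class ReadMaxRows implements ReadLimit {
           final long maxRows;
           ReadMaxRows(long maxRows) { this.maxRows = maxRows; }
       }

       // Stand-in for ReadLimit.allAvailable(): no cap.
       static final class ReadAllAvailable implements ReadLimit {}

       // Stand-in for SupportsAdmissionControl.latestOffset(start, limit):
       // given the trigger's start offset and the source's true latest offset,
       // return the offset this trigger should stop at.
       static long latestOffset(long start, long trueLatest, ReadLimit limit) {
           if (limit instanceof ReadMaxRows) {
               long cap = start + ((ReadMaxRows) limit).maxRows;
               return Math.min(cap, trueLatest);
           }
           return trueLatest; // ReadAllAvailable: consume everything
       }

       public static void main(String[] args) {
           // 1000 rows available, but at most 100 rows per trigger:
           System.out.println(latestOffset(0, 1000, new ReadMaxRows(100)));   // 100
           System.out.println(latestOffset(950, 1000, new ReadMaxRows(100))); // 1000
           System.out.println(latestOffset(0, 1000, new ReadAllAvailable())); // 1000
       }
   }
   ```

   A batch-count-sensitive sink would want the opposite knob: a lower bound (e.g. 
"at least N rows or wait"), which is the kind of `ReadLimit` generalization this 
use case suggests.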
   
   cc. @brkyvz as I see some opportunity to improve ReadLimit on this use case.
   
   Also cc. @tdas @zsxwing @viirya @gaborgsomogyi @xuanyuanking 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


