WeichenXu123 edited a comment on issue #24734: [SPARK-27870][SQL][PySpark] 
Flush each batch for pandas UDF (for improving pandas UDFs pipeline)
URL: https://github.com/apache/spark/pull/24734#issuecomment-497634309
 
 
   @BryanCutler But the Python-side write buffer size is hardcoded... and I doubt the Scala-side `spark.buffer.size` is widely used in Spark for other buffer configs.
   Another issue is that it is hard for users to estimate the batch size in bytes accurately. In contrast, per-batch flushing is more precise: we flush only once a batch has been generated.
   
   We can discuss two cases:
   1) In most cases we use the default batch size (`spark.sql.execution.arrow.maxRecordsPerBatch` = 10000). Each batch is then large in bytes, so per-batch flushing only slightly affects performance.
   2) In some cases, such as real-time ML inference, we set `spark.sql.execution.arrow.maxRecordsPerBatch` small. Here per-batch flushing shows its performance advantage (see the discussion above).
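   A minimal, Spark-free sketch of the flushing point (the buffer size and variable names are illustrative, not Spark's actual serializer code): with a large hardcoded write buffer, a small batch can sit invisible to the reader until the buffer fills, whereas an explicit per-batch flush makes it consumable immediately.

   ```python
   import io

   raw = io.BytesIO()
   # A large buffer stands in for the hardcoded Python-side write buffer.
   writer = io.BufferedWriter(raw, buffer_size=65536)

   # A small batch, e.g. when maxRecordsPerBatch is set low for inference.
   batch = b"x" * 100
   writer.write(batch)
   pending = len(raw.getvalue())   # 0: the batch is stuck in the write buffer

   writer.flush()                  # per-batch flush
   visible = len(raw.getvalue())   # 100: the reader can consume the batch now

   print(pending, visible)
   ```

   With the default large batch size, each batch fills the buffer on its own and the extra flush changes little; with small batches, skipping the flush means the downstream reader stalls until enough batches accumulate.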
