WeichenXu123 edited a comment on issue #24734: [SPARK-27870][SQL][PySpark] 
Flush each batch for pandas UDF (for improving pandas UDFs pipeline)
URL: https://github.com/apache/spark/pull/24734#issuecomment-497297633
 
 
   @felixcheung Performance matters when the batch size is small and the 
UDFs do heavy computation.
   Suppose two UDFs are pipelined, each UDF takes 3 s per batch, the worker 
node has enough cores to run the two UDFs in parallel, and the buffer can 
hold 100 batches. Then:
   
   My PR: the first 100 batches take about 100 * 3 s.
   Master code: the first 100 batches take about 100 * (3 + 3) s.
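   A back-of-envelope sketch of the arithmetic above (the 3 s and 100-batch 
figures are from the example; the variable names and formulas are my own 
illustrative simplification):

   ```python
   # Timing model for two chained pandas UDFs, assuming enough cores
   # to run both UDFs concurrently once batches are handed over eagerly.
   per_batch = 3      # seconds each UDF spends on one batch
   batches = 100

   # With per-batch flushing, the second UDF starts on batch i as soon
   # as the first UDF finishes it, so the two stages overlap and only
   # the final batch adds extra pipeline latency:
   pipelined = batches * per_batch + per_batch   # ~303 s, i.e. about 100 * 3 s

   # Without flushing, the second UDF cannot overlap, so each batch
   # effectively pays for both UDFs back to back:
   serial = batches * (per_batch + per_batch)    # 600 s, i.e. 100 * (3 + 3) s
   ```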
   
   **Typical scenario**
   In the machine-learning case:
   1) to make real-time predictions, we make the batch size very small.
   2) in ML prediction we schedule the computation onto a GPU, so the UDF's 
computation on each batch takes a long time.
   3) ML prediction output is usually a label (a scalar value), so the 
output batches are small in bytes; many output batches therefore accumulate 
inside the Python UDF process's output buffer, which makes downstream UDFs 
lag behind. See the example above.
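   The lag in point 3) can be mimicked with a toy producer/consumer 
simulation in plain Python (no Spark involved; the batch counts, sleep 
times, and function names are all illustrative):

   ```python
   import queue
   import threading
   import time

   def run(flush_each_batch, batches=10, work=0.02):
       """Simulate two chained per-batch workers; return wall-clock time."""
       q = queue.Queue()

       def producer():
           buffered = []
           for i in range(batches):
               time.sleep(work)            # first UDF's compute on batch i
               if flush_each_batch:
                   q.put(i)                # hand over immediately (the PR)
               else:
                   buffered.append(i)      # sit in the output buffer (master)
           for i in buffered:
               q.put(i)
           q.put(None)                     # end-of-stream marker

       def consumer():
           while True:
               if q.get() is None:
                   break
               time.sleep(work)            # second UDF's compute on a batch

       start = time.monotonic()
       threads = [threading.Thread(target=producer),
                  threading.Thread(target=consumer)]
       for t in threads:
           t.start()
       for t in threads:
           t.join()
       return time.monotonic() - start

   pipelined = run(flush_each_batch=True)    # stages overlap
   buffered = run(flush_each_batch=False)    # downstream lags behind
   ```

   With eager hand-off the total is roughly `batches * work + work`; with 
buffering it is roughly `2 * batches * work`, matching the 300 s vs 600 s 
comparison above.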
   
   That's the difference. Thanks!

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
