HeartSaVioR commented on PR #46651:
URL: https://github.com/apache/spark/pull/46651#issuecomment-2119554683

   > For the Python streaming sink, since StreamingWrite is also created per 
microbatch on the Scala side, a long-running worker cannot be attached to a 
StreamingWrite instance. Therefore we abandon the long-running worker 
architecture: the worker simply calls commit() or abort() and exits, letting 
Spark reuse the worker for us.
   
   Ah OK, it can't be reused anyway. Then it makes sense.
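   For context, the per-microbatch flow described in the quote can be sketched 
roughly as below. This is a hedged illustration only: the class and function 
names (`FakeStreamingWrite`, `run_commit_worker`) are hypothetical stand-ins, 
not the actual PySpark internals in this PR.

   ```python
   # Hypothetical sketch of the short-lived commit/abort worker pattern:
   # one invocation per microbatch, then the worker exits and Spark's
   # worker-reuse mechanism handles process pooling.

   class FakeStreamingWrite:
       """Stand-in for a per-microbatch StreamingWrite on the Python side."""
       def __init__(self):
           self.committed = False
           self.aborted = False

       def commit(self, batch_id, messages):
           self.committed = True

       def abort(self, batch_id, messages):
           self.aborted = True

   def run_commit_worker(writer, batch_id, messages, should_commit):
       # No long-running loop: perform exactly one commit or abort,
       # then return so the worker process can exit and be reused.
       if should_commit:
           writer.commit(batch_id, messages)
       else:
           writer.abort(batch_id, messages)
   ```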


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

