skonto edited a comment on issue #24613: [SPARK-27549][SS] Add support for 
committing kafka offsets per batch for supporting external tooling
URL: https://github.com/apache/spark/pull/24613#issuecomment-494320693
 
 
   > One thing we can imagine is running multiple queries in the same application - 
the producer/consumer cache resides in the executor, which is not shut down or 
cleared per query. So some of them will still be live even after a query is finished, 
as long as the executor is live.
   
   I am only setting the gId for the consumer that resides on the driver, and I 
only commit offsets based on the progress observed per batch; I don't care about 
executor consumers in micro-batch mode.
   That problem will appear when supporting continuous mode, where committing 
will be done on the executor side, as in Flink.
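   A minimal sketch of the driver-side idea described above: after each micro-batch, 
read the per-partition end offsets out of the query's progress payload, which could 
then be committed with a driver-side consumer. This is an illustration only - the 
function name, the sample payload, and its exact shape are assumptions, not the 
PR's actual code.

   ```python
   import json

   def end_offsets_from_progress(progress_json: str) -> dict:
       """Extract {(topic, partition): offset} from a streaming progress
       JSON payload (micro-batch mode). For a Kafka source, 'endOffset' is
       itself a JSON string of the form {"topic": {"partition": offset}}."""
       progress = json.loads(progress_json)
       offsets = {}
       for source in progress.get("sources", []):
           end = source.get("endOffset")
           if not end:
               continue
           # endOffset may arrive as a JSON-encoded string; decode if so.
           per_topic = json.loads(end) if isinstance(end, str) else end
           for topic, partitions in per_topic.items():
               for partition, offset in partitions.items():
                   offsets[(topic, int(partition))] = offset
       return offsets

   # Example progress payload as might be observed after a micro-batch
   # (shape assumed for illustration):
   sample = json.dumps({
       "batchId": 7,
       "sources": [{
           "description": "KafkaV2[Subscribe[events]]",
           "endOffset": json.dumps({"events": {"0": 42, "1": 17}}),
       }],
   })

   print(end_offsets_from_progress(sample))
   # {('events', 0): 42, ('events', 1): 17}
   ```

   The resulting map is what a driver-side consumer configured with the group id 
could commit back to Kafka after each batch, making progress visible to external 
tooling without touching the executor-side consumers.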

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services