rangadi commented on code in PR #42460:
URL: https://github.com/apache/spark/pull/42460#discussion_r1296263054


##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingForeachBatchHelper.scala:
##########
@@ -102,6 +118,9 @@ object StreamingForeachBatchHelper extends Logging {
       //     This is because MicroBatch execution clones the session during start.
       //     The session attached to the foreachBatch dataframe is different from
       //     the one the query was started with. `sessionHolder` here contains the latter.
+      //     Another issue with not creating a new session id: the foreachBatch worker
+      //     keeps the session alive. The session mapping at the Connect server does not
+      //     expire, and the query keeps running even if the original client disappears.

Review Comment:
   FYI: @juliuszsompolski, @WweiL, @bogao007
   Will address this in the next follow-up.
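
   For context, a minimal sketch of the `foreachBatch` API this helper serves, as seen from a client. This is illustrative only (it assumes an existing `SparkSession` named `spark` and a running cluster; the rate source and noop sink are just placeholders), not the server-side code under review:

   ```scala
   import org.apache.spark.sql.DataFrame

   // A streaming query whose per-batch logic runs via foreachBatch.
   val inputDf = spark.readStream.format("rate").load()

   val query = inputDf.writeStream
     .foreachBatch { (batchDf: DataFrame, batchId: Long) =>
       // The DataFrame passed here is attached to a session cloned at query
       // start, not the session the query was started with -- which is why
       // the session-id handling discussed above matters.
       batchDf.write.mode("append").format("noop").save()
     }
     .start()
   ```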



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
