JoonPark1 commented on PR #7227:
URL: https://github.com/apache/kyuubi/pull/7227#issuecomment-3445128225

   @turboFei Sure! Once a Kyuubi batch job times out because the elapsed time 
exceeds the configured submitTimeout value (i.e., no Spark driver has been 
instantiated and reached the running state to handle the submitted batch job), 
the metadata about the Spark application and the Spark driver engine state is 
updated via the updateMetadata method of the 
"org.apache.kyuubi.server.metadata.MetadataManager" class, which takes the 
up-to-date Metadata object (an instance of 
"org.apache.kyuubi.server.metadata.api.Metadata"). Internally, the manager 
delegates to the updateMetadata method of the 
"org.apache.kyuubi.server.metadata.MetadataStore" class, which keeps the state 
of each submitted Kyuubi batch job running on the Spark compute engine in sync 
with Kyuubi's metadata store in the relational DB. As you can see, this flow 
does not need to invoke BatchJobSubmission::updateBatchMetadata to update 
Kyuubi's metadata store instance. 
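To make the delegation chain concrete, here is a minimal sketch of the described flow. All class and field names are simplified stand-ins for illustration, not the real Kyuubi API; in particular, the in-memory map stands in for the relational-DB-backed metadata store.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.kyuubi.server.metadata.api.Metadata:
// carries the batch identifier and its current engine state.
class Metadata {
    final String identifier;
    final String state;
    Metadata(String identifier, String state) {
        this.identifier = identifier;
        this.state = state;
    }
}

// Stand-in for org.apache.kyuubi.server.metadata.MetadataStore:
// keeps the persisted batch state in sync (here, a map instead of a relational DB).
class MetadataStore {
    final Map<String, Metadata> db = new HashMap<>();
    void updateMetadata(Metadata m) {
        db.put(m.identifier, m);
    }
}

// Stand-in for org.apache.kyuubi.server.metadata.MetadataManager:
// its updateMetadata simply delegates to the store's updateMetadata.
class MetadataManager {
    final MetadataStore store = new MetadataStore();
    void updateMetadata(Metadata m) {
        store.updateMetadata(m);
    }
}
```

On submit timeout, the caller builds a fresh Metadata object with the terminal state and hands it straight to the manager, so BatchJobSubmission::updateBatchMetadata never enters the picture.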


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

