cdmikechen commented on PR #989:
URL: https://github.com/apache/submarine/pull/989#issuecomment-1254567408

   > > @FatalLin I think there are 2 ways to do this:
   > > 
   > > 1. The pod's status takes different values in different cases, and we can 
block the program based on that status. If the pod's phase is `Succeeded`, the 
program can continue; if the pod fails, an exception can be thrown.
   > > 
   > > ```yaml
   > > status:
   > >   phase: Running
   > > ---
   > > status:
   > >   phase: Failed
   > > ---
   > > status:
   > >   phase: Succeeded
   > > ```
   > > 
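   > > As a rough sketch of option 1 (the pod-phase names come from the k8s status values above; `get_phase` is a hypothetical stand-in for a real lookup such as the Kubernetes Python client's `read_namespaced_pod(...).status.phase`):
   > > 
   > > ```python
   > > import time
   > > 
   > > def wait_for_pod(get_phase, interval=1.0, timeout=600):
   > >     """Block until the pod succeeds; raise if it fails or times out."""
   > >     deadline = time.monotonic() + timeout
   > >     while time.monotonic() < deadline:
   > >         phase = get_phase()
   > >         if phase == "Succeeded":
   > >             return            # program can continue
   > >         if phase == "Failed":
   > >             raise RuntimeError("prehandler pod failed")
   > >         time.sleep(interval)  # still Pending/Running, keep waiting
   > >     raise TimeoutError("prehandler pod did not finish in time")
   > > ```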
   > > 2. Synchronise the prehandler's execution state to the database, and 
decide whether to block based on the current status of the specified 
prehandler.
   > > 
   > > I prefer the second approach, given that we will probably need to track 
dataset details during experiments and serving. We can discuss this further at 
the next regular meeting. For now I will merge it. 
cc @pingsutw
   > 
   > I mean: WHERE do you want to add the blockers? Another initContainer for each 
experiment pod?
   
   My assumption is that the blocking logic is not implemented at the k8s 
level, but rather handled by `submarine-sdk`.
   Assuming we use the second option, the first worker creates the 
prehandler's pod resource and adds/updates the dataset row's status in the 
database to `pulling`. The second worker first checks whether a pull of that 
data is already executing; if it finds the dataset is `pulling`, it blocks 
and waits for the task to finish.
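   The flow above could be sketched roughly like this (a minimal illustration, 
assuming some `dataset_status` store the SDK can read and write; the names 
`ensure_dataset` and `pull_fn` are hypothetical, and a dict stands in for the 
database):
   
   ```python
   import time
   
   def ensure_dataset(db, dataset_id, pull_fn, interval=1.0):
       status = db.get(dataset_id)
       if status == "ready":
           return                      # dataset already pulled, nothing to do
       if status is None:
           db[dataset_id] = "pulling"  # first worker: claim the pull
           pull_fn()                   # e.g. create the prehandler pod and wait
           db[dataset_id] = "ready"
           return
       while db.get(dataset_id) == "pulling":  # another worker is pulling
           time.sleep(interval)                # block until it finishes
   ```
   
   A real implementation would of course need the claim to be atomic (e.g. an 
`INSERT` guarded by a unique key on the dataset id), so two workers arriving at 
the same time cannot both start pulling.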


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
