yangwwei commented on issue #90: Completed (and sometimes deleted) pods are 
still marked as "Running" and consume resources
URL: 
https://github.com/apache/incubator-yunikorn-core/issues/90#issuecomment-589903414
 
 
   I think the issue here is that when a Spark job completes, the Spark driver 
pod is left behind in the `Completed` state, like:
   
   ```
   NAME                            READY   STATUS      RESTARTS   AGE
   spark-pi-1582333424486-driver   0/1     Completed   0          15m
   ```
   
   As long as the pod exists, we cannot release its resources: our release 
logic is only triggered when a pod is deleted from K8s. The question becomes: 
who should clean up Spark driver pods? 
   A reasonable workaround, which some people already use, is to run a cron 
job that garbage-collects completed Spark driver pods; that helps to release 
the resources.
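   Such a GC job could be expressed as a Kubernetes CronJob, along these 
lines (a sketch only; the schedule, image, and `pod-gc` service account are 
hypothetical, and the service account would need RBAC permission to list and 
delete pods):
   
   ```yaml
   apiVersion: batch/v1beta1
   kind: CronJob
   metadata:
     name: spark-driver-gc
   spec:
     schedule: "*/30 * * * *"          # run every 30 minutes
     jobTemplate:
       spec:
         template:
           spec:
             serviceAccountName: pod-gc   # hypothetical; needs list/delete on pods
             restartPolicy: Never
             containers:
             - name: gc
               image: bitnami/kubectl
               command:
               - /bin/sh
               - -c
               # delete driver pods that have run to completion;
               # spark-role=driver is the label Spark-on-K8s sets on drivers
               - kubectl delete pods --field-selector=status.phase=Succeeded -l spark-role=driver
   ```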
   
   @jameschen1519  what do you think?
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]