On Friday, August 4, 2017, 'Mark Betz' via Kubernetes user discussion and
Q&A <kubernetes-users@googlegroups.com> wrote:

> I've been doing some testing of jobs in our cluster to try and get some
> control over the behavior of one of our build pipelines, and I'm hoping
> someone here can show me what I'm missing because from what I've seen so
> far things look sad. Rather than a blow-by-blow I'll jump right to the
> tl;dr: the issue seems to be that the normal pod life cycle overrides what
> you would expect the life cycle of common classes of jobs to be.
>
> For example, we have a job that runs as a helm post-upgrade hook in a
> gitlab pipeline and performs a data migration. If the migration task exits
> non-zero then I want the job to fail, the helm command to fail, and the
> pipeline to fail.
>
> What actually happens depends on the job settings. With "restartPolicy"
> set to "OnFailure" the job controller keeps recreating the pod, which keeps
> failing, while helm waits. This stalls the gitlab runner (which is
> processing many builds) and causes various issues. With "restartPolicy" set
> to "Never" the job controller keeps creating new pods, while helm waits,
> with all the same ill effects.
>

Can you please elaborate on what happens when "restartPolicy" is set to
"Never"?
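
For context, I'm picturing a post-upgrade hook Job roughly like the sketch
below. This is only an assumption about your setup; the Job name, image, and
migration command are placeholders, not anything from your actual chart.

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: db-migrate                    # placeholder name
    annotations:
      "helm.sh/hook": post-upgrade
  spec:
    template:
      spec:
        restartPolicy: Never            # or OnFailure, per your two tests
        containers:
        - name: migrate
          image: registry.example.com/migrate:latest   # placeholder image
          command: ["./run-migrations.sh"]             # placeholder entrypoint

If your manifest differs from that in how the hook annotation or the restart
policy is declared, that would be useful to know.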

Do you refer to this issue:
https://github.com/kubernetes/kubernetes/issues/24533 ?

I don't remember the details, but it rings a bell that something like
"sleep 10; command" was suggested as a workaround.
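
If it is that issue, the workaround would amount to wrapping the migration
entrypoint in the container command, something like the fragment below (the
entrypoint is again a placeholder, standing in for "command" above):

  # fragment of the Job's pod template spec
  containers:
  - name: migrate
    image: registry.example.com/migrate:latest         # placeholder image
    command: ["sh", "-c", "sleep 10; ./run-migrations.sh"]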

