Well, restarting the pod actually has a better chance of fixing whatever the issue is than just restarting the container inside it. The replacement pod may very well get scheduled onto a different machine. If the machine the pod is running on is down or unhealthy, just restarting the container won't really help you.

On 2017-10-27 4:23 pm, Rodrigo Campos wrote:
I don't think it is configurable.

But I don't really see what you are trying to solve; maybe there is
another way to achieve it? If you are running a pod with a single
container, what is the problem with the container being restarted when
appropriate, instead of the whole pod?

I mean, you would need to handle the case where some container in the
pod has crashed or stalled, right? The liveness probe runs
periodically, but until the next check happens, the container can be
hung or otherwise broken. So even if the whole pod is restarted, that
window is still there, and restarting the whole pod won't fix it. So
my guess about what you are trying to solve is probably not correct.
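
For example, a liveness probe is declared on the container itself,
roughly like this (the endpoint, port, and timings here are just
made-up placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest        # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz         # assumes the app serves a health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15        # between checks, a hang goes unnoticed
      failureThreshold: 3

If the probe fails failureThreshold times in a row, the kubelet
restarts that container, but in the worst case the hang goes unnoticed
for roughly periodSeconds * failureThreshold either way.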

So, sorry, but can I ask again what is the problem you want to
address? :)

On Friday, October 27, 2017, David Rosenstrauch <dar...@darose.net>
wrote:

Was speaking to our admin here, and he suggested that running a health
check container inside the same pod might work.  Anyone agree that
that would be a good (or even preferred) approach?
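
Something along these lines, maybe (the image, endpoint, and interval
are just placeholders; not tested):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: app
    image: myapp:latest            # placeholder for our app image
    ports:
    - containerPort: 8080
  - name: healthcheck              # sidecar polling the app over localhost
    image: curlimages/curl
    command: ["sh", "-c"]
    args:
    - |
      while true; do
        curl -fsS http://localhost:8080/healthz || exit 1
        sleep 15
      done

The idea being that if the app container stops responding, the
health-check container exits non-zero, so the failure at least shows
up in the pod's status.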

Thanks,

DR

On 2017-10-27 11:41 am, David Rosenstrauch wrote:

I have a pod which runs a single container.  The pod is being run
under a ReplicaSet (which starts a new pod to replace a pod that's
terminated).

What I'm seeing is that when the container within that pod terminates,
instead of the pod terminating too, the pod stays alive and just
restarts the container in it.  However, I'm thinking that what would
make more sense would be for the entire pod to terminate in this
situation, and then another would automatically start to replace it.

Does this seem sensible?  If so, how would one accomplish this with
k8s?  Changing the restart policy setting doesn't seem to be an
option.  The restart policy (e.g. restartPolicy=Always) seems to apply
only to whether to restart a pod; the decision about whether to
restart a container in a pod doesn't seem to be configurable.  (At
least not that I could see.)
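
For reference, this is the setting I mean, in the ReplicaSet's pod
template (the names and image are made up):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      restartPolicy: Always    # the only value a ReplicaSet accepts, as far as I can tell
      containers:
      - name: app
        image: myapp:latest    # placeholder image

There doesn't seem to be a per-container knob anywhere in there.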

Would appreciate any guidance anyone could offer here.

Thanks,

DR
