I have already configured my cluster to test what you have stated. What I
have done so far is to create a ResourceQuota which takes care that there
will not be more than 4 pods running. Then I ask for, say, 20 jobs to be executed.
What happens in reality is that the first 4 jobs are completed and then,
even though the pods are completed and therefore resources are available,
it takes some minutes (around 4-5) before the next 4 jobs are completed.
Indeed, what you said is true; however, it is not practical, because a
delay of several minutes cannot be accepted.
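To put numbers on the delay: with a hard cap of 4 concurrent pods, 20 jobs proceed in 5 waves, and if each blocked wave has to wait out a retry backoff of roughly 5 minutes (the figure observed here, not a documented constant), the backoff alone adds about 20 minutes. A rough sketch of that arithmetic:

```python
import math

def added_backoff_minutes(jobs: int, quota: int, backoff_min: float) -> float:
    """Worst-case extra latency from the retry backoff alone: every wave
    after the first has to wait out one backoff interval before its pods
    are admitted under the quota."""
    waves = math.ceil(jobs / quota)
    return (waves - 1) * backoff_min

print(added_backoff_minutes(20, 4, 5.0))  # 20.0 extra minutes over 5 waves
```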
The waiting jobs events look like this:
`FailedCreate Error creating: pods
"d59fa9b2-6c6b-4dc5-b149-7f89b35421bf-10-" is forbidden: Exceeded quota:
compute-resources, requested: pods=1, used: pods=4, limited: pods=4`
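For reference, the quota named in that event amounts to a manifest along these lines; here it is sketched as a plain Python dict so it can be serialized and POSTed to the API. Only the name `compute-resources` and the `pods: 4` limit come from the event; the namespace is an assumption:

```python
# Sketch of the ResourceQuota behind the error above. Only the name and
# the pods limit come from the event message; the namespace is assumed.
quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "compute-resources", "namespace": "default"},
    "spec": {"hard": {"pods": "4"}},  # at most 4 pods may exist at once
}

print(quota["spec"]["hard"]["pods"])  # 4
```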
So, it fails because there are no resources available, but it retries only
after some minutes. This behavior is far from the desired one, which is
relying on Kubernetes to execute a set of tasks regardless of the
resources available, just getting them executed as soon as possible.
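One way to approximate that desired behavior today is to gate submissions on the client side instead of letting pod creation fail into the backoff: keep the excess jobs in our own queue and only create new Jobs when the quota has free slots. A minimal sketch of the gating logic (pure Python; in a real client, `used` and `limit` would be read from the ResourceQuota's `status.used` and `status.hard` fields):

```python
def slots_free(used: int, limit: int) -> int:
    """How many Jobs can be created right now without tripping the quota
    (and its multi-minute retry backoff)."""
    return max(0, limit - used)

def pick_submittable(pending: list, used: int, limit: int) -> list:
    """Choose the next batch of queued jobs to submit, never exceeding
    the quota; the rest stay queued instead of failing with FailedCreate."""
    return pending[: slots_free(used, limit)]

queue = [f"job-{i}" for i in range(20)]
print(pick_submittable(queue, used=4, limit=4))  # [] -> quota full, hold back
print(pick_submittable(queue, used=1, limit=4))  # ['job-0', 'job-1', 'job-2']
```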
On Monday, 10 October 2016 19:24:46 UTC+2, Daniel Smith wrote:
> If the system lacks resources, Pods will remain "Pending" until resources
> become available. Cluster scalers may use pending pods as a signal that the
> cluster size should be increased.
> On Mon, Oct 10, 2016 at 5:58 AM, Diego Rodríguez Rodríguez <
>> Hello, I have a doubt about how Kubernetes' jobs are handled.
>> I have a queue to submit certain amount of incoming tasks (Celery and
>> RabbitMQ take care of this). Each one of these tasks is, in fact, a
>> *workflow* which will be executed in a worker (Celery worker with a DAG
>> executor inside). Each step of the *workflow* is a Docker image with an
>> input file and an output file.
>> My question is, if I submit jobs from the workflow engine directly to the
>> Kubernetes API, what happens if at some point there are no more resources?
>> Will the remaining tasks be kept or will they be lost? My goal is to treat
>> Kubernetes' jobs as a black box to submit work to. These jobs are of a
>> very heterogeneous nature, and I shouldn't need to bother with what is
>> inside them, because they are dockerized and executed by Kubernetes at
>> some point.
>> To sum up, I already have the layer of Celery workers with a DAG executor
>> inside which knows the right order of the tasks and knows how to manage
>> everything concerning the *workflow*. These components will submit jobs
>> (through the Kubernetes API), wait for them to be executed, and then
>> continue with the remaining tasks, asking Kubernetes to run them until the
>> *workflow* ends.
>> I have read about a somewhat related issue on GitHub:
>> I couldn't determine whether it is closed or coming in a future release.
>> Thanks in advance!
>> You received this message because you are subscribed to the Google Groups
>> "Kubernetes user discussion and Q&A" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> To post to this group, send email to kubernet...@googlegroups.com
>> Visit this group at https://groups.google.com/group/kubernetes-users.
>> For more options, visit https://groups.google.com/d/optout.