Hello, I have a question about how Kubernetes Jobs are handled.
I have a queue to submit a certain amount of incoming tasks (Celery and
RabbitMQ take care of this). Each of these tasks is, in fact, a
*workflow* which will be executed in a worker (a Celery worker with a DAG
executor inside). Each s
> resources, Pods will remain "Pending" until resources
> become available. Cluster scalers may use pending pods as a signal that the
> cluster size should be increased.
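That pending-pod signal is easy to check for yourself; on a real cluster something like `kubectl get pods --field-selector=status.phase=Pending` lists them. As a stand-alone illustration, here is a minimal Python sketch of the same filter, where the pod dicts are hypothetical stand-ins for the objects the API returns:

```python
# Minimal sketch: find "Pending" pods, the same signal a cluster
# autoscaler watches. The pod dicts below are hypothetical stand-ins
# for what the Kubernetes API (e.g. `kubectl get pods -o json`) returns.

def pending_pods(pods):
    """Return the pods whose status.phase is Pending."""
    return [p for p in pods if p.get("status", {}).get("phase") == "Pending"]

pods = [
    {"metadata": {"name": "job-a"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "job-b"}, "status": {"phase": "Pending"}},
]
print([p["metadata"]["name"] for p in pending_pods(pods)])  # ['job-b']
```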
>
> On Mon, Oct 10, 2016 at 5:58 AM, Diego Rodríguez Rodríguez <
> diego...@gmail.com> wrote:
>>
>> On Wednesday, October 12, 2016 at 1:28:45 PM UTC-4, Daniel Smith wrote:
>>>
>>> Ah, you're blocked because the quota check reconciles slowly. The quick
>>> fix is probably to just get more quota.
>>>
>>> +David who may know of an already ex
> tes kubernetes
> jobs under the hood. You will have more flexibility, etc.
>
>
> On Thursday, October 13, 2016, Diego Rodríguez Rodríguez <
> diegorgz...@gmail.com> wrote:
>
>> I have already created an issue
>> <https://github.com/kubernetes/kubernetes/i
and Q&A wrote:
> Watch definitely should do the thing you want.
>
> In a loop:
> 1. List. Note the returned RV. This gives you the current state of the
> system.
> 2. Watch, passing the RV.
> 3. Process events until the watch closes or times out.
>
> On Thu, Oct
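The list-then-watch loop described in those three steps can be sketched without a cluster. Below, `FakeAPI` is a hypothetical stand-in for the API server, just to show the resume-from-resourceVersion logic; against a real cluster you would use `kubernetes.watch.Watch()` from the official Python client instead.

```python
# Sketch of the list/watch loop from the advice above, run against a
# fake API so it executes stand-alone. FakeAPI is a hypothetical
# stand-in for the Kubernetes API server.

class FakeAPI:
    def __init__(self, events):
        # events: list of (resource_version, event_description) pairs
        self._events = events

    def list(self):
        """Step 1: return current state plus the resourceVersion (RV)."""
        return {"state": "initial snapshot", "resource_version": 0}

    def watch(self, since_rv):
        """Step 2: yield only events newer than the given RV."""
        for rv, event in self._events:
            if rv > since_rv:
                yield rv, event

def run_once(api):
    seen = []
    snapshot = api.list()                # 1. List; note the returned RV.
    rv = snapshot["resource_version"]
    for rv, event in api.watch(rv):      # 2. Watch, passing the RV.
        seen.append(event)               # 3. Process events until the
    return seen                          #    watch closes or times out.

api = FakeAPI([(1, "ADDED job-1"), (2, "MODIFIED job-1"), (3, "DELETED job-1")])
print(run_once(api))  # ['ADDED job-1', 'MODIFIED job-1', 'DELETED job-1']
```

In the real client the loop repeats: when the watch times out you re-list, take the fresh RV, and watch again from there.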
les, for me does not work,
because, as I read in their docs, they do not support k8s Jobs yet.
On 14 October 2016 at 10:36, Diego Rodríguez Rodríguez <
diegorgz...@gmail.com> wrote:
> This is what I am doing already
>
> def get_job_state(job_id):
>> response = requ
> ry the
> watch feature.
>
> On Fri, Oct 14, 2016 at 5:15 AM, Diego Rodríguez Rodríguez <
> diegorgz...@gmail.com
> > wrote:
>
>> I have identified the problem. It has nothing to do with Kubernetes; it
>> is about how Python's requests module handl