Hi Till

 

Thank you for your reply. Just as you suggest, in our current implementation 
we periodically check the free slots through the REST API and then submit 
jobs once enough slots are available. 
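
For reference, a minimal sketch of such a polling check might look like the 
following (this is not our actual implementation; the JobManager address, the 
required slot count and the job jar path are placeholders, and the 
"slots-available" field comes from the REST API's /overview endpoint):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubmitWhenSlotsFree {

    // Extracts the "slots-available" count from the /overview JSON response.
    private static final Pattern SLOTS =
        Pattern.compile("\"slots-available\"\\s*:\\s*(\\d+)");

    public static void main(String[] args) throws Exception {
        String restBase = "http://jobmanager-host:8081"; // placeholder address
        int requiredSlots = 100;          // parallelism of the job to submit
        HttpClient client = HttpClient.newHttpClient();

        while (true) {
            HttpRequest request =
                HttpRequest.newBuilder(URI.create(restBase + "/overview")).GET().build();
            String body =
                client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            Matcher m = SLOTS.matcher(body);
            if (m.find() && Integer.parseInt(m.group(1)) >= requiredSlots) {
                // Enough free slots: hand the job to the cluster via the CLI.
                new ProcessBuilder("flink", "run", "-p",
                        String.valueOf(requiredSlots), "/path/to/job.jar")
                    .inheritIO().start().waitFor();
                break;
            }
            Thread.sleep(10_000); // poll every 10 seconds
        }
    }
}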

However, since there is already the concept of a ‘Flink cluster’, could we 
also think about a ‘cluster scheduling strategy’? The strategy may differ 
between long-running jobs and batch jobs. Could you please give it some thought?

 

Xinyu Zhang

 

From: Till Rohrmann [mailto:trohrm...@apache.org] 
Sent: Wednesday, January 02, 2019 10:58 PM
To: 张馨予 <wsz...@gmail.com>
Cc: user <user@flink.apache.org>
Subject: Re: Are Jobs allowed to be pending when slots are not enough

 

Hi Xinyu,

 

at the moment there is no such functionality in Flink. Whenever you submit a 
job, Flink will try to execute the job right away. If the job cannot get enough 
slots, then it will wait until the slot.request.timeout occurs and either fail 
or retry if you have a RestartStrategy configured.
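
For reference, a fixed-delay RestartStrategy can be set on the execution 
environment as in the following sketch (the attempt count and delay are only 
illustrative values):

import java.util.concurrent.TimeUnit;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategyExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Retry the job up to 3 times, waiting 10 seconds between attempts.
        env.setRestartStrategy(
            RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));
        // ... job definition and env.execute() would follow here.
    }
}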

 

If you want to wait until you have enough slots before submitting a job, I 
would suggest that you write yourself a small service which uses Flink's REST 
API [1] to query the status and finally submit the job if there are enough free 
slots.

 

[1] https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/rest_api.html#overview-1

 

Cheers,

Till

 

On Wed, Jan 2, 2019 at 2:09 PM 张馨予 <wsz...@gmail.com> wrote:

Hi all

 

We submit batch jobs to a Flink cluster which has, for example, 500 slots. 
The parallelism of these jobs may vary between 1 and 500. 

Is there any configuration that makes jobs run in submission order as soon as 
the cluster has enough slots? If not, how could we meet this requirement?

 

Thanks.

 

Xinyu Zhang
