Thank you, Chesnay.
Just to make sure: if the node where the job was submitted goes down, the
processing will continue, I hope?
Do I need to ensure this by configuration?
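(For context, a minimal sketch of what surviving a JobManager failure requires: Flink needs high availability configured in flink-conf.yaml, e.g. ZooKeeper-based HA. All hosts, ports, and paths below are illustrative placeholders, not values from this thread:)

```yaml
# Sketch of a ZooKeeper-based JobManager HA setup in flink-conf.yaml.
# Quorum addresses and storage path are placeholders.
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host1:2181,zk-host2:2181,zk-host3:2181
high-availability.storageDir: hdfs:///flink/ha/
```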

By the way, I added the --detached parameter to the run command, but it didn't
go into a background process as I would have expected. Am I guessing wrong?
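(A minimal sketch of detached submission with the CLI, assuming a standalone cluster is already running; the jar path and entry class are illustrative placeholders:)

```shell
# Submit in detached mode: the client returns right after submission
# instead of waiting for the job to finish.
# Jar path and class name below are placeholders.
./bin/flink run --detached -c com.example.MyJob /path/to/my-job.jar

# The client prints a JobID; the job keeps running on the cluster.
./bin/flink list    # lists the running jobs
```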

Thanks!
Rob






 >-------- Original message --------
 >From: Chesnay Schepler ches...@apache.org
 >Subject: Re: how many 'run -c' commands to start?
 >To: user@flink.apache.org
 >Sent: 28.09.2017 15:05



 
> Hi!
>
> Given a Flink cluster, you would only call `flink run ...` to submit a
> job once; for simplicity I would submit it on the node where you started
> the cluster. Flink will automatically distribute the job across the
> cluster, in smaller independent parts known as Tasks.
>
> Regards,
> Chesnay
>
> On 28.09.2017 08:31, r. r. wrote:
> > Hello
> >
> > I successfully ran a job with 'flink run -c', but this is for the local
> > setup.
> >
> > How should I proceed with a cluster? Will flink automagically instantiate
> > the job on all servers - I hope I don't have to start 'flink run -c' on
> > all machines.
> >
> > New to flink and bigdata, so sorry for the probably silly question
> >
> > Thanks!
> >
> > Rob