Re: How can I cancel a Flink job safely without a special stop message in the stream?

2017-08-27 Thread Zor X.L.
Hi, We use Kafka because:
- it is a high-throughput message queue
  - we want about 2 GB/s in/write and 7 GB/s out/read performance (400 B/msg)
- it is popular (well, this is kind of important...)
- the input has a start and an end, but we want to process new data as soon as

Re: How can I cancel a Flink job safely without a special stop message in the stream?

2017-08-25 Thread Aljoscha Krettek
Hi, I don't think implementing a StoppableSource is an option here since you want to use the Flink Kafka source. What is your motivation for this? Especially, why are you using Kafka if the input is bounded and you will shut down the job at some point? Also, regarding StoppableSource: this

Re: How can I cancel a Flink job safely without a special stop message in the stream?

2017-08-14 Thread Nico Kruber
Hi, have you tried letting your source also implement the StoppableFunction interface as suggested by the SourceFunction javadoc? If a source is stopped, e.g. after identifying some special signal from the outside, it will continue processing all remaining events and the Flink program will
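The stop-vs-cancel distinction Nico describes can be sketched in plain Java. Note this is a simplified, hypothetical stand-in for illustration only: `StoppableDemoSource`, `offer`, and `emittedCount` are invented names, not Flink's real `SourceFunction`/`StoppableFunction` interfaces; the point is only the contract, where `stop()` lets the source drain what it already has and return (the job finishes), while `cancel()` exits the run loop as soon as possible.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A minimal sketch of the stop-vs-cancel contract that StoppableFunction adds
// on top of SourceFunction. Class and method names are simplified stand-ins,
// not the real Flink interfaces.
public class StoppableDemoSource {
    private volatile boolean running = true;   // cleared by cancel(): hard stop
    private volatile boolean stopped = false;  // set by stop(): graceful stop

    private final Queue<String> buffer = new ArrayDeque<>();
    private int emitted = 0;

    public void offer(String record) { buffer.add(record); }

    /** Emulates SourceFunction#run: emit buffered records until cancelled,
     *  or until the buffer is drained after stop() was called. */
    public void run() {
        while (running) {
            String record = buffer.poll();
            if (record == null) {
                if (stopped) return;  // graceful end: job can finish cleanly
                continue;             // a real source would block on a poll here
            }
            emitted++;                // ctx.collect(record) in a real source
        }
        // Falling out of the loop is the cancel() path: remaining records are dropped.
    }

    public void stop()   { stopped = true; }   // graceful: drain, then return from run()
    public void cancel() { running = false; }  // hard: exit run() as soon as possible

    public int emittedCount() { return emitted; }
}
```

With this contract, a stopped source still emits everything it had buffered, whereas a cancelled source emits nothing further, which matches the "continue processing all remaining events" behavior described above.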

Re: How can I cancel a Flink job safely without a special stop message in the stream?

2017-08-14 Thread Zor X.L.
Bump... On 2017/8/11 9:36, Zor X.L. wrote: Hi, What we want to do is cancel the Flink job after all upstream data has been processed. We use Kafka as our input and output, and use the SQL capability of the Table API, by the way. A possible solution is: * embed a stop message at the tail of
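The stop-message ("poison pill") idea proposed here can be sketched as follows. This is a hedged illustration, not Flink API: `PoisonPillSource`, its `run(Iterable)` signature, and the `__STOP__` marker value are all invented for the sketch; a real Kafka source would poll records in a loop instead of iterating a collection.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "embed a stop message at the tail of the stream" approach:
// the source consumes records until it sees a sentinel value, then returns
// from its run loop so the job can end cleanly.
public class PoisonPillSource {
    static final String STOP_MARKER = "__STOP__"; // hypothetical sentinel value

    private volatile boolean running = true;

    /** Emulates consuming from Kafka: emit records until the marker arrives. */
    public List<String> run(Iterable<String> upstream) {
        List<String> emitted = new ArrayList<>();
        for (String record : upstream) {
            if (!running) break;              // cancel() path
            if (STOP_MARKER.equals(record)) {
                running = false;              // marker seen: leave run() -> job finishes
                break;
            }
            emitted.add(record);              // ctx.collect(record) in a real source
        }
        return emitted;
    }

    public void cancel() { running = false; }
}
```

One caveat worth noting: with a multi-partition Kafka topic, a single marker record only reaches one partition, so each partition (or each parallel source instance) would need to see its own marker before the whole job can finish.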