Hi,
We use Kafka because:
- it is a high-throughput message queue
- we want about 2 GB/s write (in) and 7 GB/s read (out)
performance (400 B/msg)
- it is popular (well, this is kind of important...)
- the input has a start and an end, but we want to process new data as
soon as it arrives
Hi,
I don't think implementing a StoppableSource is an option here since you want
to use the Flink Kafka source. What is your motivation for this? Especially,
why are you using Kafka if the input is bounded and you will shut down the job
at some point?
Also, regarding StoppableSource: this
Hi,
have you tried letting your source also implement the StoppableFunction
interface as suggested by the SourceFunction javadoc?
If a source is stopped, e.g. after identifying some special signal from the
outside, it will continue processing all remaining events and the Flink
program will shut down gracefully.
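The stop-vs-cancel contract described above can be sketched as follows. This is a hypothetical, self-contained mock (it does not import the actual Flink interfaces, which in pre-1.9 versions were `SourceFunction` and `StoppableFunction`); the class and method names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch mirroring Flink's SourceFunction/StoppableFunction
// contract: cancel() is a hard stop, stop() requests a graceful shutdown
// after which the job can finish in a FINISHED state.
class StoppableCounterSource {
    private volatile boolean running = true;   // cleared by cancel()
    private volatile boolean stopped = false;  // set by stop()
    final List<Integer> emitted = new ArrayList<>();

    // Emit events until stop()/cancel() is called or the input ends.
    void run(int maxEvents) {
        int next = 0;
        while (running && !stopped && next < maxEvents) {
            emitted.add(next++); // in a real source: ctx.collect(...)
        }
    }

    // Graceful stop: the source stops emitting new data, the pipeline
    // drains what was already emitted, and the job terminates cleanly.
    void stop() { stopped = true; }

    // Hard cancellation, as triggered by job cancellation.
    void cancel() { running = false; }
}
```

The key design point is that both flags are `volatile`, since `stop()`/`cancel()` are invoked from a different thread than the `run()` loop.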
Bump...
On 2017/8/11 9:36, Zor X.L. wrote:
Hi,
What we want to do is cancelling the Flink job after all upstream data
were processed.
We use Kafka as our input and output, and use the SQL capability of
Table API by the way.
A possible solution is:
* embed a stop message at the tail of
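The "stop message" (poison-pill) idea above can be sketched as a consumer loop that terminates once it sees a sentinel record appended after the last real record. This is a minimal illustration, not Kafka consumer code; the sentinel value and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the poison-pill pattern: the producer appends a
// sentinel record at the tail of the topic, and the consumer stops once
// it sees that sentinel, signalling that all upstream data was processed.
class PoisonPillConsumer {
    static final String POISON_PILL = "__END_OF_STREAM__"; // assumed sentinel

    // Consume records in order until the sentinel appears; everything
    // before the sentinel is the actual payload.
    static List<String> consumeUntilPill(List<String> records) {
        List<String> out = new ArrayList<>();
        for (String r : records) {
            if (POISON_PILL.equals(r)) {
                break; // tail marker reached: safe to cancel the job
            }
            out.add(r); // in a real job: forward into the Table API pipeline
        }
        return out;
    }
}
```

One caveat with this pattern on a partitioned Kafka topic: a sentinel must be written to every partition (or carry enough metadata to cover them all), otherwise consumers of other partitions never see the stop signal.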