Hi,
I am using Flink 1.4 along with Kafka 0.11. My streaming job has 4 Kafka
consumers, each subscribing to a different topic (4 topics in total). The
stream from each consumer gets processed in 3 to 4 different ways, thereby
writing to a total of 12 sinks (Cassandra tables). When the job runs, up to
8 or 10 records
In Kafka08Fetcher, a Map is used to manage the multiple consumer threads.
But I notice that in Kafka09Fetcher and Kafka010Fetcher it is gone. So how
does Kafka09Fetcher implement multi-threaded reading of partitions from
Kafka?
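My understanding (happy to be corrected) is that with the new KafkaConsumer API in 0.9+, a single consumer can poll all of its assigned partitions, so Kafka09Fetcher runs one dedicated consumer thread and passes batches to the fetcher through a handover object instead of keeping a Map of per-broker threads as in 0.8. A rough plain-Java sketch of that handover pattern (class and method names here are my own simplification, not Flink's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class HandoverSketch {

    // Single-slot handover: the consumer thread parks the next batch here
    // and the fetcher thread takes it; both block on the shared slot.
    static final class Handover<T> {
        private T element;
        private boolean closed;

        synchronized void produce(T batch) throws InterruptedException {
            while (element != null && !closed) wait();
            if (closed) throw new IllegalStateException("closed");
            element = batch;
            notifyAll();
        }

        synchronized T pollNext() throws InterruptedException {
            while (element == null && !closed) wait();
            if (element == null) return null; // closed and fully drained
            T batch = element;
            element = null;
            notifyAll();
            return batch;
        }

        synchronized void close() {
            closed = true;
            notifyAll();
        }
    }

    // Runs one "consumer" thread that produces batches interleaving records
    // from several partitions, mimicking a single KafkaConsumer.poll() that
    // returns records for all assigned partitions; returns the record count
    // seen by the "fetcher" side.
    static int runDemo() throws Exception {
        Handover<List<String>> handover = new Handover<>();

        Thread consumerThread = new Thread(() -> {
            try {
                for (int batchNo = 0; batchNo < 3; batchNo++) {
                    List<String> batch = new ArrayList<>();
                    for (int partition = 0; partition < 4; partition++) {
                        batch.add("partition-" + partition + "/offset-" + batchNo);
                    }
                    handover.produce(batch);
                }
            } catch (InterruptedException ignored) {
            } finally {
                handover.close();
            }
        });
        consumerThread.start();

        int records = 0;
        List<String> batch;
        while ((batch = handover.pollNext()) != null) {
            records += batch.size();
        }
        consumerThread.join();
        return records;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("records=" + runDemo()); // 3 batches x 4 partitions = 12
    }
}
```

So there is still more than one thread, just not one per broker: one consumer thread plus the fetcher, coordinated through the handover.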
2017-12-08 18:25 GMT+08:00 Stefan Richter :
> You need to be a bit careful if your sink needs exactly-once semantics. In
> this case things should either be idempotent or the db must support rolling
> back changes between checkpoints, e.g. via transactions. Commits
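To illustrate the idempotence point above: if the sink's writes are keyed upserts (which Cassandra inserts by primary key effectively are), then replaying the records since the last checkpoint after a failure leaves the table in the same state, so no rollback is needed. A minimal plain-Java sketch of that property (the sink and record names are mine, not Flink or Cassandra API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IdempotentSinkSketch {

    // Stand-in for a table keyed by primary key: writing the same
    // (key, value) pair twice has no further effect (an upsert).
    static final class UpsertSink {
        final Map<String, String> table = new HashMap<>();

        void write(String key, String value) {
            table.put(key, value); // idempotent: repeating the write changes nothing
        }
    }

    // Runs the records through the sink once; if replayAfterFailure is set,
    // runs them a second time to simulate re-emission after restoring from
    // the last checkpoint.
    static Map<String, String> run(List<String[]> records, boolean replayAfterFailure) {
        UpsertSink sink = new UpsertSink();
        for (String[] r : records) sink.write(r[0], r[1]);
        if (replayAfterFailure) {
            for (String[] r : records) sink.write(r[0], r[1]);
        }
        return sink.table;
    }

    public static void main(String[] args) {
        List<String[]> records = List.of(
                new String[]{"order-1", "priced"},
                new String[]{"order-2", "shipped"});
        // Same final state with and without replay.
        System.out.println(run(records, false).equals(run(records, true))); // true
    }
}
```

A non-idempotent sink (e.g. appending rows or incrementing counters) would fail this check, which is when the transactional/rollback approach becomes necessary.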
Hi group,
We have the graph below, to which we added metrics for latency
calculation.
We have two streams, consumed as follows:
* ordersStream and pricesStream - both are consumed by two
operators, CoMapperA and CoMapperB, each using connect.
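Since the job code isn't shown, here is a plain-Java simulation of what connecting two streams into one co-operator means (a stand-in for Flink's ConnectedStreams/CoMapFunction semantics; all names below are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectSketch {

    // Minimal stand-in for a CoMapFunction: one handler method per input.
    interface CoMap<A, B, O> {
        O map1(A order);
        O map2(B price);
    }

    // Simulates first.connect(second).map(coMapper): elements from both
    // inputs are routed into the same operator instance, each through its
    // own method, producing one combined output stream.
    static <A, B, O> List<O> apply(List<A> first, List<B> second, CoMap<A, B, O> f) {
        List<O> out = new ArrayList<>();
        for (A a : first) out.add(f.map1(a));
        for (B b : second) out.add(f.map2(b));
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical co-mapper that tags each element with its source stream.
        CoMap<String, String, String> coMapperA = new CoMap<>() {
            public String map1(String order) { return "order:" + order; }
            public String map2(String price) { return "price:" + price; }
        };
        List<String> out = apply(List.of("o1"), List.of("p1"), coMapperA);
        System.out.println(out); // [order:o1, price:p1]
    }
}
```

In the real job each of CoMapperA and CoMapperB would be one such operator, each connecting ordersStream with pricesStream.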
Initially