Hi all,

I'm trying to implement a Flink consumer that consumes a Kafka topic with
3 partitions. I've set the parallelism of the execution environment to 3
because I want each Kafka partition to be consumed by a separate parallel
task in Flink. My first question: is a one-to-one mapping between Kafka
partitions and Flink tasks always guaranteed in this setup?
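For context, my understanding is that the consumer spreads partitions over the parallel subtasks in a round-robin fashion, roughly like the sketch below. This is a simplification for illustration only (the function name and the plain modulo rule are my own; Flink's actual assignment logic may differ, e.g. by offsetting the start index):

```python
def assign_partitions(num_partitions, parallelism):
    """Simplified round-robin mapping of Kafka partitions to Flink subtasks.

    NOTE: illustrative only -- not Flink's actual implementation.
    """
    assignment = {subtask: [] for subtask in range(parallelism)}
    for partition in range(num_partitions):
        # Each partition goes to subtask (partition mod parallelism).
        assignment[partition % parallelism].append(partition)
    return assignment

# With 3 partitions and parallelism 3, each subtask would get exactly
# one partition:
print(assign_partitions(3, 3))  # {0: [0], 1: [1], 2: [2]}
```

If something like this holds, then with 3 partitions and parallelism 3 I'd expect exactly one partition per subtask, which is the behaviour I'm after.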

So far, I've set up a single Kafka broker, created a topic with 3
partitions, and tried to consume it from my Flink application with
parallelism set to 3 (all on the same machine). In the Flink log I see 3
parallel instances of each operator being created. However, when I execute
the Flink job, messages from all 3 Kafka partitions are consumed by a
single task (Process (3/3)); the other two parallel tasks are idle. Am I
missing something here? Besides setting the parallelism, is there any
other configuration I need to do?

Here are the details about my setup.

Kafka version: 0.10.2.1
Flink version: 1.3.1
Connector: FlinkKafkaConsumer010

Thanks,
Isuru
