AFAIK, with Flink's Kafka consumer the group coordinator is always expected to be
marked dead: we use static partition assignment internally, so Kafka's group
coordination functionality is effectively disabled and never used.
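To illustrate why no coordinator is needed: with static assignment, each parallel subtask can deterministically compute its own partitions without talking to anyone. The sketch below is purely illustrative (round-robin by partition index), not Flink's actual internal code:

```java
import java.util.ArrayList;
import java.util.List;

public class StaticAssignment {
    // Illustrative static partition assignment: each parallel subtask
    // deterministically picks its own partitions, so no Kafka group
    // coordinator is involved. (Not Flink's actual internal code.)
    static List<Integer> partitionsFor(int subtaskIndex, int numSubtasks, int numPartitions) {
        List<Integer> assigned = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            if (p % numSubtasks == subtaskIndex) {
                assigned.add(p);
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        // 6 partitions over 2 subtasks: subtask 0 gets 0,2,4; subtask 1 gets 1,3,5.
        System.out.println(partitionsFor(0, 2, 6));
        System.out.println(partitionsFor(1, 2, 6));
    }
}
```

Because every subtask arrives at the same answer independently, the consumer never joins a Kafka consumer group, and the broker-side coordinator is irrelevant.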
This may be obvious, but to get it out of the way first: are you sure that the
installed Kafka broker version matches the client (i.e. 0.10.0.1)?
On 11 August 2017 at 6:43:51 PM, AndreaKinn (kinn6...@hotmail.it) wrote:
Last week I correctly deployed a Flink program which gets data from a Kafka
broker on my local machine.
Now I'm trying to do the same thing, but moving the Kafka broker to a
different machine.
I didn't change any line of code; I report it here:
DataStream<Tuple2<String, JSONLDObject>> stream = env
        // kafkaProps: Properties holding bootstrap.servers, group.id, etc.
        .addSource(new FlinkKafkaConsumer010<>(TOPIC, new CustomDeserializer(), kafkaProps));
I have changed just the Kafka IP; the data model obviously is unchanged.
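For reference, the FlinkKafkaConsumer010 constructor shown above also takes a java.util.Properties object, which is where the broker address is configured. A minimal sketch (the broker address and group id below are placeholders, not values from the original post):

```java
import java.util.Properties;

public class ConsumerConfig {
    // Builds the Properties typically passed to FlinkKafkaConsumer010.
    // The broker address and group id here are placeholder values.
    static Properties kafkaProps(String brokerAddress) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokerAddress); // remote broker ip:port
        props.setProperty("group.id", "flink-consumer-group");  // used for offset committing
        return props;
    }

    public static void main(String[] args) {
        Properties props = kafkaProps("10.0.0.5:9092");
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```

When the broker moves to another machine, `bootstrap.servers` is the only client-side setting that has to change, which matches what was described above.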
Unfortunately, now when I start the Flink program I get this:
INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.1
12:30:48,446 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : a7a17cdec9eaa6c5
INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator giordano-1-4-200:9092 (id: 2147483647 rack: null) for
INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - *Marking the coordinator giordano-1-4-200:9092 (id: 2147483647 rack: null) dead for
I bolded the line that worries me.
Then, no data are retrieved from Kafka, although Flink continues to perform
checkpointing etc. normally...
Sent from the Apache Flink User Mailing List archive.