Hi Madan,
Perhaps you can filter out inactive topics in the client first and then
pass the filtered list of topics to KafkaConsumer.
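For example, a minimal sketch of that filtering step, assuming you already know (or can query elsewhere) which topics are inactive; the topic names and the inactive set here are hypothetical:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TopicFilter {
    // Drop topics known to be inactive before handing the list
    // to the Kafka consumer / KafkaSource builder.
    public static List<String> activeTopics(List<String> all, Set<String> inactive) {
        return all.stream()
                .filter(t -> !inactive.contains(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> topics = List.of("orders", "payments", "audit"); // hypothetical
        Set<String> inactive = Set.of("audit");                       // hypothetical
        System.out.println(activeTopics(topics, inactive)); // prints [orders, payments]
    }
}
```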
Best,
Feng
On Tue, Nov 7, 2023 at 10:42 AM Madan D via user wrote:
Hello Hang/Lee,
Thanks! In my use case we listen to multiple topics, but in a few cases one of the topics may become inactive if the producer decides to shut that topic down, while the other topics still receive data. What we observe is that if one of the topics becomes inactive, the entire
Hi, Madan.
This error looks like there is a problem when the consumer tries to read the topic metadata. If you use the same source for these topics, the Kafka connector cannot skip one of them. As you say, you would need to modify the connector's default behavior.
Maybe you should read the code
Hi Madan,
Do you mean you want to restart only the failed tasks, rather than
restarting the entire pipeline region? As far as I know, currently Flink
does not support task-level restart, but requires restarting the pipeline
region.
Best,
Junrui
Madan D via user wrote on Wed, Oct 11, 2023 at 12:37:
> Hello
Hello Team, We are running a Flink pipeline consuming data from multiple
topics, but we recently encountered that if one topic has issues
(with partitions, etc.), the whole Flink pipeline fails, which
affects the other topics. Is there a way we can make the Flink pipeline keep running
Hi, Kenan.
Maybe you should set the `client.id.prefix` to avoid the conflict.
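As a minimal sketch of what that could look like, assuming the properties are later handed to Flink's Kafka source (e.g. via the builder's setProperties); the job name "job-a" is hypothetical:

```java
import java.util.Properties;

public class ClientIdPrefixDemo {
    // Build per-job consumer properties so two jobs consuming the same
    // topic do not clash on client.id; jobName is a hypothetical label.
    public static Properties consumerProps(String jobName) {
        Properties props = new Properties();
        props.setProperty("client.id.prefix", jobName);    // distinct per job
        props.setProperty("group.id", jobName + "-group"); // distinct per job
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps("job-a").getProperty("client.id.prefix")); // prints job-a
    }
}
```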
Best,
Hang
liu ron wrote on Mon, Jul 31, 2023 at 19:36:
> Hi, Kenan
Hi, Kenan
After studying the source code and searching Google for related
information, I think this should be caused by a duplicate client_id [1]. You
can check whether there are other jobs using the same group_id when consuming this
topic; group_id is used in Flink to assemble the client_id [2]. If there are
Any help with the exception below is appreciated.
Also, my KafkaSource code is below. The parallelism for this task is 16.
KafkaSource<String> sourceStationsPeriodic = KafkaSource.<String>builder()
        .setBootstrapServers(parameter.get(KAFKA_SOURCE_STATIONS_B
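For reference, a completed version of that builder call might look like the sketch below. This is an assumption-laden reconstruction, not the original job's code: it requires flink-connector-kafka on the classpath, and the parameter key, topic, and group names shown are hypothetical placeholders.

```java
// Sketch only: all names are hypothetical stand-ins for the truncated snippet above.
KafkaSource<String> sourceStationsPeriodic = KafkaSource.<String>builder()
        .setBootstrapServers(parameter.get("kafka.bootstrap.servers")) // hypothetical key
        .setTopics("stations-periodic")                                // hypothetical topic
        .setGroupId("stations-consumer")                               // hypothetical group
        .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();
```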
Hi Shannon,
Some questions:
- Which Flink version are you using?
- Can you provide some more logs, in particular the log entries
before this event from the Kafka connector?
Also, is it possible that the Kafka broker was in an erroneous state?
Did the error happen after weeks of data consumption
Does anyone have a guess what might cause this exception?
java.lang.RuntimeException: Unable to find a leader for partitions:
[FetchPartition {topic=usersignals, partition=1, offset=2825838}]
at
org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.findLeaderForPartitions(LegacyF