[ https://issues.apache.org/jira/browse/KAFKA-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855773#comment-17855773 ]
Vinicius Vieira dos Santos edited comment on KAFKA-16986 at 6/18/24 11:54 AM:
------------------------------------------------------------------------------
[~jolshan] The only problem I currently see is the log itself. I took a look and
many of our applications log this message, which pollutes the logs from time to
time. I don't know exactly what process triggers it, but it is printed several
times during the pod's life cycle, not only at startup. The screenshot I added
to the issue shows this: for the same topic and the same partition there are
several log lines at different times in the same pod, without restarts or
anything like that. It is also important to emphasize that throughout the life
cycle of these applications we have only one producer instance, which stays the
same for the entire life of the pod. I even reviewed the code of our
applications to confirm there is no situation where the producer keeps being
destroyed and recreated.
I will leave below the occurrences in the log of the same pod, which has been
running for 2 days: https://pastecode.io/s/zn1u118d
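
For anyone who wants to observe the periodicity, here is a minimal sketch of the
setup described above: a single long-lived producer whose metadata refresh
interval is shortened so that any per-refresh logging shows up within seconds
instead of on the default ~5 minute interval. The bootstrap address, topic name
and the 10 s metadata.max.age.ms are illustrative values, not our production
settings:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EpochResetLogRepro {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Force frequent metadata refreshes so any per-refresh INFO logging appears
        // quickly rather than on the default ~5 minute interval.
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "10000");

        // One producer instance for the whole lifetime of the process, as in our pods.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 60; i++) {
                producer.send(new ProducerRecord<>("PAYMENTS", "key-" + i, "value-" + i));
                Thread.sleep(5_000); // keep the producer alive across several metadata refreshes
            }
        }
    }
}
{code}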
> After upgrading to Kafka 3.4.1, the producer constantly produces logs related
> to topicId changes
> ------------------------------------------------------------------------------------------------
>
> Key: KAFKA-16986
> URL: https://issues.apache.org/jira/browse/KAFKA-16986
> Project: Kafka
> Issue Type: Bug
> Components: clients, producer
> Affects Versions: 3.0.1
> Reporter: Vinicius Vieira dos Santos
> Priority: Minor
> Attachments: image.png
>
>
> When updating the Kafka broker from version 2.7.0 to 3.4.1, we noticed that
> the applications began to log the message "{*}Resetting the last seen epoch
> of partition PAYMENTS-0 to 0 since the associated topicId changed from null
> to szRLmiAiTs8Y0nI8b3Wz1Q{*}" very frequently. From what I understand, this
> behavior is not expected, because the topic was not deleted and recreated, so
> the client should simply use the cached data and never reach this log line.
> We have some applications with around 15 topics of 40 partitions each, which
> works out to roughly 15 × 40 = 600 of these log lines every time a metadata
> update occurs.
> The main thing for me is to know whether this could indicate a problem, or
> whether I can simply change the log level of the
> org.apache.kafka.clients.Metadata class to WARN without worrying.
>
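> For reference, a minimal sketch of that log-level change, assuming Log4j 2 is
> the SLF4J backend in the application (the Configurator API is in log4j-core;
> an equivalent declarative logger entry in the logging configuration would do
> the same thing):
>
> {code:java}
> import org.apache.logging.log4j.Level;
> import org.apache.logging.log4j.core.config.Configurator;
>
> public class QuietMetadataLogger {
>     public static void main(String[] args) {
>         // Raise only the noisy client logger; all other loggers keep their configured levels.
>         Configurator.setLevel("org.apache.kafka.clients.Metadata", Level.WARN);
>     }
> }
> {code}
>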
> There are other reports of the same behavior, for example:
> https://stackoverflow.com/questions/74652231/apache-kafka-resetting-the-last-seen-epoch-of-partition-why
>
> *Some log occurrences over an interval of about 7 hours; each block refers to
> one instance of the application in Kubernetes:*
>
> !image.png!
> *My scenario:*
> *Application:*
> - Java: 21
> - Kafka client: 3.6.1; also tested on 3.0.1 with the same behavior
> *Broker:*
> - Cluster running on Kubernetes with the bitnami/kafka:3.4.1-debian-11-r52
> image
>
> If you need any more details, please let me know.