[ https://issues.apache.org/jira/browse/KAFKA-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927392#comment-15927392 ]
Jiangjie Qin commented on KAFKA-4907:
-------------------------------------

[~junrao] Thanks for the explanation. Yes, that does sound like an issue. But it also doesn't seem ideal to simply accept any timestamp for a log-compacted topic. There seem to be two scenarios:

1. Users are mirroring a log-compacted topic to a new cluster. In this case the broker should just accept any timestamp.
2. Users are producing real-time messages into a log-compacted topic. In this case the broker should reject a timestamp that falls outside the message.timestamp.difference.max.ms range.

I am not sure what the best way is to address both cases. Because the broker cannot distinguish between the two scenarios, it seems that manual configuration is necessary, i.e. in case 1 the users would have to manually set message.timestamp.difference.max.ms to Long.MAX_VALUE and remove the override after the mirror has caught up.

There might be a way for the broker to change the configuration automatically by guessing what the user is doing. For example, the broker could accept any timestamp until it sees a timestamp that falls into the acceptable range (assuming the mirror has caught up by then). But this seems unintuitive and is not guaranteed to work, given that timestamps can actually arrive out of order.

> compacted topic shouldn't reject messages with old timestamp
> ------------------------------------------------------------
>
>                 Key: KAFKA-4907
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4907
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.1.0
>            Reporter: Jun Rao
>
> In LogValidator.validateTimestamp(), we check the validity of the timestamp in the message without checking whether the topic is compacted or not. This can cause messages to a compacted topic to be rejected when they shouldn't be.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
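For illustration, the timestamp check being discussed can be sketched as a stand-alone class. This is a hypothetical simplification, not Kafka's actual LogValidator code: the class and method names are invented, and it only models the core rule that a record is rejected when its timestamp deviates from the broker's wall-clock time by more than message.timestamp.difference.max.ms.

```java
// Hypothetical sketch of the validation rule described in this issue.
// Not Kafka's real LogValidator: names and structure are illustrative only.
public class TimestampValidator {

    // Corresponds to the message.timestamp.difference.max.ms topic config.
    private final long maxDiffMs;

    public TimestampValidator(long maxDiffMs) {
        this.maxDiffMs = maxDiffMs;
    }

    // A record is accepted only if its timestamp is within maxDiffMs
    // of the broker's current time. Old mirrored records fail this check
    // unless maxDiffMs is effectively unbounded.
    public boolean isValid(long recordTimestampMs, long nowMs) {
        return Math.abs(nowMs - recordTimestampMs) <= maxDiffMs;
    }
}
```

With maxDiffMs set to Long.MAX_VALUE, every timestamp passes, which is the manual workaround described above for the mirroring scenario; with a finite value, records carrying old timestamps (e.g. historical data being mirrored into a compacted topic) are rejected even though compaction may still need them.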