[ https://issues.apache.org/jira/browse/KAFKA-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023619#comment-16023619 ]
Jiangjie Qin commented on KAFKA-4340:
-------------------------------------

[~edenhill] I think one of the well-established guarantees in producing is that either the entire batch succeeds or the entire batch fails. A lot of code is actually built on top of this guarantee, for example the callbacks and the offsets passed to the callbacks. So if one of the messages in the batch has a problem, by current design the entire batch will fail. I do agree that in this case the producer does not really have much to do, but that does not seem to be a problem introduced by this patch. Silently discarding the message on the broker side would introduce other problems, because the producer acks are per batch: the producer would assume the message has been successfully sent even though it has been removed. It would also break the ordering enforced by log retention, which always removes an old segment before deleting messages in newer segments.

> Change the default value of log.message.timestamp.difference.max.ms to the
> same as log.retention.ms
> ---------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4340
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4340
>             Project: Kafka
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 0.10.1.0
>            Reporter: Jiangjie Qin
>            Assignee: Jiangjie Qin
>             Fix For: 0.11.0.0
>
>
> [~junrao] brought up the following scenario:
> If users are pumping data with timestamps that have already passed
> log.retention.ms into Kafka, the messages will be appended to the log but
> will be immediately rolled out by the log retention thread when it kicks
> in, and the messages will be deleted.
> To avoid this produce-and-delete scenario, we can set the default value of
> log.message.timestamp.difference.max.ms to be the same as log.retention.ms.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
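The proposal above can be sketched as a broker configuration fragment. This is a hedged illustration, not text from the issue: the 604800000 ms value assumes the common 7-day retention default, and under the proposal log.message.timestamp.difference.max.ms would simply default to whatever log.retention.ms resolves to.

```properties
# server.properties sketch (assumed values, for illustration only)

# Segments older than this are eligible for deletion by the retention
# thread. 604800000 ms = 7 days, a commonly used retention default.
log.retention.ms=604800000

# Proposed default: reject a produce request whose message timestamp
# differs from the broker's clock by more than the retention period.
# This rejects the whole batch up front (matching the batch-level ack
# guarantee discussed above) instead of appending messages that the
# retention thread would delete almost immediately.
log.message.timestamp.difference.max.ms=604800000
```

With this setting, a producer sending records whose timestamps are already older than the retention window gets an explicit error for the batch, rather than a success ack for data that is then silently removed.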