[ 
https://issues.apache.org/jira/browse/KAFKA-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16529386#comment-16529386
 ] 

Badai Aqrandista commented on KAFKA-6939:
-----------------------------------------

[~bgummalla]

Yes, this issue still exists in the code, because Kafka assumes the timestamp 
is in milliseconds. If a user incorrectly assumes it is in seconds or 
microseconds, Kafka will still accept the record, but the message will either 
be deleted immediately (if the timestamp is in seconds) or generate a large 
number of segment files (if the timestamp is in microseconds).

So, a reasonable default for {{log.message.timestamp.difference.max.ms}} should 
protect users from these incorrect assumptions. This is especially important 
when the Kafka client is not Java based and its default time library produces 
timestamps in units other than milliseconds.
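To illustrate the effect of the proposed bound, here is a small sketch (in Python, not the actual broker code; the function name `would_reject` and the simple absolute-difference check are assumptions standing in for the broker-side validation):

```python
import time

# Proposed default: 500 years expressed in milliseconds.
# 500 years * 365 days * 86400 seconds * 1000 ms = 15,768,000,000,000
MAX_DIFF_MS = 500 * 365 * 86400 * 1000

def would_reject(record_ts_ms, broker_now_ms, max_diff_ms=MAX_DIFF_MS):
    """Hypothetical sketch of the broker-side check: reject a record whose
    timestamp differs from the broker clock by more than max_diff_ms."""
    return abs(broker_now_ms - record_ts_ms) > max_diff_ms

now_ms = int(time.time() * 1000)   # correct unit: milliseconds
ts_seconds = now_ms // 1000        # producer mistakenly sent seconds
ts_micros = now_ms * 1000          # producer mistakenly sent microseconds

# A microsecond timestamp lands tens of thousands of years in the
# future, far beyond the 500-year bound, so it is rejected.
# A second timestamp lands near the 1970 epoch, within 500 years of
# now, so it is still accepted -- and then deleted almost immediately
# by retention, as described above.
```

This shows why the 500-year default catches the microsecond mistake (future timestamps) but not the seconds mistake (past timestamps), which instead surfaces as immediate deletion.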

Thanks
Badai

> Change the default of log.message.timestamp.difference.max.ms to 500 years
> --------------------------------------------------------------------------
>
>                 Key: KAFKA-6939
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6939
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Badai Aqrandista
>            Priority: Minor
>
> If a producer incorrectly provides a timestamp in microseconds (rather than 
> milliseconds), the record is accepted by default and can cause the broker to 
> roll segment files continuously. On a heavily used broker, this generates a 
> large number of index files, which can then cause the broker to hit 
> `vm.max_map_count`.
> So I'd like to suggest changing the default of 
> log.message.timestamp.difference.max.ms to 15768000000000 (500 years * 365 
> days * 86400 seconds * 1000). This would reject microsecond timestamps from 
> producers while still allowing most historical data to be stored in Kafka.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
