[ https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034592#comment-17034592 ]

Jun Rao commented on KAFKA-7061:
--------------------------------

[~senthilm-ms] : Sorry for the delay. I will review the PR this week.

> Enhanced log compaction
> -----------------------
>
>                 Key: KAFKA-7061
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7061
>             Project: Kafka
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 2.5.0
>            Reporter: Luis Cabral
>            Assignee: Senthilnathan Muthusamy
>            Priority: Major
>              Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so that 
> insertion order does not dictate which records are kept.
> The default behavior is unchanged; the enhanced approach must be explicitly 
> activated.
> Enhanced compaction is driven either by the record timestamp, by setting the 
> new configuration to "timestamp", or by a record header, by setting the 
> configuration to any value other than the default "offset" or the reserved 
> "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.
> +From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
> config yields to the existing "log.cleanup.policy", i.e. if the latter's 
> value is `delete` rather than `compact`, the new config is ignored.
> +From Jun Rao:+ With the timestamp/header strategy, the behavior of the 
> application may need to change. In particular, the application can't just 
> blindly take the record with the larger offset and assume that it is the 
> value to keep. It now needs to check the timestamp or the header. So, it 
> would be useful to at least document this. 
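The "keep" decision described above (offset vs. timestamp vs. header strategy, and Jun Rao's point that the larger offset no longer automatically wins) could be sketched roughly as follows. This is an illustrative sketch only, not Kafka's actual cleaner code; the class, method, and header names ("version") are hypothetical, and the tie-breaking fallback to offset is an assumption.

```java
import java.util.Map;

// Hypothetical sketch of the per-key compaction decision described in KIP-280.
// Names and structure are illustrative, not the real Kafka log cleaner.
public class CompactionSketch {
    static final String STRATEGY_OFFSET = "offset";       // default strategy
    static final String STRATEGY_TIMESTAMP = "timestamp"; // reserved strategy

    // Minimal stand-in for a record's compaction-relevant fields.
    record CachedRecord(long offset, long timestamp, Map<String, Long> headers) {}

    // Returns true if `candidate` should replace `current` for the same key.
    static boolean shouldReplace(CachedRecord current, CachedRecord candidate,
                                 String strategy) {
        switch (strategy) {
            case STRATEGY_OFFSET:
                // Default behavior: insertion order (offset) decides.
                return candidate.offset() > current.offset();
            case STRATEGY_TIMESTAMP:
                // Enhanced behavior: record timestamp decides;
                // ties fall back to offset (assumption).
                if (candidate.timestamp() != current.timestamp()) {
                    return candidate.timestamp() > current.timestamp();
                }
                return candidate.offset() > current.offset();
            default:
                // Any other value names a record header carrying a version;
                // records missing the header are treated as oldest (assumption).
                long curV = current.headers().getOrDefault(strategy, -1L);
                long candV = candidate.headers().getOrDefault(strategy, -1L);
                if (candV != curV) {
                    return candV > curV;
                }
                return candidate.offset() > current.offset();
        }
    }

    public static void main(String[] args) {
        CachedRecord older = new CachedRecord(10, 2000L, Map.of("version", 5L));
        CachedRecord newer = new CachedRecord(11, 1000L, Map.of("version", 3L));
        // Under the default strategy the later offset wins...
        System.out.println(shouldReplace(older, newer, STRATEGY_OFFSET));    // true
        // ...but under the timestamp strategy the earlier-offset record is kept,
        // which is exactly why consumers can no longer assume "larger offset wins".
        System.out.println(shouldReplace(older, newer, STRATEGY_TIMESTAMP)); // false
        System.out.println(shouldReplace(older, newer, "version"));          // false
    }
}
```

This illustrates Jun Rao's caveat: with the timestamp or header strategy, a record at a later offset may lose to an earlier one, so applications must compare the timestamp or header themselves.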



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
