ableegoldman opened a new pull request #8564:
URL: https://github.com/apache/kafka/pull/8564


   Caching and duplicate retention are essentially incompatible options, as caching does nothing to reduce downstream traffic and writes when the store has to allow non-unique keys (skipping records whose value is also unchanged is a separate issue, see [KIP-557](https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams)). But enabling caching on a store that's configured to retain duplicates is more than just ineffective: it currently produces incorrect results.
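   For illustration, here is roughly how the problematic combination can arise through the public `Stores` API: a window store built with `retainDuplicates = true` (as stream-stream joins use internally) and caching enabled on top of it. The store name and durations below are made up, not taken from this PR.
   ```java
   import java.time.Duration;
   import org.apache.kafka.common.serialization.Serdes;
   import org.apache.kafka.streams.state.StoreBuilder;
   import org.apache.kafka.streams.state.Stores;
   import org.apache.kafka.streams.state.WindowStore;

   public class DuplicateRetainingStoreExample {
       public static void main(final String[] args) {
           // A window store configured to retain duplicates (last argument), with
           // caching layered on top -- the combination described above. Since keys
           // are non-unique, the cache cannot collapse updates for the same key.
           final StoreBuilder<WindowStore<String, String>> builder =
               Stores.windowStoreBuilder(
                       Stores.persistentWindowStore(
                           "join-window-store",        // illustrative name
                           Duration.ofMinutes(10),     // retention period
                           Duration.ofMinutes(5),      // window size
                           true),                      // retainDuplicates
                       Serdes.String(),
                       Serdes.String())
                   .withCachingEnabled();
           System.out.println("Built store builder for: " + builder.name());
       }
   }
   ```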
   
   We should just log a warning and disable caching whenever a store is configured to retain duplicates, to avoid introducing a regression. When 3.0 comes around we might consider throwing an exception instead, to alert the user more aggressively.
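   As a rough sketch of the guard described here (not the actual diff in this PR; the class and method names below are illustrative), the idea is to ignore the caching flag and log a warning whenever the underlying window store supplier retains duplicates:
   ```java
   import org.apache.kafka.streams.state.WindowBytesStoreSupplier;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   final class CachingGuardSketch {
       private static final Logger LOG = LoggerFactory.getLogger(CachingGuardSketch.class);

       // Returns the caching setting that should actually take effect: if the
       // supplier retains duplicates, caching is forced off and a warning is logged.
       static boolean effectiveCachingEnabled(final WindowBytesStoreSupplier supplier,
                                              final boolean cachingRequested) {
           if (cachingRequested && supplier.retainDuplicates()) {
               LOG.warn("Disabling caching on store {} because it is configured to retain duplicates; "
                        + "caching cannot reduce downstream traffic for non-unique keys and can "
                        + "produce incorrect results.", supplier.name());
               return false;
           }
           return cachingRequested;
       }
   }
   ```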

