Khaireddine Rezgui commented on KAFKA-6400:

I sometimes had the same experience when I first used Kafka Streams; I 
understand the issue now.

Is CACHE_MAX_BYTES_BUFFERING_CONFIG the config mentioned in the 
description?
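
For reference, disabling the cache for a first demo app could look like the sketch below. It uses the raw property key `cache.max.bytes.buffering` (the string behind `StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG`) so it compiles without the Kafka Streams jar on the classpath; the application id and bootstrap server are placeholder values, not from this ticket.

```java
import java.util.Properties;

public class DisableCacheSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder values for illustration only.
        props.put("application.id", "wordcount-demo");
        props.put("bootstrap.servers", "localhost:9092");
        // "cache.max.bytes.buffering" is the key behind
        // StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG.
        // Setting it to 0 disables record caching, so every update
        // is forwarded downstream immediately instead of being buffered.
        props.put("cache.max.bytes.buffering", 0);
        System.out.println(props.get("cache.max.bytes.buffering"));
    }
}
```

With a non-zero cache, updates to the same key are deduplicated in memory and only flushed on commit or cache eviction, which is exactly what surprises first-time users.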


> Consider setting default cache size to zero in Kafka Streams
> ------------------------------------------------------------
>                 Key: KAFKA-6400
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6400
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>    Affects Versions: 1.0.0
>            Reporter: Matthias J. Sax
>            Priority: Minor
> Since the introduction of record caching in the Kafka Streams DSL, we see regular 
> reports/questions from first-time users along the lines of "Kafka Streams does not emit 
> anything" or "Kafka Streams loses messages". Those reports are caused by 
> record caching, not by bugs, and indicate a bad user experience.
> We might consider setting the default cache size to zero to avoid those 
> issues and improve the experience for first-time users. This holds especially 
> for simple word-count demos. (Note, many people don't copy our example 
> word count but build their own first demo app.)
> Remark: before we had caching, many users were confused by our update 
> semantics, i.e., that we emit an output record for each input record for 
> windowed aggregations ("please give me the 'final' result"). Thus, we need 
> to consider this and judge with care so we don't go "back and forth" on the default 
> user experience -- we have had fewer questions about this behavior lately.

This message was sent by Atlassian JIRA