Not really... If you don't clean up, you have to delete the segments
yourself or the logs will grow indefinitely.
Is there a Jira for the Windows issue?
Also, is there a way to avoid Windows until this is resolved? Docker
containers, perhaps?
On Sun, Dec 30, 2018, 11:49 PM lk gen wrote:
> The original issue is that in Windows
[1] has the following code to demonstrate the usage of the suppress method:
KGroupedStream grouped = ...;
grouped
    .windowedBy(TimeWindows.of(Duration.ofHours(1)).grace(Duration.ofMinutes(10)))
    .count()
    .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
    .filter((windowedUserId, count) -> count < 3);
Hi all,
I made some recent changes to the KIP. It should be more relevant to the
issue now (it covers the Processor API in detail).
It would be great if you could comment.
Thanks,
Richard
On Wed, Dec 26, 2018 at 10:01 PM Richard Yu wrote:
> Hi all,
>
> Just changing the title of the KIP. Discover
Hi All,
Could you please grant me permission to create content so that I can create a
KIP?
Many Thanks,
Jamie
The original issue is that in Windows the compaction cleanup is causing the
Kafka process to crash due to file handling. In order to avoid it, I tried
to disable the compaction cleanup, but that causes the consumer offsets log
to keep increasing. Is there a way to work with ZooKeeper for consumer
offsets
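If the goal is just to change how the offsets topic is cleaned up rather than disabling the log cleaner broker-wide, one option sometimes suggested is overriding the topic's `cleanup.policy` with `kafka-configs.sh`. A minimal sketch only, assuming a 2.x broker with a local ZooKeeper (the address and the 7-day retention value are placeholders), and note the warnings elsewhere in this thread about applying this to __consumer_offsets:

```shell
# Sketch: override the cleanup policy for a single topic instead of
# disabling the log cleaner for the whole broker. The ZooKeeper address
# and retention.ms value are placeholders for your environment.
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name __consumer_offsets \
  --alter --add-config cleanup.policy=delete,retention.ms=604800000
```

This only changes per-topic overrides; the broker-level defaults stay untouched, so it is easy to revert with `--delete-config`.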
Andrew Schofield created KAFKA-7776:
---
Summary: Kafka Connect values converter parsing of ISO8601 not
working properly
Key: KAFKA-7776
URL: https://issues.apache.org/jira/browse/KAFKA-7776
Project: Kafka
Ludo created KAFKA-7775:
---
Summary: [KStream] remove topic prefix from consumer configuration
to resolve unnecessary warning
Key: KAFKA-7775
URL: https://issues.apache.org/jira/browse/KAFKA-7775
Project: Kafka
Thanks Boyang,
If there aren't any more thoughts on the KIP, I'll start a vote thread in
the new year.
On Sat, Dec 29, 2018 at 12:58 AM Boyang Chen wrote:
> Yep Stanislav, that's what I'm proposing, and your explanation makes sense.
>
> Boyang
>
>
> From: Stanislav
Depending on how many consumer groups and partitions you have and how often
you commit, you risk either running out of disk space or deleting commit
information that you will need.
Either way, you will be storing lots of records you don't need.
Only do this if there is no other solution to wh
Hi,
The consumer offsets Kafka internal topic is always created with a
compact cleanup policy.
If we alter the consumer offsets topic policy from compact to delete in a
specific installed environment, will it cause problems? Will the consumer
still work if the consumer offsets are set to delete?
Nancy created KAFKA-7774:
Summary: Decimal conversion exception by kafka
Key: KAFKA-7774
URL: https://issues.apache.org/jira/browse/KAFKA-7774
Project: Kafka
Issue Type: Bug
Reporter: Nancy