OK. tnx!
On Fri, 15 Dec 2017 at 15:08 Damian Guy wrote:
I believe that just controls when the segment gets deleted from disk. It is removed from memory before that, so I don't believe that will help.
On Fri, 15 Dec 2017 at 13:54 Wim Van Leuven wrote:
So, in our setup, to provide the historic data on the platform, we would have to define all topics with a retention period covering the business time we want to keep the data. However, on the intermediate topics, we would only require the data to be there as long as necessary to be able to process the data.
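A sketch of how those two retention policies could be set with the stock Kafka CLI (the topic names `input` and `intermediate` and the 30-day/6-hour values are illustrative placeholders; on Confluent 3.2-era brokers `kafka-configs` is pointed at ZooKeeper):

```shell
# Business-facing topic: keep records for the full business window,
# e.g. 30 days (retention.ms is in milliseconds).
kafka-configs --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name input \
  --add-config retention.ms=2592000000

# Intermediate topic: only needs to outlive the processing itself,
# e.g. 6 hours.
kafka-configs --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name intermediate \
  --add-config retention.ms=21600000
```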
Is it really? I checked some records on Kafka topics using command-line consumers to print keys and timestamps, and the timestamps were logged as
CreateTime:1513332523181
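For reference, a console-consumer invocation along these lines prints key and timestamp per record (broker address and topic name are placeholders; the `print.*` formatter properties are supported by reasonably recent console consumers):

```shell
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic input --from-beginning \
  --property print.timestamp=true \
  --property print.key=true
```

Each output line is then prefixed with the timestamp type and value, e.g. `CreateTime:1513332523181`, matching what was observed above.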
But that would explain the issue. I'll adjust the retention on the topic
and rerun.
Thank you already for the insights!
-wim
On Fri, 15 Dec 2017, Damian Guy wrote:
Hi,
It is likely due to the timestamps you are extracting and using as the
record timestamp. Kafka uses the record timestamps for retention. I suspect
this is causing your segments to roll and be deleted.
Thanks,
Damian
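One way to verify this hypothesis is to inspect the record timestamps inside a suspect segment with the stock DumpLogSegments tool (the log-directory path below is a placeholder; run it on the broker host). If the largest timestamp in a segment is already older than the topic's retention.ms, the broker considers that segment eligible for deletion:

```shell
# Dump record metadata (offsets, timestamps) from a segment file.
kafka-run-class kafka.tools.DumpLogSegments \
  --files /var/lib/kafka/data/intermediate-0/00000000000000000000.log \
  --print-data-log | head -n 20
```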
On Fri, 15 Dec 2017 at 11:49 Wim Van Leuven wrote:
Hello all,
We are running some Kafka Streams processing apps over Confluent Open Source (v3.2.0) and I'm seeing unexpected but 'consistent' behaviour regarding segment and index deletion.
So, we have a topic 'input' that contains about 30M records to ingest. A first processor transforms and pipes the data on