Hi,
 
>> ...AND deactivate the key-based log-cleaner on the
>> broker so that it does not delete older records
>> that have the same key?

> How old records are cleaned is independent of what you do with
> processed records. You usually retain them for enough time so
> you don't lose them before processing them,
> plus some safety margin...

Yes, I got that.

My wording was not sharp enough. I realize now that what I really meant here
was log compaction. But log compaction only ever gets activated if
cleanup.policy=compact is set for a topic, or perhaps as the broker-wide default
for topics. So as long as it is not activated, I do not have to worry about log
compaction when giving each ProducerRecord the device UUID as its key.
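
For the record, a minimal sketch of double-checking a topic's effective cleanup.policy with the Java AdminClient; the broker address and the topic name "device-measurements" are made-up placeholders, not something from this thread:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckCleanupPolicy {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // "device-measurements" is a hypothetical topic name used only for illustration.
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "device-measurements");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                                 .all().get().get(topic);
            // "delete" (the default) means records are only ever removed by retention;
            // they are never compacted away because a newer record has the same key.
            System.out.println("cleanup.policy = " + config.get("cleanup.policy").value());
        }
    }
}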

Glad to see that the approach seems valid.

Thank you and Cheers!
Sven
 

Sent: Friday, 11 January 2019 at 12:43
From: "Peter Levart" <peter.lev...@gmail.com>
To: users@kafka.apache.org, "Sven Ludwig" <s_lud...@gmx.de>
Subject: Re: Aw: Re: Doubts in Kafka

On 1/10/19 2:26 PM, Sven Ludwig wrote:
> Okay, but
>
> what if one also needs to preserve the order of messages coming from a 
> particular device?
>
> With Kafka, this is perhaps possible if all messages from a particular device 
> go into the same partition.
>
> Would it be a good and efficient solution for this approach to set the key of 
> each Kafka ProducerRecord to the unique ID of the Device

Exactly!
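
To illustrate the idea, a minimal sketch of a producer keyed by the device UUID (topic name, device id, serializers and the idempotence setting are illustrative assumptions, not from this thread):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DeviceProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Keeps per-partition ordering intact even when sends are retried.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        // Hypothetical device UUID and payload, just for illustration.
        String deviceId = "3f2c1a9e-7b44-4e2d-9c1f-2a6d8e5b0c11";
        String measurement = "{\"temperature\": 21.5}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The device UUID is the record key: the default partitioner hashes the key,
            // so every record from this device lands in the same partition and is
            // therefore read back in the order it was produced.
            producer.send(new ProducerRecord<>("device-measurements", deviceId, measurement));
        }
    }
}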

> AND deactivate the key-based log-cleaner on the broker so that it does not 
> delete older records that have the same key?

How old records are cleaned is independent of what you do with processed
records. You usually retain them for enough time so you don't lose them
before processing them, plus some safety margin...
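
As a sketch of what such retention could look like when creating the topic (the 1000 partitions echo the original question; the replication factor, 7-day retention and topic name are made-up values, not recommendations from this thread):

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicWithRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("device-measurements", 1000, (short) 3)
                    .configs(Map.of(
                            // Plain time-based retention, no compaction.
                            "cleanup.policy", "delete",
                            // Keep records for 7 days; pick this so it covers the
                            // expected processing lag plus a safety margin.
                            "retention.ms", String.valueOf(7L * 24 * 60 * 60 * 1000)));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}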

Regards, Peter

>
> Sven
>
>
> Sent: Thursday, 10 January 2019 at 08:35
> From: "Peter Levart" <peter.lev...@gmail.com>
> To: users@kafka.apache.org, "aruna ramachandran" <arunaeie...@gmail.com>
> Subject: Re: Doubts in Kafka
> Hi Aruna,
>
> On 1/10/19 8:19 AM, aruna ramachandran wrote:
>> I am using keyed partitions with 1000 partitions, so I need to create 1000
>> consumers because consumers groups and re balancing concepts is not worked
>> in the case of manually assigned consumers.Is there any replacement for the
>> above problem.
>>
> Which API are you using on the KafkaConsumer? Are you using
> subscribe(Collection<String> topics) or are you using
> assign(Collection<TopicPartition> partitions)?
>
> The first one (subscribe) is the one you should be using for your use case.
> With that call, when you subscribe to a multi-partition topic and you
> have multiple KafkaConsumer(s) configured with the same consumer group
> id, the partitions of the topic are dynamically assigned (and possibly
> reassigned when consumers come or go) to the set of live consumers. Would
> this work for you (and if not, why)?
>
> Regards, Peter
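
To make the subscribe-based approach described above concrete, a minimal sketch (group id, topic name and deserializers are illustrative assumptions): every instance started with the same group.id gets a share of the topic's partitions, and partitions are rebalanced automatically when instances join or leave.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DeviceConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Every instance uses the same group id, so the broker spreads the
        // topic's partitions across however many instances are running.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "device-measurements-processor");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() (not assign()) enables dynamic assignment and rebalancing.
            consumer.subscribe(Collections.singleton("device-measurements"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("device=%s partition=%d value=%s%n",
                            record.key(), record.partition(), record.value());
                }
            }
        }
    }
}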
 
