I think we can handle the failures selectively, e.g. if there are issues
with the downstream database server then all the events will fail to
process, so it will be worth retrying. Whereas if there is an issue only
while processing a particular event, then we can keep a retry timeout and
after that
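That selective handling could be sketched roughly as below. This is only an illustration of the idea, not anyone's actual implementation: the class names, the `DeadLetterSink` interface, and the fixed 200 ms backoff are all invented for the sketch.

```java
// Hypothetical sketch: infrastructure failures (e.g. database down)
// are retried indefinitely, while per-event failures are retried only
// until a deadline and then routed to a dead-letter topic.
import java.time.Duration;
import java.time.Instant;

public class SelectiveRetry {

    // Fixed backoff between retries, chosen small just for illustration.
    private static final long BACKOFF_MS = 200;

    // Failures that would affect every event (retry forever).
    static class InfrastructureException extends RuntimeException {}

    // Failures specific to one event (retry with a deadline).
    static class EventException extends RuntimeException {}

    interface EventProcessor { void process(String event) throws Exception; }

    // Stand-in for a producer writing to a dead-letter topic.
    interface DeadLetterSink { void send(String event); }

    static void handle(String event, EventProcessor processor,
                       DeadLetterSink deadLetters, Duration perEventTimeout)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(perEventTimeout);
        while (true) {
            try {
                processor.process(event);
                return;
            } catch (InfrastructureException e) {
                // Downstream database unavailable: every event would
                // fail, so back off and keep retrying indefinitely.
                Thread.sleep(BACKOFF_MS);
            } catch (Exception e) {
                // Event-specific failure: retry until the deadline,
                // then park the event on the dead-letter topic.
                if (Instant.now().isAfter(deadline)) {
                    deadLetters.send(event);
                    return;
                }
                Thread.sleep(BACKOFF_MS);
            }
        }
    }
}
```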
I see.
Then I think the appropriate approach depends on your delivery latency
requirements.
Just retrying until success is simpler, but it could block subsequent
messages from being processed (this also depends on the thread pool size,
though).
Then another concern when using a dead-letter topic would be
Thanks Haruki... right now the max number of such events that we would
have is 100, since we would be supporting that many customers (accounts)
for now, for which we are considering a simple approach of a single
consumer and a thread pool with around 10 threads. So the question was
regarding how
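For what it's worth, that single-consumer-plus-pool layout could look roughly like the sketch below. The names are invented for illustration; in real code the batch would come from `KafkaConsumer.poll()` and the commit after `done.await()` would be `commitSync()`, which keeps at-least-once semantics simple because offsets are only committed once the whole batch has been processed.

```java
// Minimal sketch: one poll loop hands each polled batch to a fixed
// pool of ~10 workers and waits for the batch to finish before the
// caller commits offsets.
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class PooledDispatcher {
    private final ExecutorService pool;

    public PooledDispatcher(int threads) {
        this.pool = Executors.newFixedThreadPool(threads);
    }

    // Dispatch one polled batch and block until every record in it has
    // been processed; only then is it safe to commit the batch's offsets.
    public void dispatchBatch(List<String> records, Consumer<String> processor)
            throws InterruptedException {
        CountDownLatch done = new CountDownLatch(records.size());
        for (String record : records) {
            pool.execute(() -> {
                try {
                    processor.accept(record);
                } finally {
                    done.countDown();
                }
            });
        }
        done.await(); // safe point to call consumer.commitSync()
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```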
Hi Pushkar.
Just for your information, https://github.com/line/decaton is a Kafka
consumer framework that supports parallel processing within a single
partition. It internally manages the committable offset (i.e. the offset
up to which all preceding offsets have been processed) so that it
preserves at-least-once
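To illustrate what "committable offset" means here: when records are processed out of order, the offset that is safe to commit is the largest offset N such that every offset up to N has completed. The sketch below is a simplified illustration of that bookkeeping, not Decaton's actual code.

```java
// Track out-of-order completions and expose the highest offset that is
// safe to commit without losing any unprocessed record.
import java.util.TreeSet;

public class OffsetTracker {
    private long committable;                       // all offsets <= this are done
    private final TreeSet<Long> completedAhead = new TreeSet<>();

    public OffsetTracker(long initial) {
        this.committable = initial;
    }

    // Mark one offset as processed, possibly out of order.
    public synchronized void complete(long offset) {
        completedAhead.add(offset);
        // Advance the watermark over any contiguous run of completions.
        while (completedAhead.remove(committable + 1)) {
            committable++;
        }
    }

    // The offset safe to commit: committing anything higher could skip
    // an unprocessed record, which would break at-least-once delivery.
    public synchronized long committable() {
        return committable;
    }
}
```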
Thanks Liam!
We don't have a requirement to maintain the order of processing for
events, even within a partition. Essentially, these are events for the
various accounts (customers) that we want to support and do the necessary
database provisioning for in our database. So they can be processed in
Hi Pushkar,
No. You'd need to combine a consumer with a thread pool or similar, as
you prefer. As the docs say (from
https://kafka.apache.org/26/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html):
> We have intentionally avoided implementing a particular threading model
> for processing.
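To get the Kafka Streams-style division of partitions with the plain consumer, the usual pattern is to run one `KafkaConsumer` per thread, all with the same `group.id`; the group coordinator then divides the topic's partitions among them. The helper below just illustrates the kind of range-style split a partition assignor might produce; it is an illustration for intuition, not the client's actual assignor code.

```java
// Illustrative range-style assignment: divide `partitions` partitions
// among `consumers` group members, giving the first (partitions %
// consumers) members one extra partition each.
import java.util.ArrayList;
import java.util.List;

public class RangeAssignment {
    public static List<List<Integer>> assign(int partitions, int consumers) {
        List<List<Integer>> out = new ArrayList<>();
        int base = partitions / consumers;   // partitions every member gets
        int extra = partitions % consumers;  // leftovers for the first members
        int next = 0;
        for (int c = 0; c < consumers; c++) {
            int count = base + (c < extra ? 1 : 0);
            List<Integer> mine = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                mine.add(next++);
            }
            out.add(mine);
        }
        return out;
    }
}
```

Each inner list here would correspond to the partitions one consumer thread polls; the broker-side group protocol performs the equivalent split automatically, and rebalances it when members join or leave.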
Hi,
Is there any configuration in the Kafka consumer to specify multiple
threads, the way there is in Kafka Streams?
Essentially, can we have a consumer with multiple threads where the
threads would divide the partitions of a topic among them?