Did not know that quotas landed in 0.9. Very nice!
Being able to throttle clients that don't have real-time SLAs (in favor of
those who do) is a great addition.
Thanks for that Grant.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Yes, I thought you weren't interested in retention, but in how to limit the
number of messages produced into a topic.
Take a look at this Kafka Improvement Proposal (KIP):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-13+-+Quotas
But, AFAIK, there's nothing currently available for your use case.
Quotas (KIP-13) are actually included in the recent 0.9.0 release. You can
read more about them in the documentation here:
- http://kafka.apache.org/documentation.html#design_quotas
- http://kafka.apache.org/documentation.html#quotas
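For example, in 0.9 you can set broker-wide default byte-rate quotas in server.properties and override them per client id with kafka-configs.sh. This is only a sketch: the ZooKeeper address, client id "clientA", and the rate values are placeholders, not anything from your setup.

```shell
# Broker-side defaults in server.properties (bytes/sec per client):
#   quota.producer.default=10485760
#   quota.consumer.default=20971520

# Per-client override (client id "clientA" is a placeholder):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type clients --entity-name clientA \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'
```

Clients that exceed their quota are throttled (delayed) by the broker rather than failed, which is what makes this useful for protecting clients with real-time SLAs.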
On Sun, Nov 29, 2015 at 9:24 AM, Marko Bonaći wrote:
Hi,
Can someone please let me know the following:
1. Is it possible to specify a maximum length for a particular topic (in
terms of number of messages) in Kafka?
2. Also, how does Kafka behave when a particular topic gets full?
3. Can the producer be blocked if a topic gets full?
AFAIK there is no such notion as the maximum length of a topic, i.e. the
offset has no limit except Long.MAX_VALUE, which should be enough for a
couple of lifetimes (about 9.2 * 10^18, roughly nine quintillion).
What would be the purpose of that, besides being a nice foot-gun? :)
Marko Bonaći
The Kafka server has a data retention policy based on either time or total
log size (e.g. Kafka brokers will automatically delete the oldest data
segment if its data is older than xx milliseconds, or if the partition's
total log size exceeds yy MBs, with both thresholds configurable).
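Those thresholds can also be overridden per topic. A minimal sketch, assuming a topic named "logs" (the topic name, ZooKeeper address, and the specific values are placeholders):

```shell
# Keep data for 7 days OR up to 1 GiB per partition, whichever is hit first:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic logs \
  --config retention.ms=604800000 \
  --config retention.bytes=1073741824
```

Note this only bounds how much old data is kept around; producers are never blocked when the limit is reached, the oldest segments are simply deleted.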
The producer clients
Let me explain my use case:-
We have an ELK setup in which logstash-forwarders push logs from different
services to a Logstash instance. Logstash then pushes them to Kafka. The
Logstash consumer then pulls them out of Kafka and indexes them into an
Elasticsearch cluster.
We are trying to ensure that no