Re: Total Volume metrics of Kafka

2019-01-17 Thread Gabriele Paggi
On Thu, 17 Jan 2019 at 00:44, Peter Bukowinski wrote:
> On each broker, we have a process (scheduled with cron) that polls the
> kafka jmx api every 60 seconds. It sends the metrics data to graphite (
> https://graphiteapp.org). We have graphite configured as a data source
> for grafana

Re: Total Volume metrics of Kafka

2019-01-16 Thread Peter Bukowinski
On each broker, we have a process (scheduled with cron) that polls the kafka jmx api every 60 seconds. It sends the metrics data to graphite (https://graphiteapp.org). We have graphite configured as a data source for grafana (https://grafana.com) and use it to build various dashboards to
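The pipeline Peter describes (cron job polls the broker's JMX metrics, forwards them to Graphite's plaintext listener) can be sketched roughly as below. This is a minimal illustration, not Peter's actual script: it assumes the broker exposes JMX over a Jolokia HTTP agent on port 8778, that Graphite listens on the default plaintext port 2003, and the hostnames and metric paths are placeholders.

```python
"""Sketch of a cron-driven JMX-to-Graphite collector.

Assumptions (not from the original thread): the broker runs a Jolokia
HTTP agent on localhost:8778, and Graphite accepts the plaintext
protocol on port 2003. Hostnames and metric paths are illustrative.
"""
import json
import socket
import time
import urllib.request

JOLOKIA_URL = ("http://localhost:8778/jolokia/read/"
               "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec")
GRAPHITE_HOST, GRAPHITE_PORT = "graphite.example.com", 2003


def graphite_line(path, value, timestamp):
    """Format one datapoint in Graphite's plaintext protocol."""
    return f"{path} {value} {timestamp}\n"


def collect():
    # Read the broker-wide BytesInPerSec meter via Jolokia's REST API.
    with urllib.request.urlopen(JOLOKIA_URL, timeout=10) as resp:
        payload = json.load(resp)
    rate = payload["value"]["OneMinuteRate"]  # bytes/sec, 1-minute average

    # Ship it to Graphite as a plaintext datapoint.
    line = graphite_line("kafka.broker1.BytesInPerSec", rate, int(time.time()))
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT)) as sock:
        sock.sendall(line.encode())


if __name__ == "__main__":
    collect()
```

Scheduled every 60 seconds from cron, this yields a time series in Graphite that Grafana can query as a data source.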

Re: Total Volume metrics of Kafka

2019-01-16 Thread Amitav Mohanty
Peter, Thanks for the inputs. I am interested in aggregate bytes published into a topic. The approach of a metrics collector along with a graphing tool seems appealing. I can view volume ingested over arbitrary periods of time, which is exactly what I am looking for. Can you please point to some metrics

Re: Total Volume metrics of Kafka

2019-01-16 Thread Peter Bukowinski
Amitav, When you say total volume, do you want a topic’s size on disk, taking into account replication and retention, or do you want the aggregate bytes published into a topic? If you have a metrics collector and a graphing tool such as grafana, you can transform the rate metrics to a byte sum
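The rate-to-sum transform Peter mentions can be sketched as follows: if each sample is a per-second byte rate collected at a fixed interval, the total bytes over a window is approximately the sum of rate times interval. The function name and sample values below are illustrative, not from the thread.

```python
# Sketch: approximate total bytes from periodic rate samples.
# Assumes each sample is a bytes/sec reading (e.g. BytesInPerSec)
# taken every interval_sec seconds.
def bytes_from_rates(rates_per_sec, interval_sec=60):
    """Riemann-sum approximation: each rate sample is assumed to
    hold for the whole collection interval."""
    return sum(r * interval_sec for r in rates_per_sec)


samples = [1024.0, 2048.0, 512.0]  # three hypothetical 60s readings
total = bytes_from_rates(samples)  # (1024 + 2048 + 512) * 60 = 215040.0
```

Graphing tools can usually do this server-side as well; in Graphite, for example, scaling the rate series by the collection interval and applying a cumulative-sum function gives the same byte total as a rendered series.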