On Thu, 17 Jan 2019 at 00:44, Peter Bukowinski wrote:
> On each broker, we have a process (scheduled with cron) that polls the
> kafka jmx api every 60 seconds. It sends the metrics data to graphite
> (https://graphiteapp.org). We have graphite configured as a data source for
> grafana (https://grafana.com) and use it to build various dashboards.
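The collector described above only needs Graphite's plaintext protocol on the receiving side: one `path value timestamp` line per metric, written to TCP port 2003. A minimal Python sketch of that sending step (the metric path and value are made-up placeholders; actually reading the value from kafka's jmx api is assumed to happen elsewhere):

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol: 'path value timestamp\\n'."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}\n"

def send_to_graphite(lines, host="localhost", port=2003):
    """Write pre-formatted plaintext-protocol lines to a Graphite/carbon listener."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall("".join(lines).encode("utf-8"))

# Hypothetical bytes-in rate sampled from a broker:
line = graphite_line("kafka.broker1.topics.mytopic.BytesInPerSec", 52428.8, 1547685840)
# line == "kafka.broker1.topics.mytopic.BytesInPerSec 52428.8 1547685840\n"
```

A cron entry would then run this every 60 seconds, matching the polling interval described above.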
Peter,
Thanks for the input. I am interested in the aggregate bytes published into a
topic. The approach of a metrics collector along with a graphing tool seems
appealing: I can view the volume ingested over arbitrary periods of time, which is
exactly what I am looking for. Can you please point me to some metrics?
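For the aggregate-bytes-published case, the standard per-topic broker meter in Kafka's JMX tree is `BytesInPerSec` (a rate, not a running counter), exposed per topic as:

```
kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=<topic>
```

The same object name without the `topic` tag gives the broker-wide rate. Exact attribute names vary by Kafka version, so it is worth confirming against the metrics listed in the documentation for the version in use.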
Amitav,
When you say total volume, do you want a topic's size on disk, taking into
account replication and retention, or do you want the aggregate bytes published
into a topic? If you have a metrics collector and a graphing tool such as
grafana, you can transform the rate metrics into a byte sum.
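The rate-to-volume transform mentioned here is just a rectangle-rule integration: each bytes/sec sample, multiplied by the collection interval, approximates the bytes moved during that interval. A small sketch of the arithmetic (function name and sample values are illustrative):

```python
def bytes_ingested(rate_samples, interval_seconds=60):
    """Approximate total bytes from periodic samples of a bytes/sec rate,
    treating each sample as constant over one collection interval."""
    return sum(rate * interval_seconds for rate in rate_samples)

# Three one-minute samples of a BytesInPerSec-style rate:
total = bytes_ingested([100.0, 200.0, 150.0])  # -> 27000.0 bytes
```

In Graphite itself, assuming 60-second samples, the equivalent would be something like `integral(scale(kafka.*.BytesInPerSec, 60))`, which accumulates the per-interval byte counts over the graphed time range.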