Hi Gyula,
I did not mean to say the MetricsReporter code should stay with the metric
storage, e.g. Kafka. I am trying to argue that a metric reporter is a plugin
less related to Flink itself and more related to a specific user environment.
If we include such plugins in Flink, as time goes on, the user
So this probably doesn't belong in this thread, but here goes:
When you think of the metric system as sources, reporters and sinks,
one has to consider what the source emits.
Either:
a) events for added/removed metrics, or
b) periodic snapshots of the values of all metrics, with the plethora of
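The two emission styles above can be sketched side by side. This is a hypothetical, simplified stand-in — the method names loosely mirror Flink's reporter SPI, but these are not the real Flink interfaces:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class EmissionStyles {

    // (a) Event-driven: the metric system pushes registration changes
    // to the reporter as metrics are added and removed.
    interface EventReporter {
        void notifyOfAddedMetric(String name, Supplier<Object> metric);
        void notifyOfRemovedMetric(String name);
    }

    // (b) Scheduled: the reporter keeps a registry and periodically
    // snapshots the current value of every registered metric.
    static class ScheduledReporter implements EventReporter {
        private final Map<String, Supplier<Object>> metrics = new LinkedHashMap<>();

        @Override
        public void notifyOfAddedMetric(String name, Supplier<Object> metric) {
            metrics.put(name, metric);
        }

        @Override
        public void notifyOfRemovedMetric(String name) {
            metrics.remove(name);
        }

        // Called on a fixed interval; emits the value of every live metric.
        Map<String, Object> report() {
            Map<String, Object> snapshot = new LinkedHashMap<>();
            metrics.forEach((name, metric) -> snapshot.put(name, metric.get()));
            return snapshot;
        }
    }

    public static void main(String[] args) {
        ScheduledReporter reporter = new ScheduledReporter();
        reporter.notifyOfAddedMetric("numRecordsIn", () -> 42L);
        System.out.println(reporter.report()); // prints {numRecordsIn=42}
    }
}
```

A scheduled reporter subsumes the event-driven one (it consumes add/remove events to maintain its registry), which is why the choice of what the source emits constrains every downstream reporter.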
@Becket , Yun:
Regarding the core/ecosystem project:
I don't completely agree with your arguments regarding why this should be
an external ecosystem project instead of part of the Flink repo.
A metric connector is relevant for Flink users, not for the metric store.
Metric storage systems don't
@Bowen I can see where you're coming from, but I don't think this would
work too well. Your "stream" would have to contain events for
added/removed metrics, but metrics are inherently not Serializable. I
think this would end up being a weird special case.
(Periodically emitting the values of
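The mismatch can be made concrete: a live metric wraps a callback into running code and is not Serializable, whereas a periodically sampled value is plain data. A minimal sketch with hypothetical types (not Flink's actual Metric classes):

```java
import java.io.Serializable;
import java.util.function.LongSupplier;

public class MetricSamples {

    // A live gauge holds a callback into running code: nothing here is
    // Serializable, so Gauge instances cannot travel through a stream
    // of added/removed-metric events.
    static class Gauge {
        final LongSupplier valueSupplier;
        Gauge(LongSupplier valueSupplier) { this.valueSupplier = valueSupplier; }
    }

    // A sampled value, by contrast, is plain data and serializes cleanly.
    static class MetricSample implements Serializable {
        final String name;
        final long value;
        final long timestampMs;
        MetricSample(String name, long value, long timestampMs) {
            this.name = name;
            this.value = value;
            this.timestampMs = timestampMs;
        }
    }

    // Periodic reporting reduces a live metric to a serializable record.
    static MetricSample sample(String name, Gauge gauge) {
        return new MetricSample(name, gauge.valueSupplier.getAsLong(),
                System.currentTimeMillis());
    }

    public static void main(String[] args) {
        MetricSample s = sample("numRecordsIn", new Gauge(() -> 42L));
        System.out.println(s.name + "=" + s.value); // prints numRecordsIn=42
    }
}
```

This is what periodic emission buys: the stream carries sampled records rather than the metric objects themselves, avoiding the serialization special case.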
Hi,
What is still unclear to me so far (as I don't see any yet): what would be
the fundamental differences between this Kafka reporter and Flink’s
existing Kafka producer?
I’ve been thinking about Flink metrics for a while, and the “metric reporter”
feels a bit redundant to me. As you may already
Hi all
Glad to see this topic in the community.
We at Alibaba also implemented a Kafka metrics reporter half a year ago and
extended it to other message queues like Alibaba Cloud Log Service [1]. The
reason why we did not launch a similar discussion is that we previously
thought we only provide a way
Hi Gyula,
Thanks for bringing this up. It is a useful addition to have a Kafka
metrics reporter. I understand that we already have Prometheus and DataDog
reporters in the Flink main repo. However, personally speaking, I would
slightly prefer to have the Kafka metrics reporter as an ecosystem
Hi Gyula,
thank you for proposing this. +1 for adding a KafkaMetricsReporter. In
terms of the dependency, we could take a similar route as with the "universal"
Flink Kafka Connector, which, to my knowledge, always tracks the latest Kafka
version as of the Flink release and relies on the compatibility of the
Hi all!
Several users have asked in the past about a Kafka-based metrics reporter,
which could serve both as a natural connector to arbitrary metric storage
systems and as a straightforward way to process Flink metrics downstream.
I think this would be an extremely useful addition but I would like to