I think that, in the spirit of the Codahale/Dropwizard Metrics API, the
question is whether we want to have something like ScheduledReporter
<http://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/ScheduledReporter.html>
as a contract to collect and report the metrics to different monitoring
systems (e.g., Graphite, Ganglia, etc.).
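
For illustration, here is a rough sketch of what such a reporter contract
could look like on the Beam side. The BeamMetricsReporter interface below is
hypothetical and made up for this example; only MetricQueryResults,
MetricResults and MetricsFilter are existing SDK types:

  // Hypothetical sketch -- not an existing Beam interface. It mirrors the
  // shape of Dropwizard's ScheduledReporter: a pluggable contract a runner
  // could invoke on a fixed schedule to push metric snapshots to a backend
  // such as Graphite or Ganglia.
  public interface BeamMetricsReporter extends AutoCloseable {

    // Called periodically with the latest snapshot of pipeline metrics,
    // e.g. as obtained from MetricResults.queryMetrics(MetricsFilter).
    void report(org.apache.beam.sdk.metrics.MetricQueryResults snapshot);

    // Backend-specific cleanup, e.g. flushing and closing a Graphite socket.
    @Override
    void close();
  }

The appeal of the ScheduledReporter model is that scheduling and snapshotting
live in one place, and each backend only has to implement report()/close().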


On Mon, Jan 2, 2017 at 8:07 PM Stas Levin <[email protected]> wrote:

> I see.
>
> Just to make sure I get it right: in (2), by sinks I mean various metrics
> backends (e.g., Graphite). So it boils down to having integration tests as
> part of Beam (runners?) that, beyond testing the SDK layer (i.e., asserting
> over pipeline.metrics()), actually test the specific metrics backend
> (i.e., asserting over inMemoryGraphite.metrics()), right?
>
> On Mon, Jan 2, 2017 at 7:14 PM Davor Bonaci <[email protected]> wrote:
>
> > Sounds like we should do both, right?
> >
> > > 1. Test the metrics API without accounting for the various sink types,
> > > i.e., against the SDK.
> > >
> >
> > The Metrics API is a runner-independent SDK concept. I'd imagine we'd want
> > to have runner-independent tests that interact with the API, outside of any
> > specific transform implementation, execute them on all runners, and query
> > the results. Goal: make sure Metrics work.
> >
> > > 2. Have the sink types, or at least some of them, tested as part of
> > > integration tests, e.g., have an in-memory Graphite server to test
> > > Graphite metrics and so on.
> > >
> >
> > This is valid too -- this is testing *usage* of the Metrics API in the
> > given IO. If a source/sink, or a transform in general, is exposing a
> > metric, that metric should be tested in its own right as part of the
> > transform implementation.
> >
>
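
To make point (1) above concrete, a runner-independent test could look roughly
like the sketch below. It uses existing Beam SDK metrics classes (Metrics,
MetricsFilter, MetricNameFilter, MetricQueryResults), though the accessor
names on the query results have shifted between SDK versions, so treat it as
illustrative rather than exact:

  import org.apache.beam.sdk.Pipeline;
  import org.apache.beam.sdk.PipelineResult;
  import org.apache.beam.sdk.metrics.Counter;
  import org.apache.beam.sdk.metrics.MetricNameFilter;
  import org.apache.beam.sdk.metrics.MetricQueryResults;
  import org.apache.beam.sdk.metrics.Metrics;
  import org.apache.beam.sdk.metrics.MetricsFilter;
  import org.apache.beam.sdk.transforms.Create;
  import org.apache.beam.sdk.transforms.DoFn;
  import org.apache.beam.sdk.transforms.ParDo;

  public class MetricsSmokeTest {

    public static void main(String[] args) {
      Pipeline p = Pipeline.create();

      // A trivial transform that bumps a user counter once per element.
      p.apply(Create.of(1, 2, 3))
          .apply(ParDo.of(new DoFn<Integer, Integer>() {
            private final Counter elements = Metrics.counter("smoke", "elements");

            @ProcessElement
            public void processElement(ProcessContext c) {
              elements.inc();
              c.output(c.element());
            }
          }));

      PipelineResult result = p.run();
      result.waitUntilFinish();

      // Query the counter back through the runner-independent metrics API,
      // on whichever runner executed the pipeline.
      MetricQueryResults metrics = result.metrics().queryMetrics(
          MetricsFilter.builder()
              .addNameFilter(MetricNameFilter.named("smoke", "elements"))
              .build());

      metrics.getCounters().forEach(counter ->
          System.out.println(counter.getName() + " = " + counter.getAttempted()));
    }
  }

The in-memory Graphite assertion in (2) would then sit a layer below this,
verifying that whatever reporter/backend integration is configured actually
received the same values.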
