[ 
https://issues.apache.org/jira/browse/BEAM-3310?focusedWorklogId=137433&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-137433
 ]

ASF GitHub Bot logged work on BEAM-3310:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Aug/18 15:50
            Start Date: 23/Aug/18 15:50
    Worklog Time Spent: 10m 
      Work Description: JozoVilcek edited a comment on issue #4548: [BEAM-3310] 
Metrics pusher
URL: https://github.com/apache/beam/pull/4548#issuecomment-415467250
 
 
   True, metrics extraction vs. collection does make a difference. Maybe only 
my understanding / interpretation of what is written in the design docs is 
different. 
   
   I personally am not looking for a single point that collects all globally 
aggregated metrics so much as for a full Beam experience of generating and 
delivering metrics to a time-series database for monitoring. Aggregated or 
partly aggregated, I would not mind either way. 
   I would be fine with a per-task-manager collector / reporter, similar to 
what Flink does natively. Right now one must submit metrics via one API (Beam) 
and report them via the runner's internal API, translating between the two 
models. Also, Flink reports many more dimensions on those metrics, which are 
blurred away by the Beam model but which one still has to handle in reporting. 
By those I mean things like `operator_id`, `tm_id`, `task_attempt_num`, 
`subtask_index`, etc. 
   
   But a full aggregate sounds good too. It just sounds like an internal 
concern of Flink how the job would choose and run a single authority to 
report to.
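   To illustrate the translation concern above, here is a minimal sketch of 
how a per-task-manager reporter might merge a Beam-level metric name with the 
runner-specific Flink scope dimensions into one tag set for a time-series 
database. All class and method names here are hypothetical, not part of any 
Beam or Flink API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative only: combines a Beam metric (namespace + name) with Flink
 * scope variables (operator_id, tm_id, subtask_index, ...) into a single
 * tag map that a time-series sink could attach to a data point.
 */
public class TaggedMetricSketch {

  static Map<String, String> tagsFor(String beamNamespace, String beamName,
                                     Map<String, String> flinkScope) {
    Map<String, String> tags = new LinkedHashMap<>();
    tags.put("namespace", beamNamespace); // Beam metric namespace
    tags.put("name", beamName);           // Beam metric name
    tags.putAll(flinkScope);              // runner-specific dimensions
    return tags;
  }

  public static void main(String[] args) {
    Map<String, String> scope = new LinkedHashMap<>();
    scope.put("operator_id", "op-1");
    scope.put("tm_id", "tm-42");
    scope.put("subtask_index", "0");

    // One fully-tagged point, carrying both Beam and Flink dimensions.
    Map<String, String> tags =
        tagsFor("my.transform", "elementsProcessed", scope);
    System.out.println(tags);
  }
}
```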

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 137433)
    Time Spent: 15h 50m  (was: 15h 40m)

> Push metrics to a backend in a runner-agnostic way
> --------------------------------------------------
>
>                 Key: BEAM-3310
>                 URL: https://issues.apache.org/jira/browse/BEAM-3310
>             Project: Beam
>          Issue Type: New Feature
>          Components: runner-extensions-metrics, sdk-java-core
>            Reporter: Etienne Chauchot
>            Assignee: Etienne Chauchot
>            Priority: Major
>          Time Spent: 15h 50m
>  Remaining Estimate: 0h
>
> The idea is to avoid relying on the runners to provide access to the metrics 
> (either at the end of the pipeline or while it runs), because runners do not 
> all have the same metrics capabilities (e.g. the Spark runner configures 
> sinks such as CSV, Graphite, or in-memory sinks using the Spark engine 
> configuration). The target is to push the metrics in the common runner code 
> so that, no matter the chosen runner, a user can get their metrics out of 
> Beam.
> Here is the link to the discussion thread on the dev ML: 
> https://lists.apache.org/thread.html/01a80d62f2df6b84bfa41f05e15fda900178f882877c294fed8be91e@%3Cdev.beam.apache.org%3E
> And the design doc:
> https://s.apache.org/runner_independent_metrics_extraction



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
