I believe this is where the metrics are supplied:
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/worker/operations.py

Running git grep process_bundle_msecs yields results for the Dataflow worker only.

There isn't any test coverage for the Flink runner:

https://github.com/apache/beam/blob/d38645ae8758d834c3e819b715a66dd82c78f6d4/sdks/python/apache_beam/runners/portability/flink_runner_test.py#L181
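For reference, the .gauge.mean suffix on that stat is just the arithmetic mean of the sampled per-bundle millisecond values, so any nonzero samples should surface as a nonzero mean. A minimal pure-Python sketch of that aggregation (the class and method names here are hypothetical, not Beam's internals):

```python
# Sketch of how a gauge mean over per-bundle execution times would be
# aggregated. Names are hypothetical; this is not Beam's implementation.

class MsecsGauge:
    """Collects per-bundle millisecond samples and reports their mean."""

    def __init__(self):
        self.samples = []

    def record(self, msecs):
        self.samples.append(msecs)

    def mean(self):
        # A gauge that never received a sample (or only received zeros)
        # reports 0 -- consistent with the symptom described below.
        if not self.samples:
            return 0
        return sum(self.samples) / len(self.samples)


gauge = MsecsGauge()
for bundle_msecs in (12, 8, 10):
    gauge.record(bundle_msecs)
print(gauge.mean())  # 10.0
```

So a mean that is constantly 0 suggests the msecs samples are never recorded (or never forwarded) on the Flink path, rather than the bundles genuinely taking 0 ms.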



On Wed, Apr 3, 2019 at 10:45 AM Akshay Balwally <[email protected]> wrote:

> Should have added: I'm using the Python SDK with the Flink runner.
>
> On Wed, Apr 3, 2019 at 10:32 AM Akshay Balwally <[email protected]>
> wrote:
>
>> Hi,
>> I'm hoping to get metrics on the amount of time spent in each operator,
>> so it seems like the stat
>>
>>
>> {organization_specific_prefix}.operator.beam-metric-pardo_execution_time-process_bundle_msecs-v1.gauge.mean
>>
>> would be pretty helpful. But in practice, this stat always shows 0, which
>> I interpret as 0 milliseconds spent per bundle, which can't be correct
>> (other stats show that the operators are running, and timers within the
>> operators show more reasonable times). Is this a known bug?
>>
>>
>> --
>> *Akshay Balwally*
>> Software Engineer
>> 937.271.6469 <+19372716469>
>> [image: Lyft] <http://www.lyft.com/>
>>
>
>
