[ https://issues.apache.org/jira/browse/BEAM-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816622#comment-16816622 ]

Alex Amato commented on BEAM-7058:
----------------------------------

We have an end-to-end integration test successfully collecting these metrics on
Dataflow Python, plus several unit tests. Those tests do force some sleep times,
though.

So the Python SDK is emitting the metrics in that case. One theory is that
small bundles may not trigger the state sampler code properly, or that this
particular test is too small and executes too quickly to exercise the sampling
code at all (so it effectively never samples). We should retest this with a
high element count.
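
To make that theory concrete, here is a minimal sketch (hypothetical names, not the actual apache_beam statesampler code; the ~200 ms period is an assumption) of why a sampling-based msec counter reports zero for a bundle that finishes before the first sampling tick:

{code:python}
import threading
import time


class ToySampler:
    """Charges one full period to the counter each time the sampler wakes."""

    def __init__(self, period=0.2):  # assumed ~200 ms sampling period
        self.period = period
        self.msecs = 0
        self._running = False

    def start(self):
        self._running = True
        threading.Thread(target=self._sample, daemon=True).start()

    def stop(self):
        self._running = False

    def _sample(self):
        while self._running:
            time.sleep(self.period)
            if not self._running:
                break  # stopped mid-sleep; don't charge a partial tick
            self.msecs += int(self.period * 1000)


sampler = ToySampler()
sampler.start()
time.sleep(0.01)  # a tiny bundle: finishes long before the first tick
sampler.stop()
print(sampler.msecs)  # 0 -- the bundle did real work but was never sampled
{code}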

If that's the case, the state sampler should be set up to trigger its intervals
periodically, and not reset the interval on new bundles.
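
A rough sketch of that fix (again hypothetical, not the real statesampler): one long-lived sampler thread whose tick is never restarted at bundle boundaries, so a steady stream of short bundles still lands on a tick eventually and gets charged:

{code:python}
import threading
import time


class PersistentSampler:
    """One sampler thread for the worker's lifetime; bundles register as
    current, but the sampling interval is never reset between them."""

    def __init__(self, period=0.2):
        self.period = period
        self.msecs_by_bundle = {}
        self._current = None
        self._lock = threading.Lock()
        threading.Thread(target=self._sample, daemon=True).start()

    def set_current_bundle(self, bundle_id):
        # Called at bundle start. Note: we do NOT restart the timer here.
        with self._lock:
            self._current = bundle_id

    def _sample(self):
        while True:
            time.sleep(self.period)
            with self._lock:
                if self._current is not None:
                    self.msecs_by_bundle[self._current] = (
                        self.msecs_by_bundle.get(self._current, 0)
                        + int(self.period * 1000))
{code}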


It could also be related to the behaviour of this specific test. We would need 
someone to debug it; is there a repro?

> Python SDK metric process_bundle_msecs reported as zero
> -------------------------------------------------------
>
>                 Key: BEAM-7058
>                 URL: https://issues.apache.org/jira/browse/BEAM-7058
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-flink, sdk-py-harness
>            Reporter: Thomas Weise
>            Assignee: Alex Amato
>            Priority: Major
>              Labels: portability-flink
>
> With the portable Flink runner, the metric is reported as 0, while the count 
> metric works as expected.
> [https://lists.apache.org/thread.html/25eec8104bda6e4c71cc6c5e9864c335833c3ae2afe225d372479f30@%3Cdev.beam.apache.org%3E]
>  



