[ https://issues.apache.org/jira/browse/BEAM-11986?focusedWorklogId=671057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671057 ]
ASF GitHub Bot logged work on BEAM-11986:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 27/Oct/21 21:32
Start Date: 27/Oct/21 21:32
Worklog Time Spent: 10m
Work Description: pabloem commented on pull request #15294:
URL: https://github.com/apache/beam/pull/15294#issuecomment-953327915
oh yeah, I know why. `MetricsEnvironment` is a runtime class that the runner
uses during execution: it will give you metrics info if the job runs locally,
but not if the job runs on Dataflow.
Looking at the other metrics tests, they all run against mocked-out instances
of BQ/BigTable, etc.
So - I think you should make these tests run only on DirectRunner and skip
them on Dataflow.
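For illustration, a minimal sketch of that guard, assuming the runner name is
readable via `TestPipeline.get_option('runner')` and using a hypothetical
`request_count` metric name (the real test would query whatever counter the
Spanner IO actually exports):

```python
import unittest

import apache_beam as beam
from apache_beam.metrics.metric import MetricsFilter
from apache_beam.testing.test_pipeline import TestPipeline


class SpannerRequestCountMetricsTest(unittest.TestCase):

  def test_request_count(self):
    pipeline = TestPipeline(is_integration_test=True)
    # MetricsEnvironment only surfaces metrics for locally executed jobs,
    # so skip when this test is pointed at Dataflow.
    runner = pipeline.get_option('runner') or ''
    if 'dataflow' in runner.lower():
      raise unittest.SkipTest(
          'Request count metrics are queried via MetricsEnvironment, '
          'which is not populated on Dataflow.')

    # Stand-in for the Spanner read under test.
    _ = pipeline | beam.Create(['placeholder'])
    result = pipeline.run()
    result.wait_until_finish()

    # 'request_count' is a placeholder metric name for this sketch.
    metrics = result.metrics().query(
        MetricsFilter().with_name('request_count'))
    self.assertTrue(metrics['counters'])
```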
Issue Time Tracking
-------------------
Worklog Id: (was: 671057)
Time Spent: 17h 20m (was: 17h 10m)
> Python Spanner - Implement IO Request Count metric
> --------------------------------------------------
>
> Key: BEAM-11986
> URL: https://issues.apache.org/jira/browse/BEAM-11986
> Project: Beam
> Issue Type: Test
> Components: io-py-gcp
> Reporter: Alex Amato
> Priority: P3
> Time Spent: 17h 20m
> Remaining Estimate: 0h
>
> Reference PRs (see the BigQuery IO example) and a detailed explanation of
> what's needed to instrument this IO with Request Count metrics can be found
> in this handoff doc:
> https://docs.google.com/document/d/1lrz2wE5Dl4zlUfPAenjXIQyleZvqevqoxhyE85aj4sc/edit
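For context, the BigQuery IO instruments its API calls with a
`ServiceCallMetric`; a rough sketch of that same pattern applied to Spanner
follows. The method name, resource string format, and error handling here are
illustrative assumptions, not the final implementation:

```python
from apache_beam.internal.metrics.metric import ServiceCallMetric
from apache_beam.metrics import monitoring_infos


def _spanner_request_count_metric(project_id, database_id, table_id):
  # Labels mirror the pattern used by the BigQuery IO instrumentation.
  # The resource string below is an illustrative assumption; real code
  # would mint it via apache_beam.io.gcp.resource_identifiers.
  labels = {
      monitoring_infos.SERVICE_LABEL: 'Spanner',
      monitoring_infos.METHOD_LABEL: 'Read',
      monitoring_infos.RESOURCE_LABEL: (
          '//spanner.googleapis.com/projects/%s/databases/%s/tables/%s' %
          (project_id, database_id, table_id)),
  }
  return ServiceCallMetric(
      request_count_urn=monitoring_infos.API_REQUEST_COUNT_URN,
      base_labels=labels)


# Usage around the actual client call inside the IO:
metric = _spanner_request_count_metric('my-project', 'my-db', 'my-table')
try:
  # rows = spanner_client.read(...)  # the real API call goes here
  metric.call('ok')
except Exception as e:  # real code would catch the specific API error
  # Record the error status; real code would map e to a canonical code.
  metric.call(getattr(e, 'code', 'error'))
  raise
```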