[ 
https://issues.apache.org/jira/browse/BEAM-11092?focusedWorklogId=513113&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-513113
 ]

ASF GitHub Bot logged work on BEAM-11092:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Nov/20 19:12
            Start Date: 17/Nov/20 19:12
    Worklog Time Spent: 10m 
      Work Description: ajamato commented on a change in pull request #13217:
URL: https://github.com/apache/beam/pull/13217#discussion_r525388280



##########
File path: sdks/python/apache_beam/internal/metrics/metric.py
##########
@@ -136,3 +165,64 @@ def log_metrics(self, reset_after_logging=False):
           self._last_logging_millis = current_millis
       finally:
         self._lock.release()
+
+
+class ServiceCallMetric(object):
+  """Metric class which records Service API call metrics.
+
+  This class will capture a request count metric for the specified
+  request_count_urn and base_labels.
+
+  When call() is invoked the status must be provided, which will
+  be converted to a canonical GCP status code, if possible.
+
+  TODO(ajamato): Add Request latency metric.
+  """
+  def __init__(self, request_count_urn, base_labels=None):
+    # type: (str, Optional[Dict[str, str]]) -> None

Review comment:
       ack
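
The docstring above says `call()` takes a status that is converted to a canonical GCP status code where possible. As an illustrative, self-contained sketch (not Beam's actual implementation; the class, mapping table, and counter storage here are stand-ins for the real `ServiceCallMetric`, which reports through Beam's metrics system):

```python
from collections import Counter

# Hypothetical mapping; the real class converts HTTP statuses to canonical
# GCP status code strings (e.g. 200 -> 'ok', 404 -> 'not_found').
HTTP_TO_CANONICAL_GCP_STATUS = {
    200: 'ok',
    400: 'invalid_argument',
    403: 'permission_denied',
    404: 'not_found',
    429: 'resource_exhausted',
    500: 'internal',
    503: 'unavailable',
}


class ServiceCallMetricSketch(object):
  """Illustrative stand-in for apache_beam's ServiceCallMetric."""

  def __init__(self, request_count_urn, base_labels=None):
    # type: (str, dict) -> None
    self.request_count_urn = request_count_urn
    self.base_labels = base_labels or {}
    # The real class reports counters to the metrics system; we just tally.
    self.counts = Counter()

  def call(self, status):
    # Accept an int HTTP code or an already-canonical string status.
    if isinstance(status, int):
      status = HTTP_TO_CANONICAL_GCP_STATUS.get(status, 'unknown')
    labels = dict(self.base_labels, status=status)
    # One request-count metric per (urn, labels-including-status) pair.
    key = (self.request_count_urn, tuple(sorted(labels.items())))
    self.counts[key] += 1
```

This mirrors the shape described in the docstring: one counter per URN plus base labels, with the outcome status folded into the label set at call time.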

##########
File path: sdks/python/apache_beam/io/gcp/bigquery_tools.py
##########
@@ -566,10 +568,30 @@ def _insert_all_rows(
             skipInvalidRows=skip_invalid_rows,
             # TODO(silviuc): Should have an option for ignoreUnknownValues?
             rows=rows))
+
+    resource = '//bigquery.googleapis.com/projects/%s/datasets/%s/tables/%s' % (
+        project_id, dataset_id, table_id)
+
+    labels = {
+        # TODO(ajamato): Add Ptransform label.
+        monitoring_infos.SERVICE_LABEL: 'BigQuery',
+        monitoring_infos.METHOD_LABEL: 'BigQueryBatchWrite',

Review comment:
       Added a clarifying comment. I don't want this to be that specific; i.e., 
if new APIs are introduced that do the same thing from a Dataflow pipeline's 
perspective ("write batches of elements to BigQuery"), they should report 
under the same method label.
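
A minimal sketch of how this hunk assembles the monitored-resource name and metric labels. The bare `SERVICE_LABEL`/`METHOD_LABEL`/`RESOURCE_LABEL` constants here are illustrative stand-ins for the `monitoring_infos` label keys (the resource label key in particular is an assumption):

```python
# Stand-ins for monitoring_infos.SERVICE_LABEL etc. (illustrative only).
SERVICE_LABEL = 'SERVICE'
METHOD_LABEL = 'METHOD'
RESOURCE_LABEL = 'RESOURCE'


def bigquery_write_labels(project_id, dataset_id, table_id):
  # Full resource name of the destination table.
  resource = '//bigquery.googleapis.com/projects/%s/datasets/%s/tables/%s' % (
      project_id, dataset_id, table_id)
  return {
      SERVICE_LABEL: 'BigQuery',
      # Deliberately generic: any API that, from the pipeline's perspective,
      # writes batches of elements to BigQuery should share this label.
      METHOD_LABEL: 'BigQueryBatchWrite',
      RESOURCE_LABEL: resource,
  }
```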

##########
File path: sdks/python/apache_beam/io/gcp/bigquery_tools.py
##########
@@ -566,10 +568,30 @@ def _insert_all_rows(
             skipInvalidRows=skip_invalid_rows,
             # TODO(silviuc): Should have an option for ignoreUnknownValues?
             rows=rows))
+
+    resource = '//bigquery.googleapis.com/projects/%s/datasets/%s/tables/%s' % (
+        project_id, dataset_id, table_id)

Review comment:
       done




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 513113)
    Time Spent: 2h 40m  (was: 2.5h)

> Add support for GCP IO metrics
> ------------------------------
>
>                 Key: BEAM-11092
>                 URL: https://issues.apache.org/jira/browse/BEAM-11092
>             Project: Beam
>          Issue Type: Test
>          Components: beam-model
>            Reporter: Alex Amato
>            Assignee: Alex Amato
>            Priority: P2
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This involves collecting new metrics from the IO libraries and adding 
> "process wide" "harness metric" APIs: metrics which are not bound to a bundle.
> [https://s.apache.org/beam-gcp-debuggability]
> It also involves adding histogram-style metrics.
> [https://s.apache.org/beam-histogram-metrics]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
