tvalentyn commented on code in PR #25242:
URL: https://github.com/apache/beam/pull/25242#discussion_r1096322490


##########
sdks/python/apache_beam/testing/benchmarks/cloudml/cloudml_benchmark_test.py:
##########
@@ -32,6 +35,27 @@
 _OUTPUT_GCS_BUCKET_ROOT = 'gs://temp-storage-for-end-to-end-tests/tft/'
 
 
+def _publish_metrics(pipeline, metric_value, metrics_table):
+  influx_options = InfluxDBMetricsPublisherOptions(
+      metrics_table,
+      pipeline.get_option('influx_db_name'),
+      pipeline.get_option('influx_hostname'),
+      os.getenv('INFLUXDB_USER'),
+      os.getenv('INFLUXDB_USER_PASSWORD'),
+  )
+  metric_reader = MetricsReader(
+      project_name=pipeline.get_option('project'),
+      bq_table=metrics_table,
+      bq_dataset=pipeline.get_option('metrics_dataset'),
+      publish_to_bq=True,
+      influxdb_options=influx_options,
+  )
+  metric_reader.publish_values([(
+      'runtime',
+      metric_value,
+  )])

Review Comment:
   1) It would help to include the units in the metric name, e.g. `runtime_sec`.
   2) Do we publish to InfluxDB just to have Grafana dashboards?
   3) Should this helper be defined externally to the test? Is it something we
   can reuse across multiple benchmark files (parameterized by some test id)?
   See the sketch after this list.
   4) It looks like it only publishes runtime, so let's call it `publish_runtime()`.
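   
   A minimal sketch of 3) and 4) combined, assuming the helper moves to a
   shared benchmark utilities module; `publish_runtime`, its signature, and
   the module placement are suggestions here, not existing Beam APIs:
   
   ```python
   # Hypothetical shared helper, e.g. in a common benchmark utils module,
   # so multiple benchmark files can reuse it instead of redefining it.
   import os
   
   from apache_beam.testing.load_tests.load_test_metrics_utils import (
       InfluxDBMetricsPublisherOptions, MetricsReader)
   
   
   def publish_runtime(pipeline, runtime_sec, metrics_table):
     """Publishes one runtime measurement, in seconds, to BQ and InfluxDB."""
     influx_options = InfluxDBMetricsPublisherOptions(
         metrics_table,
         pipeline.get_option('influx_db_name'),
         pipeline.get_option('influx_hostname'),
         os.getenv('INFLUXDB_USER'),
         os.getenv('INFLUXDB_USER_PASSWORD'),
     )
     metric_reader = MetricsReader(
         project_name=pipeline.get_option('project'),
         bq_table=metrics_table,
         bq_dataset=pipeline.get_option('metrics_dataset'),
         publish_to_bq=True,
         influxdb_options=influx_options,
     )
     # Unit is part of the metric name, per 1) above.
     metric_reader.publish_values([('runtime_sec', runtime_sec)])
   ```
   
   Each benchmark could then publish with a one-liner, with `metrics_table`
   acting as the per-test id, e.g.
   `publish_runtime(pipeline, end_time - start_time, metrics_table)`.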
   


