See
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/5023/display/redirect>
Changes:
------------------------------------------
[...truncated 222.29 KB...]
        # System metrics
        MetricResultMatcher(
            name='ElementCount',
            labels={
                "output_user_name": "generate_metrics-out0",
                "original_name": "generate_metrics-out0-ElementCount"
            },
            attempted=len(MESSAGES_TO_PUBLISH),
            committed=len(MESSAGES_TO_PUBLISH),
        ),
        # User Counter Metrics.
        MetricResultMatcher(
            name='double_msg_counter_name',
            namespace=METRIC_NAMESPACE,
            step='generate_metrics',
            attempted=len(MESSAGES_TO_PUBLISH) * 2,
            committed=len(MESSAGES_TO_PUBLISH) * 2),
        MetricResultMatcher(
            name='msg_len_dist_metric_name',
            namespace=METRIC_NAMESPACE,
            step='generate_metrics',
            attempted=DistributionMatcher(
                sum_value=len(''.join(MESSAGES_TO_PUBLISH)),
                count_value=len(MESSAGES_TO_PUBLISH),
                min_value=len(MESSAGES_TO_PUBLISH[0]),
                max_value=len(MESSAGES_TO_PUBLISH[1])),
            committed=DistributionMatcher(
                sum_value=len(''.join(MESSAGES_TO_PUBLISH)),
                count_value=len(MESSAGES_TO_PUBLISH),
                min_value=len(MESSAGES_TO_PUBLISH[0]),
                max_value=len(MESSAGES_TO_PUBLISH[1]))),
    ]
    metrics = result.metrics().all_metrics()
    errors = metric_result_matchers.verify_all(metrics, matchers)
>   self.assertFalse(errors, str(errors))
E AssertionError: "Unable to match metrics for matcher name:
'ElementCount' (label_key: 'output_user_name' label_value:
'generate_metrics-out0'). (label_key: 'original_name' label_value:
'generate_metrics-out0-ElementCount'). attempted: <3> committed: <3>Unable to
match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'double_msg_counter_name' step: 'generate_metrics' attempted: <6>
committed: <6>Unable to match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'msg_len_dist_metric_name' step: 'generate_metrics' attempted:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11> committed:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11>\nActual
MetricResults:\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalVcpuTime),
labels={'original_name': 'Service-cpu_num_seconds'}), committed=718,
attempted=718)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalMemoryUsage),
labels={'original_name': 'Service-mem_mb_seconds'}), committed=2943550,
attempted=2943550)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalPdUsage),
labels={'original_name': 'Service-pd_gb_seconds'}), committed=10779,
attempted=10779)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalSsdUsage),
labels={'original_name': 'Service-pd_ssd_gb_seconds'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalShuffleDataProcessed),
labels={'original_name': 'Service-shuffle_service_actual_gb'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=BillableShuffleDataProcessed),
labels={'original_name': 'Service-shuffle_service_chargeable_gb'}),
committed=0, attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalStreamingDataProcessed),
labels={'original_name': 'Service-streaming_service_gb'}), committed=None,
attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalDcuUsage),
labels={'original_name': 'ServiceExperiment-micro-dcu-billing-v15'}),
committed=0, attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalGpuTime),
labels={'original_name': 'Service-gpu_num_seconds'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalSeCuUsage),
labels={'original_name': 'Service-streaming_engine_compute_unit'}),
committed=None, attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=BacklogBytes),
labels={'execution_step': 'F0', 'original_name': 'dataflow_backlog_size-F0'}),
committed=11, attempted=11)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=SystemWatermark),
labels={'execution_step': 'F0'}), committed=None,
attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentVcpuCount),
labels={'original_name': 'Service-cpu_num'}), committed=2,
attempted=2)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentMemoryUsage),
labels={'original_name': 'Service-mem_mb'}), committed=8192,
attempted=8192)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentPdUsage),
labels={'original_name': 'Service-pd_gb'}), committed=30,
attempted=30)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentSsdUsage),
labels={'original_name': 'Service-pd_ssd_gb'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentGpuCount),
labels={'original_name': 'Service-gpu_num'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=DataWatermark),
labels={'execution_step': 'F0'}), committed=None,
attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=DataLag),
labels={'execution_step': 'F0'}), committed=407364098,
attempted=407364098)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=SystemLag),
labels={'execution_step': 'F0'}), committed=1364098,
attempted=1364098)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=ElementCount),
labels={'output_user_name': 'ReadFromPubSub/Read-out0', 'original_name':
'ReadFromPubSub/Read-out0-ElementCount'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=MeanByteCount),
labels={'output_user_name': 'ReadFromPubSub/Read-out0', 'original_name':
'ReadFromPubSub/Read-out0-MeanByteCount'}), committed=22,
attempted=22)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=ElementCount),
labels={'output_user_name': '', 'original_name': '-ElementCount'}),
committed=0, attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=MeanByteCount),
labels={'output_user_name': '', 'original_name': '-MeanByteCount'}),
committed=0, attempted=0)" is not false : Unable to match metrics for matcher
name: 'ElementCount' (label_key: 'output_user_name' label_value:
'generate_metrics-out0'). (label_key: 'original_name' label_value:
'generate_metrics-out0-ElementCount'). attempted: <3> committed: <3>Unable to
match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'double_msg_counter_name' step: 'generate_metrics' attempted: <6>
committed: <6>Unable to match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'msg_len_dist_metric_name' step: 'generate_metrics' attempted:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11> committed:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11>
E Actual MetricResults:
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalVcpuTime),
labels={'original_name': 'Service-cpu_num_seconds'}), committed=718,
attempted=718)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalMemoryUsage),
labels={'original_name': 'Service-mem_mb_seconds'}), committed=2943550,
attempted=2943550)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalPdUsage),
labels={'original_name': 'Service-pd_gb_seconds'}), committed=10779,
attempted=10779)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalSsdUsage),
labels={'original_name': 'Service-pd_ssd_gb_seconds'}), committed=0,
attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalShuffleDataProcessed),
labels={'original_name': 'Service-shuffle_service_actual_gb'}), committed=0,
attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=BillableShuffleDataProcessed),
labels={'original_name': 'Service-shuffle_service_chargeable_gb'}),
committed=0, attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalStreamingDataProcessed),
labels={'original_name': 'Service-streaming_service_gb'}), committed=None,
attempted=None)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalDcuUsage),
labels={'original_name': 'ServiceExperiment-micro-dcu-billing-v15'}),
committed=0, attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalGpuTime),
labels={'original_name': 'Service-gpu_num_seconds'}), committed=0,
attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalSeCuUsage),
labels={'original_name': 'Service-streaming_engine_compute_unit'}),
committed=None, attempted=None)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=BacklogBytes),
labels={'execution_step': 'F0', 'original_name': 'dataflow_backlog_size-F0'}),
committed=11, attempted=11)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=SystemWatermark),
labels={'execution_step': 'F0'}), committed=None, attempted=None)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentVcpuCount),
labels={'original_name': 'Service-cpu_num'}), committed=2, attempted=2)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentMemoryUsage),
labels={'original_name': 'Service-mem_mb'}), committed=8192, attempted=8192)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentPdUsage),
labels={'original_name': 'Service-pd_gb'}), committed=30, attempted=30)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentSsdUsage),
labels={'original_name': 'Service-pd_ssd_gb'}), committed=0, attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentGpuCount),
labels={'original_name': 'Service-gpu_num'}), committed=0, attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=DataWatermark),
labels={'execution_step': 'F0'}), committed=None, attempted=None)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=DataLag),
labels={'execution_step': 'F0'}), committed=407364098, attempted=407364098)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=SystemLag),
labels={'execution_step': 'F0'}), committed=1364098, attempted=1364098)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=ElementCount),
labels={'output_user_name': 'ReadFromPubSub/Read-out0', 'original_name':
'ReadFromPubSub/Read-out0-ElementCount'}), committed=0, attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=MeanByteCount),
labels={'output_user_name': 'ReadFromPubSub/Read-out0', 'original_name':
'ReadFromPubSub/Read-out0-MeanByteCount'}), committed=22, attempted=22)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=ElementCount),
labels={'output_user_name': '', 'original_name': '-ElementCount'}),
committed=0, attempted=0)
E MetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=MeanByteCount),
labels={'output_user_name': '', 'original_name': '-MeanByteCount'}),
committed=0, attempted=0)
apache_beam/runners/dataflow/dataflow_exercise_streaming_metrics_pipeline_test.py:168:
AssertionError
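For context, a minimal sketch of a DoFn that would emit the user metrics the matchers above expect. This is illustrative only, not the actual dataflow_exercise_streaming_metrics_pipeline source; the class name GenerateMetricsDoFn and the METRIC_NAMESPACE value here are placeholders (the real test matches on the DoFn's fully qualified class name as the namespace).

```python
# Hedged sketch: illustrative DoFn, not the pipeline-under-test's real code.
import apache_beam as beam
from apache_beam.metrics import Metrics

METRIC_NAMESPACE = 'example.streaming.metrics'  # placeholder namespace


class GenerateMetricsDoFn(beam.DoFn):
  def process(self, element):
    # Counter incremented twice per message, matching
    # attempted == len(MESSAGES_TO_PUBLISH) * 2 in the matcher above.
    Metrics.counter(METRIC_NAMESPACE, 'double_msg_counter_name').inc(2)
    # Distribution of message lengths, matching the DistributionMatcher's
    # sum/count/min/max over MESSAGES_TO_PUBLISH.
    Metrics.distribution(
        METRIC_NAMESPACE, 'msg_len_dist_metric_name').update(len(element))
    yield element
```

In the failing run above, no metrics of this kind appear in the actual MetricResults; only dataflow/v1b3 system metrics were returned.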
------------------------------ Captured log call -------------------------------
INFO apache_beam.runners.portability.stager:stager.py:772
Executing command:
['<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/bin/python3.7',>
'-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r',
'/tmp/tmp5i_s512f/tmp_requirements.txt', '--exists-action', 'i', '--no-deps',
'--implementation', 'cp', '--abi', 'cp37m', '--platform',
'manylinux2014_x86_64']
INFO apache_beam.runners.portability.stager:stager.py:330 Copying
Beam SDK
"<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/sdks/python/build/apache_beam-2.49.0.dev0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl">
to staging location.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:454 Pipeline
has additional dependencies to be installed in SDK worker container, consider
using the SDK container image pre-building workflow to avoid repetitive
installations. Learn more on
https://cloud.google.com/dataflow/docs/guides/using-custom-containers#prebuild
INFO root:environments.py:296 Using provided Python SDK container
image: gcr.io/cloud-dataflow/v1beta3/beam_python3.7_sdk:beam-master-20230422
INFO root:environments.py:304 Python SDK container image set to
"gcr.io/cloud-dataflow/v1beta3/beam_python3.7_sdk:beam-master-20230422" for
Docker environment
INFO apache_beam.internal.gcp.auth:auth.py:130 Setting socket
default timeout to 60 seconds.
INFO apache_beam.internal.gcp.auth:auth.py:133 socket default
timeout is 60.0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/requirements.txt...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/requirements.txt
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/pickled_main_session...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/pickled_main_session
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/mock-2.0.0-py2.py3-none-any.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/mock-2.0.0-py2.py3-none-any.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/seaborn-0.12.2-py3-none-any.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/seaborn-0.12.2-py3-none-any.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/PyHamcrest-1.10.1-py3-none-any.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/PyHamcrest-1.10.1-py3-none-any.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/parameterized-0.7.5-py2.py3-none-any.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/parameterized-0.7.5-py2.py3-none-any.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/tensorflow_transform-1.13.0-py3-none-any.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/tensorflow_transform-1.13.0-py3-none-any.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/tfx_bsl-1.13.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/tfx_bsl-1.13.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
in 1 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/matplotlib-3.7.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/matplotlib-3.7.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
in 1 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/apache_beam-2.49.0.dev0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/apache_beam-2.49.0.dev0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:734 Starting GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/pipeline.pb...
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:753 Completed GCS
upload to
gs://temp-storage-for-end-to-end-tests/staging-it/beamapp-jenkins-0617102746-802865-nthvafm2.1686997666.803050/pipeline.pb
in 0 seconds.
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:909 Create job:
<Job
clientRequestId: '20230617102746804171-2535'
createTime: '2023-06-17T10:27:53.959996Z'
currentStateTime: '1970-01-01T00:00:00Z'
id: '2023-06-17_03_27_53-11806918371062145973'
location: 'us-central1'
name: 'beamapp-jenkins-0617102746-802865-nthvafm2'
projectId: 'apache-beam-testing'
stageStates: []
startTime: '2023-06-17T10:27:53.959996Z'
steps: []
tempFiles: []
type: TypeValueValuesEnum(JOB_TYPE_STREAMING, 2)>
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:911 Created job
with id: [2023-06-17_03_27_53-11806918371062145973]
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:912 Submitted job:
2023-06-17_03_27_53-11806918371062145973
INFO
apache_beam.runners.dataflow.internal.apiclient:apiclient.py:918 To access the
Dataflow monitoring console, please navigate to
https://console.cloud.google.com/dataflow/jobs/us-central1/2023-06-17_03_27_53-11806918371062145973?project=apache-beam-testing
INFO
apache_beam.runners.dataflow.test_dataflow_runner:test_dataflow_runner.py:58
Console log:
INFO
apache_beam.runners.dataflow.test_dataflow_runner:test_dataflow_runner.py:59
https://console.cloud.google.com/dataflow/jobs/us-central1/2023-06-17_03_27_53-11806918371062145973?project=apache-beam-testing
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:193 Job
2023-06-17_03_27_53-11806918371062145973 is in state JOB_STATE_RUNNING
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:27:55.196Z: JOB_MESSAGE_WARNING: Autoscaling is enabled for
Dataflow Streaming Engine. Workers will scale between 1 and 100 unless
maxNumWorkers is specified.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:27:55.310Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job
2023-06-17_03_27_53-11806918371062145973. The number of workers will be between
1 and 100.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:27:55.336Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically
enabled for job 2023-06-17_03_27_53-11806918371062145973.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:27:59.272Z: JOB_MESSAGE_BASIC: Worker configuration:
e2-standard-2 in us-central1-a.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.044Z: JOB_MESSAGE_DETAILED: Expanding SplittableParDo
operations into optimizable parts.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.079Z: JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton
operations into optimizable parts.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.160Z: JOB_MESSAGE_DETAILED: Expanding CoGroupByKey
operations into optimizable parts.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.195Z: JOB_MESSAGE_DETAILED: Expanding
SplittableProcessKeyed operations into optimizable parts.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.220Z: JOB_MESSAGE_DETAILED: Expanding GroupByKey operations
into streaming Read/Write steps
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.252Z: JOB_MESSAGE_DETAILED: Lifting
ValueCombiningMappingFns into MergeBucketsMappingFns
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.286Z: JOB_MESSAGE_DEBUG: Annotating graph with Autotuner
information.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.328Z: JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read,
Write, and Flatten operations
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.368Z: JOB_MESSAGE_DETAILED: Fusing consumer
generate_metrics into ReadFromPubSub/Read
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.398Z: JOB_MESSAGE_DETAILED: Fusing consumer
dump_to_pub/ToProtobuf into generate_metrics
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.430Z: JOB_MESSAGE_DETAILED: Fusing consumer
dump_to_pub/Write into dump_to_pub/ToProtobuf
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.470Z: JOB_MESSAGE_DEBUG: Workflow config is missing a
default resource spec.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.500Z: JOB_MESSAGE_DEBUG: Adding StepResource setup and
teardown to workflow graph.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.532Z: JOB_MESSAGE_BASIC: Running job using Streaming Engine
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.564Z: JOB_MESSAGE_BASIC: Using Cloud KMS key to protect
persistent state.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.655Z: JOB_MESSAGE_DEBUG: Adding workflow start and stop
steps.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.675Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.911Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:02.945Z: JOB_MESSAGE_BASIC: Starting 1 workers in
us-central1-a...
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:03.008Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:33.014Z: JOB_MESSAGE_BASIC: Your project already contains 100
Dataflow-created metric descriptors, so new user metrics of the form
custom.googleapis.com/* will not be created. However, all user metrics are also
available in the metric dataflow.googleapis.com/job/user_counter. If you rely
on the custom metrics, you can delete old / unused metric descriptors. See
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
and
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:28:46.067Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number
of workers to 1 so that the pipeline can catch up with its backlog and keep up
with its input rate.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:31:48.934Z: JOB_MESSAGE_BASIC: Worker configuration:
e2-standard-2 in us-central1-a.
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:32:07.873Z: JOB_MESSAGE_BASIC: Your project already contains 100
Dataflow-created metric descriptors, so new user metrics of the form
custom.googleapis.com/* will not be created. However, all user metrics are also
available in the metric dataflow.googleapis.com/job/user_counter. If you rely
on the custom metrics, you can delete old / unused metric descriptors. See
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
and
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:238
2023-06-17T10:32:19.350Z: JOB_MESSAGE_DETAILED: Workers have started
successfully.
WARNING
apache_beam.runners.dataflow.dataflow_runner:dataflow_runner.py:252 Timing out
on waiting for job 2023-06-17_03_27_53-11806918371062145973 after 303 seconds
=============================== warnings summary ===============================
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/hdfs/config.py:15
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/hdfs/config.py>:15:
DeprecationWarning: the imp module is deprecated in favour of importlib; see
the module's documentation for alternative uses
from imp import load_source
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:121
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py>:121:
DeprecationWarning: pkg_resources is deprecated as an API
warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2870:
18 warnings
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py>:2870:
DeprecationWarning: Deprecated call to
`pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is
preferred to `pkg_resources.declare_namespace`. See
https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2870:
13 warnings
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py>:2870:
DeprecationWarning: Deprecated call to
`pkg_resources.declare_namespace('google.cloud')`.
Implementing implicit namespace packages (as specified in PEP 420) is
preferred to `pkg_resources.declare_namespace`. See
https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2349
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2349
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2349
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2349
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py>:2349:
DeprecationWarning: Deprecated call to
`pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is
preferred to `pkg_resources.declare_namespace`. See
https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(parent)
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2870
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py>:2870:
DeprecationWarning: Deprecated call to
`pkg_resources.declare_namespace('google.logging')`.
Implementing implicit namespace packages (as specified in PEP 420) is
preferred to `pkg_resources.declare_namespace`. See
https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py:2870
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/pkg_resources/__init__.py>:2870:
DeprecationWarning: Deprecated call to
`pkg_resources.declare_namespace('google.iam')`.
Implementing implicit namespace packages (as specified in PEP 420) is
preferred to `pkg_resources.declare_namespace`. See
https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
../../build/gradleenv/-1734967052/lib/python3.7/site-packages/google/rpc/__init__.py:20
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/build/gradleenv/-1734967052/lib/python3.7/site-packages/google/rpc/__init__.py>:20:
DeprecationWarning: Deprecated call to
`pkg_resources.declare_namespace('google.rpc')`.
Implementing implicit namespace packages (as specified in PEP 420) is
preferred to `pkg_resources.declare_namespace`. See
https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
pkg_resources.declare_namespace(__name__)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
- generated xml file:
<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/sdks/python/pytest_validatesRunnerStreamingTests-df-py37-noxdist.xml>
-
=========================== short test summary info ============================
FAILED
apache_beam/runners/dataflow/dataflow_exercise_streaming_metrics_pipeline_test.py::ExerciseStreamingMetricsPipelineTest::test_streaming_pipeline_returns_expected_user_metrics_fnapi_it
- AssertionError: "Unable to match metrics for matcher name: 'ElementCount'
(label_key: 'output_user_name' label_value: 'generate_metrics-out0').
(label_key: 'original_name' label_value: 'generate_metrics-out0-ElementCount').
attempted: <3> committed: <3>Unable to match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'double_msg_counter_name' step: 'generate_metrics' attempted: <6>
committed: <6>Unable to match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'msg_len_dist_metric_name' step: 'generate_metrics' attempted:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11> committed:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11>\nActual
MetricResults:\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalVcpuTime),
labels={'original_name': 'Service-cpu_num_seconds'}), committed=718,
attempted=718)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalMemoryUsage),
labels={'original_name': 'Service-mem_mb_seconds'}), committed=2943550,
attempted=2943550)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalPdUsage),
labels={'original_name': 'Service-pd_gb_seconds'}), committed=10779,
attempted=10779)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalSsdUsage),
labels={'original_name': 'Service-pd_ssd_gb_seconds'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalShuffleDataProcessed),
labels={'original_name': 'Service-shuffle_service_actual_gb'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=BillableShuffleDataProcessed),
labels={'original_name': 'Service-shuffle_service_chargeable_gb'}),
committed=0, attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalStreamingDataProcessed),
labels={'original_name': 'Service-streaming_service_gb'}), committed=None,
attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalDcuUsage),
labels={'original_name': 'ServiceExperiment-micro-dcu-billing-v15'}),
committed=0, attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalGpuTime),
labels={'original_name': 'Service-gpu_num_seconds'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=TotalSeCuUsage),
labels={'original_name': 'Service-streaming_engine_compute_unit'}),
committed=None, attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=BacklogBytes),
labels={'execution_step': 'F0', 'original_name': 'dataflow_backlog_size-F0'}),
committed=11, attempted=11)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=SystemWatermark),
labels={'execution_step': 'F0'}), committed=None,
attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentVcpuCount),
labels={'original_name': 'Service-cpu_num'}), committed=2,
attempted=2)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentMemoryUsage),
labels={'original_name': 'Service-mem_mb'}), committed=8192,
attempted=8192)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentPdUsage),
labels={'original_name': 'Service-pd_gb'}), committed=30,
attempted=30)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentSsdUsage),
labels={'original_name': 'Service-pd_ssd_gb'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=CurrentGpuCount),
labels={'original_name': 'Service-gpu_num'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=DataWatermark),
labels={'execution_step': 'F0'}), committed=None,
attempted=None)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=DataLag),
labels={'execution_step': 'F0'}), committed=407364098,
attempted=407364098)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=SystemLag),
labels={'execution_step': 'F0'}), committed=1364098,
attempted=1364098)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=ElementCount),
labels={'output_user_name': 'ReadFromPubSub/Read-out0', 'original_name':
'ReadFromPubSub/Read-out0-ElementCount'}), committed=0,
attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=MeanByteCount),
labels={'output_user_name': 'ReadFromPubSub/Read-out0', 'original_name':
'ReadFromPubSub/Read-out0-MeanByteCount'}), committed=22,
attempted=22)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=ElementCount),
labels={'output_user_name': '', 'original_name': '-ElementCount'}),
committed=0, attempted=0)\nMetricResult(key=MetricKey(step=,
metric=MetricName(namespace=dataflow/v1b3, name=MeanByteCount),
labels={'output_user_name': '', 'original_name': '-MeanByteCount'}),
committed=0, attempted=0)" is not false : Unable to match metrics for matcher
name: 'ElementCount' (label_key: 'output_user_name' label_value:
'generate_metrics-out0'). (label_key: 'original_name' label_value:
'generate_metrics-out0-ElementCount'). attempted: <3> committed: <3>Unable to
match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'double_msg_counter_name' step: 'generate_metrics' attempted: <6>
committed: <6>Unable to match metrics for matcher namespace:
'apache_beam.runners.dataflow.dataflow_exercise_streaming_metrics_pipeline.StreamingUserMetricsDoFn'
name: 'msg_len_dist_metric_name' step: 'generate_metrics' attempted:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11> committed:
sum_value: <29> count_value: <3> min_value: <9> max_value: <11>
Actual MetricResults:
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalVcpuTime), labels={'original_name': 'Service-cpu_num_seconds'}),
committed=718, attempted=718)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalMemoryUsage), labels={'original_name': 'Service-mem_mb_seconds'}),
committed=2943550, attempted=2943550)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalPdUsage), labels={'original_name': 'Service-pd_gb_seconds'}),
committed=10779, attempted=10779)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalSsdUsage), labels={'original_name': 'Service-pd_ssd_gb_seconds'}),
committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalShuffleDataProcessed), labels={'original_name':
'Service-shuffle_service_actual_gb'}), committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=BillableShuffleDataProcessed), labels={'original_name':
'Service-shuffle_service_chargeable_gb'}), committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalStreamingDataProcessed), labels={'original_name':
'Service-streaming_service_gb'}), committed=None, attempted=None)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalDcuUsage), labels={'original_name':
'ServiceExperiment-micro-dcu-billing-v15'}), committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalGpuTime), labels={'original_name': 'Service-gpu_num_seconds'}),
committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=TotalSeCuUsage), labels={'original_name':
'Service-streaming_engine_compute_unit'}), committed=None, attempted=None)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=BacklogBytes), labels={'execution_step': 'F0', 'original_name':
'dataflow_backlog_size-F0'}), committed=11, attempted=11)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=SystemWatermark), labels={'execution_step': 'F0'}), committed=None,
attempted=None)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=CurrentVcpuCount), labels={'original_name': 'Service-cpu_num'}),
committed=2, attempted=2)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=CurrentMemoryUsage), labels={'original_name': 'Service-mem_mb'}),
committed=8192, attempted=8192)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=CurrentPdUsage), labels={'original_name': 'Service-pd_gb'}), committed=30,
attempted=30)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=CurrentSsdUsage), labels={'original_name': 'Service-pd_ssd_gb'}),
committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=CurrentGpuCount), labels={'original_name': 'Service-gpu_num'}),
committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=DataWatermark), labels={'execution_step': 'F0'}), committed=None,
attempted=None)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=DataLag), labels={'execution_step': 'F0'}), committed=407364098,
attempted=407364098)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=SystemLag), labels={'execution_step': 'F0'}), committed=1364098,
attempted=1364098)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=ElementCount), labels={'output_user_name': 'ReadFromPubSub/Read-out0',
'original_name': 'ReadFromPubSub/Read-out0-ElementCount'}), committed=0,
attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=MeanByteCount), labels={'output_user_name': 'ReadFromPubSub/Read-out0',
'original_name': 'ReadFromPubSub/Read-out0-MeanByteCount'}), committed=22,
attempted=22)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=ElementCount), labels={'output_user_name': '', 'original_name':
'-ElementCount'}), committed=0, attempted=0)
MetricResult(key=MetricKey(step=, metric=MetricName(namespace=dataflow/v1b3,
name=MeanByteCount), labels={'output_user_name': '', 'original_name':
'-MeanByteCount'}), committed=0, attempted=0)
==== 1 failed, 9 skipped, 6954 deselected, 40 warnings in 564.38s (0:09:24) ====
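As a debugging aid, a small sketch of querying the reported metrics directly through Beam's public metrics API, which the test excerpt above already uses via result.metrics().all_metrics(). It assumes `result` is the PipelineResult from the failed streaming job; the print statements are illustrative.

```python
# Hedged sketch: `result` is assumed to be the PipelineResult from the run above.
from apache_beam.metrics.metric import MetricsFilter

# Query the user counter the matcher could not find.
counters = result.metrics().query(
    MetricsFilter().with_name('double_msg_counter_name'))['counters']
for counter in counters:
  print(counter.key, counter.attempted, counter.committed)

# Query the user distribution the matcher could not find.
dists = result.metrics().query(
    MetricsFilter().with_name('msg_len_dist_metric_name'))['distributions']
for dist in dists:
  print(dist.key, dist.attempted, dist.committed)

# Empty results here mean the user metrics never reached the Dataflow metrics
# API, consistent with the listing above, which contains only dataflow/v1b3
# system metrics.
```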
> Task :sdks:python:test-suites:dataflow:py37:validatesRunnerStreamingTests FAILED
FAILURE: Build failed with an exception.
* Where:
Script
'<https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow_V2/ws/src/sdks/python/test-suites/dataflow/common.gradle'>
line: 283
* What went wrong:
Execution failed for task
':sdks:python:test-suites:dataflow:py37:validatesRunnerStreamingTests'.
> Process 'command 'sh'' finished with non-zero exit value 1
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with
Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings
and determine if they come from your own scripts or plugins.
See
https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 1h 28m 35s
24 actionable tasks: 18 executed, 4 from cache, 2 up-to-date
Publishing build scan...
https://ge.apache.org/s/hncbwhz6y3tea
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]