See
<https://ci-beam.apache.org/job/beam_LoadTests_Java_GBK_Dataflow_V2_Streaming_Java17/88/display/redirect?page=changes>
Changes:
[Luke Cwik] [BEAM-10212] Clean-up comments, remove rawtypes usage.
[noreply] [BEAM-11934] Add enable_file_dynamic_sharding to allow DataflowRunner
[noreply] [BEAM-12777] Create symlink for `current` directory (#17105)
[noreply] [BEAM-14020] Adding SchemaTransform, SchemaTransformProvider,
[noreply] [BEAM-13015] Modify metrics to begin and reset to a non-dirty state.
[noreply] [BEAM-14112] Avoid storing a generator in _CustomBigQuerySource (#17100)
[noreply] Populate environment capabilities in v1beta3 protos. (#17042)
[Kyle Weaver] [BEAM-12976] Test a whole pipeline using projection pushdown in BQ IO.
[Kyle Weaver] [BEAM-12976] Enable projection pushdown for Java pipelines on Dataflow,
[noreply] [BEAM-14038] Auto-startup for Python expansion service. (#17035)
[Kyle Weaver] [BEAM-14123] Fix typo in hdfsIntegrationTest task name.
[noreply] [BEAM-13893] improved coverage of jobopts package (#17003)
[noreply] Merge pull request #16977 from [BEAM-12164] Added integration test for
------------------------------------------
[...truncated 280.66 KB...]
generic::internal: The work item requesting state read is no longer valid on the backend. The work has already completed or will be retried. This is expected during autoscaling events.
passed through:
==> dist_proc/windmill/client/streaming_rpc_client.cc:697
==> dist_proc/dax/workflow/****/streaming/merge_windows_fn.cc:222
==> dist_proc/dax/workflow/****/streaming/fnapi_streaming_operators.cc:439
generic::internal: The work item requesting state read is no longer valid on the backend. The work has already completed or will be retried. This is expected during autoscaling events.
passed through:
==> dist_proc/windmill/client/streaming_rpc_client.cc:697
==> dist_proc/dax/workflow/****/streaming/merge_windows_fn.cc:222
[...the same state-read message and stack repeated many more times...]
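These generic::internal entries describe a benign race rather than a test failure: the state read targets a work item the backend has already completed or reassigned, which the message itself notes is expected while autoscaling moves work between workers. A rough illustration of treating such a status as non-fatal (purely hypothetical code, not the actual Windmill client):

    // Hypothetical sketch: when a state read fails because its work item is
    // no longer valid, drop the read instead of failing the pipeline; the
    // backend has either finished the work item or will retry it itself.
    enum StateReadStatus { OK, WORK_ITEM_NO_LONGER_VALID }

    static void onStateReadComplete(StateReadStatus status) {
      if (status == StateReadStatus.WORK_ITEM_NO_LONGER_VALID) {
        // Expected during autoscaling events: the result of this read is
        // moot, so log it and move on.
        System.err.println("Dropping stale state read (work item no longer valid).");
        return;
      }
      // ... handle a successful read ...
    }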
Mar 18, 2022 3:00:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T15:00:06.399Z: Autoscaling: Raised the number of workers to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
[...the same autoscaling message repeated at intervals between 3:04 PM and 3:58 PM...]
Mar 18, 2022 3:59:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T15:59:54.551Z: Autoscaling: Raised the number of workers to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 18, 2022 4:00:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T16:00:25.850Z: Cancel request is committed for workflow job: 2022-03-18_05_59_53-15474650030020574128.
Mar 18, 2022 4:00:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T16:00:25.886Z: Cleaning up.
Mar 18, 2022 4:00:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T16:00:25.968Z: Stopping worker pool...
Mar 18, 2022 4:00:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T16:00:26.044Z: Stopping worker pool...
Mar 18, 2022 4:03:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T16:03:16.262Z: Autoscaling: Reduced the number of workers to 0 based on low average worker CPU utilization, and the pipeline having sufficiently low backlog and keeping up with input rate.
Mar 18, 2022 4:03:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-18T16:03:16.331Z: Worker pool stopped.
Mar 18, 2022 4:03:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
INFO: Job 2022-03-18_05_59_53-15474650030020574128 finished with status CANCELLED.
Load test results for test (ID): 2a41c95e-4da7-437e-a2ef-1ec706046d5a and timestamp: 2022-03-18T12:59:46.581000000Z:

Metric:                                Value:
dataflow_v2_java17_runtime_sec         9672.488
dataflow_v2_java17_total_bytes_count   4.43796322E9
Exception in thread "main" java.lang.RuntimeException: Invalid job state: CANCELLED.
        at org.apache.beam.sdk.loadtests.JobFailure.handleFailure(JobFailure.java:51)
        at org.apache.beam.sdk.loadtests.LoadTest.run(LoadTest.java:139)
        at org.apache.beam.sdk.loadtests.GroupByKeyLoadTest.run(GroupByKeyLoadTest.java:57)
        at org.apache.beam.sdk.loadtests.GroupByKeyLoadTest.main(GroupByKeyLoadTest.java:131)
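The "Invalid job state" failure is the load-test harness converting the job's terminal state into an error: the run ended CANCELLED (a cancel request was committed at the 16:00 mark above) rather than completing, and anything other than a successful completion fails the build. A minimal sketch of that kind of check against Beam's public PipelineResult API (the helper name checkTerminalState is hypothetical, not the actual JobFailure code):

    import org.apache.beam.sdk.PipelineResult;

    public class TerminalStateCheck {
      // Hypothetical helper mirroring what the log suggests
      // JobFailure.handleFailure does: any terminal state other than
      // DONE (here, CANCELLED) is surfaced as a RuntimeException.
      static void checkTerminalState(PipelineResult result) {
        PipelineResult.State state = result.getState();
        if (state != PipelineResult.State.DONE) {
          throw new RuntimeException("Invalid job state: " + state + ".");
        }
      }
    }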
> Task :sdks:java:testing:load-tests:run FAILED
> Task :runners:google-cloud-dataflow-java:cleanUpDockerJavaImages
Untagged: us.gcr.io/apache-beam-testing/java-postcommit-it/java:20220318125740
Untagged: us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:381df7627f67811583b68d338f48b25b8f4b4b8cf3e17c3473aff098bddd27c5
Tag: [us.gcr.io/apache-beam-testing/java-postcommit-it/java:20220318125740] - referencing digest: [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:381df7627f67811583b68d338f48b25b8f4b4b8cf3e17c3473aff098bddd27c5]
Deleted [[us.gcr.io/apache-beam-testing/java-postcommit-it/java:20220318125740] (referencing [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:381df7627f67811583b68d338f48b25b8f4b4b8cf3e17c3473aff098bddd27c5])].
Removing untagged image us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:381df7627f67811583b68d338f48b25b8f4b4b8cf3e17c3473aff098bddd27c5
Digests:
- us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:381df7627f67811583b68d338f48b25b8f4b4b8cf3e17c3473aff098bddd27c5
Deleted [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:381df7627f67811583b68d338f48b25b8f4b4b8cf3e17c3473aff098bddd27c5].
Removing untagged image us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:285da2d04e1dba971666500a8615417be3837c780c160035b2a5e3f9be5365a7
Digests:
- us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:285da2d04e1dba971666500a8615417be3837c780c160035b2a5e3f9be5365a7
Deleted [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:285da2d04e1dba971666500a8615417be3837c780c160035b2a5e3f9be5365a7].
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':sdks:java:testing:load-tests:run'.
> Process 'command '/usr/lib/jvm/java-8-openjdk-amd64/bin/java'' finished with non-zero exit value 1
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with
Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings
and determine if they come from your own scripts or plugins.
See
https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings
Execution optimizations have been disabled for 1 invalid unit(s) of work during
this build to ensure correctness.
Please consult deprecation warnings for more details.
BUILD FAILED in 3h 6m 12s
109 actionable tasks: 72 executed, 33 from cache, 4 up-to-date
Publishing build scan...
https://gradle.com/s/2drj4htj5ezko
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure