See <https://ci-beam.apache.org/job/beam_LoadTests_Java_GBK_Dataflow_V2_Streaming_Java11/282/display/redirect>
Changes:
------------------------------------------
[...truncated 117.90 KB...]
dist_proc/windmill/client/streaming_rpc_client.cc:697
==>
dist_proc/dax/workflow/****/streaming/merge_windows_fn.cc:222
generic::internal: The work item requesting state read is no longer valid on the backend. The work has already completed or will be retried. This is expected during autoscaling events.
passed through:
==> dist_proc/windmill/client/streaming_rpc_client.cc:697
==> dist_proc/dax/workflow/****/streaming/merge_windows_fn.cc:222
==> dist_proc/dax/workflow/****/streaming/fnapi_streaming_operators.cc:439
generic::internal: The work item requesting state read is no longer valid on the backend. The work has already completed or will be retried. This is expected during autoscaling events.
passed through:
==> dist_proc/windmill/client/streaming_rpc_client.cc:697
==> dist_proc/dax/workflow/****/streaming/merge_windows_fn.cc:222
Mar 28, 2022 2:48:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T14:48:17.429Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 2:51:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T14:51:30.689Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 2:52:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T14:52:33.818Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 2:54:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T14:54:37.076Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 2:56:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T14:56:36.342Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 2:59:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T14:59:31.142Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:02:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:02:41.679Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:03:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:03:35.934Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:07:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:07:45.650Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:09:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:08:58.821Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:11:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:11:48.816Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:13:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:13:52.121Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:17:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:17:51.709Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:18:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:18:54.607Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:19:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:19:57.366Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:23:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:23:02.434Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:25:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:25:01.365Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:26:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:26:04.182Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:28:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:28:07.015Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:29:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:29:10.010Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:35:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:35:10.911Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:38:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:38:05.389Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:41:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:41:14.976Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:44:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:44:19.640Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:47:18.174Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:52:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:52:11.485Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:53:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:53:24.708Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:58:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:58:25.463Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 3:59:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T15:59:28.609Z: Autoscaling: Raised the number of ****s to 5 so that the pipeline can catch up with its backlog and keep up with its input rate.
Mar 28, 2022 4:00:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T16:00:37.806Z: Cancel request is committed for workflow job: 2022-03-28_05_19_47-12780970051282298737.
Mar 28, 2022 4:00:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T16:00:37.843Z: Cleaning up.
Mar 28, 2022 4:00:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T16:00:37.936Z: Stopping **** pool...
Mar 28, 2022 4:00:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T16:00:38.010Z: Stopping **** pool...
Mar 28, 2022 4:03:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T16:03:23.833Z: Autoscaling: Reduced the number of ****s to 0 based on low average **** CPU utilization, and the pipeline having sufficiently low backlog and keeping up with input rate.
Mar 28, 2022 4:03:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-03-28T16:03:23.863Z: Worker pool stopped.
Mar 28, 2022 4:03:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
INFO: Job 2022-03-28_05_19_47-12780970051282298737 finished with status CANCELLED.
Load test results for test (ID): 7a0dc7c8-d225-4467-8a11-8b1258843273 and timestamp: 2022-03-28T12:19:42.112000000Z:

Metric:                                  Value:
dataflow_v2_java11_runtime_sec           6675.982
dataflow_v2_java11_total_bytes_count     4.49950162E9
Exception in thread "main" java.lang.RuntimeException: Invalid job state: CANCELLED.
    at org.apache.beam.sdk.loadtests.JobFailure.handleFailure(JobFailure.java:51)
    at org.apache.beam.sdk.loadtests.LoadTest.run(LoadTest.java:139)
    at org.apache.beam.sdk.loadtests.GroupByKeyLoadTest.run(GroupByKeyLoadTest.java:57)
    at org.apache.beam.sdk.loadtests.GroupByKeyLoadTest.main(GroupByKeyLoadTest.java:131)
> Task :sdks:java:testing:load-tests:run FAILED
> Task :runners:google-cloud-dataflow-java:cleanUpDockerJavaImages
Untagged: us.gcr.io/apache-beam-testing/java-postcommit-it/java:20220328121720
Untagged: us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:47ea363a771ca75344b2b2a71ef3f1ee358a1a17184d34b1ddc73bbf55a0a9fd
Tag: [us.gcr.io/apache-beam-testing/java-postcommit-it/java:20220328121720]
- referencing digest: [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:47ea363a771ca75344b2b2a71ef3f1ee358a1a17184d34b1ddc73bbf55a0a9fd]
Deleted [[us.gcr.io/apache-beam-testing/java-postcommit-it/java:20220328121720] (referencing [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:47ea363a771ca75344b2b2a71ef3f1ee358a1a17184d34b1ddc73bbf55a0a9fd])].
Removing untagged image us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:47ea363a771ca75344b2b2a71ef3f1ee358a1a17184d34b1ddc73bbf55a0a9fd
Digests:
- us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:47ea363a771ca75344b2b2a71ef3f1ee358a1a17184d34b1ddc73bbf55a0a9fd
Deleted [us.gcr.io/apache-beam-testing/java-postcommit-it/java@sha256:47ea363a771ca75344b2b2a71ef3f1ee358a1a17184d34b1ddc73bbf55a0a9fd].
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:testing:load-tests:run'.
> Process 'command '/usr/lib/jvm/java-8-openjdk-amd64/bin/java'' finished with non-zero exit value 1

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.3.2/userguide/command_line_interface.html#sec:command_line_warnings

Execution optimizations have been disabled for 1 invalid unit(s) of work during this build to ensure correctness.
Please consult deprecation warnings for more details.

BUILD FAILED in 3h 46m 33s

109 actionable tasks: 72 executed, 33 from cache, 4 up-to-date

Publishing build scan...
https://gradle.com/s/r3pekovdeykag
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]