See
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/3410/display/redirect?page=changes>
Changes:
[relax] DLQ for BQ Storage Api writes
[noreply] Bump google.golang.org/api from 0.76.0 to 0.81.0 in /sdks
[noreply] [BEAM-14519] Add website page for Go dependencies (#17766)
[noreply] [BEAM-11106] Validate that DoFn returns Process continuation when
[noreply] [BEAM-14505] Add Dataflow streaming pipeline update support to the Go
------------------------------------------
[...truncated 353.15 KB...]
Successfully started process 'Gradle Test Executor 4'
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
STANDARD_ERROR
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.40.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
SLF4J: Found binding in
[jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
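(A hedged aside: the test runtime classpath carries several SLF4J bindings at once -- slf4j-jdk14, slf4j-log4j12, slf4j-simple, plus the one shaded into the legacy Dataflow worker jar -- and SLF4J simply picks one, as the "Actual binding" line above reports. This is noise rather than a cause of the failures below; the usual cleanup would be to exclude the unwanted slf4j-* binding artifacts from the integration-test configuration so that only one binding remains.)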
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDirectReadMethod STANDARD_ERROR
May 27, 2022 2:50:33 AM org.apache.beam.runners.dataflow.DataflowRunner
validateSdkContainerImageOptions
WARNING: Prefer --sdkContainerImage over deprecated legacy option
--workerHarnessContainerImage.
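(A hedged aside: this warning only means the job was launched with the legacy --workerHarnessContainerImage flag; on current DataflowPipelineOptions the equivalent setting is --sdkContainerImage=<image path>. The actual image path is project-specific and is not shown in this log.)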
May 27, 2022 2:50:34 AM
org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory
create
INFO: No stagingLocation provided, falling back to gcpTempLocation
May 27, 2022 2:50:35 AM org.apache.beam.runners.dataflow.DataflowRunner
fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files
from the classpath: will stage 386 files. Enable logging at DEBUG level to see
which files will be staged.
May 27, 2022 2:50:38 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery method is set to: DIRECT_READ
May 27, 2022 2:50:38 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
May 27, 2022 2:50:39 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: SQL:
SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`,
`HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND
`HACKER_NEWS`.`score` > 2
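For context, here is a minimal, self-contained sketch of running an equivalent Beam SQL query with SqlTransform over an in-memory schema-aware PCollection. This is illustrative only: the class name, sample rows, and reduced schema are invented, and it is not the BigQueryIOPushDownIT setup, which resolves `beam`.`HACKER_NEWS` through a BigQuery table provider.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsSqlSketch {
      public static void main(String[] args) {
        // Only the fields the query touches; the real HACKER_NEWS table has more.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> hackerNews =
            p.apply(Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "Beam SQL", 5L).build(),
                    Row.withSchema(schema).addValues("bob", "comment", "a reply", 1L).build())
                .withRowSchema(schema));

        // Applying SqlTransform to a single PCollection exposes it under the name PCOLLECTION.
        PCollection<Row> filtered = hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, `type`, `title`, `score` "
                    + "FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }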
May 27, 2022 2:50:39 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery method is set to: DIRECT_READ
May 27, 2022 2:50:39 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
May 27, 2022 2:50:39 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: SQLPlan>
LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
May 27, 2022 2:50:39 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: BEAMPlan>
BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR],
expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)],
expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)],
expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5],
$condition=[$t21])
BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDirectReadMethod FAILED
java.lang.IllegalStateException: Unable to return a default Coder for
BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output
[PCollection@1749712938]. Correct one of the following root causes:
No Coder has been manually specified; you may do so using .setCoder().
Inferring a Coder from the CoderRegistry failed: Cannot provide a coder
for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
Using the default output Coder from the producing PTransform failed:
PTransform.getOutputCoder called.
at
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
at
org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
at
org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
at
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
at
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
at
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:150)
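The error text above already names the two supported fixes: attach a schema with PCollection.setRowSchema, or set a coder explicitly with setCoder. A minimal sketch, independent of this IT, of how a PCollection<Row> gets its coder via setRowSchema; the class name, field names, and sample data are illustrative only.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Schema schema = Schema.builder().addStringField("author").addInt64Field("score").build();
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows =
            p.apply(Create.of("alice:5", "bob:1"))
                .apply(ParDo.of(new DoFn<String, Row>() {
                  @ProcessElement
                  public void process(@Element String line, OutputReceiver<Row> out) {
                    String[] parts = line.split(":");
                    out.output(Row.withSchema(schema)
                        .addValues(parts[0], Long.parseLong(parts[1])).build());
                  }
                }))
                // Without this call, coder inference fails exactly as in the stack trace above:
                // "Cannot provide a coder for a Beam Row. Please provide a schema instead..."
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }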
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDefaultMethod STANDARD_ERROR
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery method is set to: DEFAULT
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: SQL:
SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`,
`HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND
`HACKER_NEWS`.`score` > 2
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery method is set to: DEFAULT
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: SQLPlan>
LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: BEAMPlan>
BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR],
expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)],
expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)],
expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5],
$condition=[$t21])
BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDefaultMethod FAILED
java.lang.IllegalStateException: Unable to return a default Coder for
BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output
[PCollection@852354068]. Correct one of the following root causes:
No Coder has been manually specified; you may do so using .setCoder().
Inferring a Coder from the CoderRegistry failed: Cannot provide a coder
for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
Using the default output Coder from the producing PTransform failed:
PTransform.getOutputCoder called.
at
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
at
org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
at
org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
at
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
at
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
at
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:163)
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDirectReadMethodPushDown STANDARD_ERROR
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery method is set to: DIRECT_READ
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: SQL:
SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`,
`HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND
`HACKER_NEWS`.`score` > 2
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery method is set to: DIRECT_READ
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
May 27, 2022 2:50:40 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: SQLPlan>
LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
May 27, 2022 2:50:41 AM
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
INFO: BEAMPlan>
BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type,
title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8,
'job'))>($5, 2)}, unsupported{}]])
May 27, 2022 2:50:41 AM
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable
buildIOReader
INFO: Pushing down the following filter: (type = 'story' OR type = 'job')
AND score > 2
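For context, the pushed-down projection and filter above map onto what BigQueryIO's Storage Read API (DIRECT_READ) path supports natively: selected fields and a row-restriction predicate. A hedged, illustrative sketch follows; the class name, table reference, and sample restriction are placeholders, not the IT's configuration.

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // placeholder table reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Column projection: only the fields the query needs are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Server-side filter, mirroring the pushed-down predicate in the log.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }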
May 27, 2022 2:50:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Executing pipeline on the Dataflow Service, which will have billing
implications related to Google Compute Engine usage and other Google Cloud
Services.
May 27, 2022 2:50:46 AM org.apache.beam.runners.dataflow.util.PackageUtil
stageClasspathElements
INFO: Uploading 387 files from PipelineOptions.filesToStage to staging
location to prepare for execution.
May 27, 2022 2:50:46 AM
org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes
forFileToStage
INFO: Staging custom dataflow-worker.jar as
beam-runners-google-cloud-dataflow-java-legacy-worker-2.40.0-SNAPSHOT-LgZNp3-kDcUri8NCRLMXe6f9xVI6sY6NyE2DoAOBn40.jar
May 27, 2022 2:50:46 AM org.apache.beam.runners.dataflow.util.PackageUtil
tryStagePackage
INFO: Uploading
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/google-cloud-platform/build/libs/beam-sdks-java-io-google-cloud-platform-2.40.0-SNAPSHOT.jar>
to
gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-google-cloud-platform-2.40.0-SNAPSHOT-MeouBpBGg8lzDCPgWp7bQTkQ_1f2BJ7l9Oh2s0YU8K0.jar
May 27, 2022 2:50:46 AM org.apache.beam.runners.dataflow.util.PackageUtil
tryStagePackage
INFO: Uploading
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/google-cloud-platform/build/libs/beam-sdks-java-io-google-cloud-platform-2.40.0-SNAPSHOT-tests.jar>
to
gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-google-cloud-platform-2.40.0-SNAPSHOT-tests-Gp7GEbAAQkzLmDAVON3H4JIMi4K0CGJ6lf77ja-oUpU.jar
May 27, 2022 2:50:46 AM org.apache.beam.runners.dataflow.util.PackageUtil
tryStagePackage
INFO: Uploading /tmp/test1349268714155389398.zip to
gs://temp-storage-for-perf-tests/loadtests/staging/test-OS4QLNXROsBj9IgrI4dL4QnsTnCig1cZ_pRTzon1K2M.jar
May 27, 2022 2:50:47 AM org.apache.beam.runners.dataflow.util.PackageUtil
stageClasspathElements
INFO: Staging files complete: 384 files cached, 3 files newly uploaded in 0
seconds
May 27, 2022 2:50:47 AM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Staging portable pipeline proto to
gs://temp-storage-for-perf-tests/loadtests/staging/
May 27, 2022 2:50:47 AM org.apache.beam.runners.dataflow.util.PackageUtil
tryStagePackage
INFO: Uploading <151495 bytes, hash
d9f333dffc5eeeac336e1ff7fd81554de99e676686ceebc3dece4979420cd480> to
gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2fMz3_xe7qwzbh_3_YFVTemeZ2aGzuvD3s5JeUIM1IA.pb
May 27, 2022 2:50:50 AM
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with
push-down/Read(BigQueryStorageTableSource) as step s1
May 27, 2022 2:50:50 AM
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
May 27, 2022 2:50:50 AM
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding ParDo(TimeMonitor) as step s3
May 27, 2022 2:50:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Dataflow SDK version: 2.40.0-SNAPSHOT
May 27, 2022 2:50:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: To access the Dataflow monitoring console, please navigate to
https://console.cloud.google.com/dataflow/jobs/us-central1/2022-05-26_19_50_50-3676820533624248379?project=apache-beam-testing
May 27, 2022 2:50:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Submitted job: 2022-05-26_19_50_50-3676820533624248379
May 27, 2022 2:50:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: To cancel the job using the 'gcloud' tool, run:
> gcloud dataflow jobs --project=apache-beam-testing cancel
--region=us-central1 2022-05-26_19_50_50-3676820533624248379
May 27, 2022 2:50:54 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
WARNING: 2022-05-27T02:50:52.309Z: The requested max number of workers (5)
is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
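(A hedged aside: with autoscaling disabled via --autoscalingAlgorithm=NONE, the worker pool size comes from --numWorkers, and the requested maximum (--maxNumWorkers, the "5" in this warning) has no effect, which is all this message is reporting.)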
May 27, 2022 2:51:04 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:04.059Z: Worker configuration: e2-standard-2 in
us-central1-b.
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:04.806Z: Expanding CoGroupByKey operations into
optimizable parts.
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:04.843Z: Expanding GroupByKey operations into
optimizable parts.
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:04.908Z: Lifting ValueCombiningMappingFns into
MergeBucketsMappingFns
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:04.979Z: Fusing adjacent ParDo, Read, Write, and
Flatten operations
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:05.004Z: Fusing consumer
BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into
BeamPushDownIOSourceRel_272/Read Input BQ Rows with
push-down/Read(BigQueryStorageTableSource)
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:05.029Z: Fusing consumer ParDo(TimeMonitor) into
BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:05.438Z: Executing operation
BeamPushDownIOSourceRel_272/Read Input BQ Rows with
push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
May 27, 2022 2:51:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:05.516Z: Starting 5 workers in us-central1-b...
May 27, 2022 2:51:17 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:15.756Z: Your project already contains 100
Dataflow-created metric descriptors, so new user metrics of the form
custom.googleapis.com/* will not be created. However, all user metrics are also
available in the metric dataflow.googleapis.com/job/user_counter. If you rely
on the custom metrics, you can delete old / unused metric descriptors. See
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
and
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
May 27, 2022 2:51:29 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:28.178Z: Autoscaling: Raised the number of workers
to 4 based on the rate of progress in the currently running stage(s).
May 27, 2022 2:51:29 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:28.202Z: Resized worker pool to 4, though goal was
5. This could be a quota issue.
May 27, 2022 2:51:40 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:51:38.407Z: Autoscaling: Raised the number of workers
to 5 based on the rate of progress in the currently running stage(s).
May 27, 2022 2:52:06 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:52:04.996Z: Workers have started successfully.
May 27, 2022 2:52:44 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:52:42.363Z: Finished operation
BeamPushDownIOSourceRel_272/Read Input BQ Rows with
push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
May 27, 2022 2:52:44 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:52:42.541Z: Cleaning up.
May 27, 2022 2:52:44 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:52:42.635Z: Stopping worker pool...
May 27, 2022 2:53:21 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:53:21.029Z: Autoscaling: Resized worker pool from 5 to
0.
May 27, 2022 2:53:21 AM
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2022-05-27T02:53:21.213Z: Worker pool stopped.
May 27, 2022 2:53:26 AM
org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
INFO: Job 2022-05-26_19_50_50-3676820533624248379 finished with status DONE.
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDirectReadMethodPushDown STANDARD_OUT
Load test results for test (ID): e52178fc-ead3-4cda-bd4f-8cd174351785 and
timestamp: 2022-05-27T02:53:26.865000000Z:
Metric:        Value:
fields_read    4375276.0
read_time      10.965
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
> readUsingDirectReadMethodPushDown STANDARD_ERROR
May 27, 2022 2:53:26 AM
org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
WARNING: Missing property -- measurement/database. Metrics won't be
published.
Gradle Test Executor 4 finished executing tests.
> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into:
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into:
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker
for ':' Thread 11,5,main]) completed. Took 2 mins 56.844 secs.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task
':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at:
> <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with
Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings
and determine if they come from your own scripts or plugins.
See
https://docs.gradle.org/7.4/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 9m 4s
165 actionable tasks: 110 executed, 53 from cache, 2 up-to-date
Publishing build scan...
https://gradle.com/s/yuv4ztd62c57k
Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]