See 
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/1547/display/redirect?page=changes>

Changes:

[Ismaël Mejía] [website] Clarify JIRA priority for security issues (CVEs)


------------------------------------------
[...truncated 391.65 KB...]
    SLF4J: Found binding in 
[jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.29.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class>]
    SLF4J: Found binding in 
[jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
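
Two StaticLoggerBinder classes are on the classpath here (one shaded into the 
legacy Dataflow worker jar, one from slf4j-jdk14), so the binding SLF4J picks 
depends on classpath order. A minimal sketch to confirm which binding actually 
won at runtime (the class name Slf4jBindingCheck is illustrative):

    import org.slf4j.LoggerFactory;

    public class Slf4jBindingCheck {
        public static void main(String[] args) {
            // SLF4J exposes the factory it bound to; with slf4j-jdk14 winning,
            // this prints org.slf4j.impl.JDK14LoggerFactory, as in the log above.
            System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
        }
    }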

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDirectReadMethod STANDARD_ERROR
    Jan 30, 2021 6:44:56 PM 
org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory
 create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Jan 30, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner 
fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files 
from the classpath: will stage 228 files. Enable logging at DEBUG level to see 
which files will be staged.
    Jan 30, 2021 6:44:59 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2021 6:44:59 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2021 6:44:59 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, 
`HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND 
`HACKER_NEWS`.`score` > 2
    Jan 30, 2021 6:44:59 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2021 6:44:59 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2021 6:44:59 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], 
expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], 
expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], 
expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], 
$condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for 
BeamIOSourceRel_3/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output 
[PCollection@758046857]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder 
for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: 
PTransform.getOutputCoder called.
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:280)
        at 
org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at 
org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:153)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:546)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:498)
        at 
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:72)
        at 
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
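
Both non-push-down tests fail the same way: the Row output of 
ParDo(RowMonitor) reaches pipeline validation without a schema, so no coder 
can be inferred. The remedy the message points at is PCollection.setRowSchema. 
A minimal, self-contained sketch of that pattern (field names mirror the 
query's projection; the INT64 choice for score and all variable names are 
assumptions, not the test's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaExample {
        public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            // Schema mirroring the query's projection (field types assumed).
            Schema schema = Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

            Row row = Row.withSchema(schema)
                .addValues("alice", "story", "example", 3L)
                .build();

            // A transform emitting Row has no inferable coder -- the exact
            // failure in the log -- until setRowSchema supplies the schema
            // (and with it a RowCoder).
            PCollection<Row> rows = p
                .apply(Create.of(row).withRowSchema(schema))
                .apply(MapElements.into(TypeDescriptor.of(Row.class)).via((Row r) -> r))
                .setRowSchema(schema);

            p.run().waitUntilFinish();
        }
    }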

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDefaultMethod STANDARD_ERROR
    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, 
`HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND 
`HACKER_NEWS`.`score` > 2
    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 30, 2021 6:45:00 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], 
expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], 
expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], 
expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], 
$condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for 
BeamIOSourceRel_95/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output 
[PCollection@816948552]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder 
for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: 
PTransform.getOutputCoder called.
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:280)
        at 
org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at 
org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:153)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:546)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:498)
        at 
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:72)
        at 
org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, 
`HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND 
`HACKER_NEWS`.`score` > 2
    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..3=[{inputs}], proj#0..3=[{exprs}])
      BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, 
type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 
'job'))>($5, 2)}, unsupported{}]])

    Jan 30, 2021 6:45:01 PM 
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable 
buildIOReader
    INFO: Pushing down the following filter: (`type` = 'story' OR `type` = 
'job') AND `score` > 2
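
This is why the push-down variant passes where the other two fail at 
construction time: the planner collapses the projection and filter into 
BeamPushDownIOSourceRel, so BigQuery returns only the four used fields and the 
matching rows. Hand-written, the equivalent Storage API read would look 
roughly like this sketch (the table name is a placeholder):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownReadSketch {
        public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            // DIRECT_READ plus selected fields and a row restriction is what
            // the planner's push-down amounts to: projection and filter are
            // evaluated by the BigQuery Storage API, not by the pipeline.
            PCollection<TableRow> rows = p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // placeholder
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

            p.run().waitUntilFinish();
        }
    }
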
    Jan 30, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing 
implications related to Google Compute Engine usage and other Google Cloud 
Services.
    Jan 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil 
stageClasspathElements
    INFO: Uploading 229 files from PipelineOptions.filesToStage to staging 
location to prepare for execution.
    Jan 30, 2021 6:45:05 PM 
org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes 
forFileToStage
    INFO: Staging custom dataflow-worker.jar as 
beam-runners-google-cloud-dataflow-java-legacy-worker-2.29.0-SNAPSHOT-hcrHl3nIcMxtK6uIC61YfEhyUatJnS8yaBhg51-cb3k.jar
    Jan 30, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil 
tryStagePackage
    INFO: Uploading 
/home/jenkins/.gradle/caches/6.8/workerMain/gradle-worker.jar to 
gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-9zzjInCB8StHeYkrV6AW3xFkZhIt0mYx2RCXliCw7CA.jar
    Jan 30, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil 
tryStagePackage
    INFO: Uploading /tmp/test1380693237358355222.zip to 
gs://temp-storage-for-perf-tests/loadtests/staging/test-9tqy4E5hMsFr2lB8q9hDtYexxNsI4Xq4rkwQB2CN1hg.jar
    Jan 30, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil 
tryStagePackage
    INFO: Uploading 
/home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.oauth-client/google-oauth-client-java6/1.31.0/9a08719a6ce044211203d9ab3fccc2514d254998/google-oauth-client-java6-1.31.0.jar
 to 
gs://temp-storage-for-perf-tests/loadtests/staging/google-oauth-client-java6-1.31.0-nGYS21dwrwkEoUPYFWsIkA4BWwYgS-D7Dsqi_bA4tdU.jar
    Jan 30, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil 
tryStagePackage
    INFO: Uploading 
/home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.oauth-client/google-oauth-client/1.31.0/bf1cfbbaa2497d0a841ea0363df4a61170d5823b/google-oauth-client-1.31.0.jar
 to 
gs://temp-storage-for-perf-tests/loadtests/staging/google-oauth-client-1.31.0-9fiaR9DCEOJ3VPxypLVvfxhohPwMB0pyoLY-Gc52QtA.jar
    Jan 30, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil 
stageClasspathElements
    INFO: Staging files complete: 225 files cached, 4 files newly uploaded in 0 
seconds
    Jan 30, 2021 6:45:05 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_229/Read Input BQ Rows with 
push-down/Read(BigQueryStorageTableSource) as step s1
    Jan 30, 2021 6:45:05 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_229/ParDo(RowMonitor) as step s2
    Jan 30, 2021 6:45:05 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamCalcRel_285/ParDo(Calc) as step s3
    Jan 30, 2021 6:45:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s4
    Jan 30, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging pipeline description to 
gs://temp-storage-for-perf-tests/loadtests/staging/
    Jan 30, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil 
tryStagePackage
    INFO: Uploading <100353 bytes, hash 
4f2bae7efc235011a93505efe590b34c0c3c752c2177686bf81204fa5d2dcf34> to 
gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-TyuufvwjUBGpNQXv5ZCzTAw8dSwhd2hr-BIE-l0tzzQ.pb
    Jan 30, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.29.0-SNAPSHOT
    Jan 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to 
https://console.cloud.google.com/dataflow/jobs/us-central1/2021-01-30_10_45_06-5892527343795041632?project=apache-beam-testing
    Jan 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-01-30_10_45_06-5892527343795041632
    Jan 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel 
--region=us-central1 2021-01-30_10_45_06-5892527343795041632
    Jan 30, 2021 6:45:08 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-01-30T18:45:06.527Z: The requested max number of workers (5) 
is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:15.977Z: Worker configuration: n1-standard-1 in 
us-central1-f.
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.704Z: Expanding CoGroupByKey operations into 
optimizable parts.
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.771Z: Expanding GroupByKey operations into 
optimizable parts.
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.794Z: Lifting ValueCombiningMappingFns into 
MergeBucketsMappingFns
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.858Z: Fusing adjacent ParDo, Read, Write, and 
Flatten operations
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.893Z: Fusing consumer 
BeamPushDownIOSourceRel_229/ParDo(RowMonitor) into 
BeamPushDownIOSourceRel_229/Read Input BQ Rows with 
push-down/Read(BigQueryStorageTableSource)
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.925Z: Fusing consumer BeamCalcRel_285/ParDo(Calc) 
into BeamPushDownIOSourceRel_229/ParDo(RowMonitor)
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:16.951Z: Fusing consumer ParDo(TimeMonitor) into 
BeamCalcRel_285/ParDo(Calc)
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:17.307Z: Executing operation 
BeamPushDownIOSourceRel_229/Read Input BQ Rows with 
push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_229/ParDo(RowMonitor)+BeamCalcRel_285/ParDo(Calc)+ParDo(TimeMonitor)
    Jan 30, 2021 6:45:18 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:17.375Z: Starting 5 workers in us-central1-f...
    Jan 30, 2021 6:45:53 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:51.017Z: Your project already contains 100 
Dataflow-created metric descriptors, so new user metrics of the form 
custom.googleapis.com/* will not be created. However, all user metrics are also 
available in the metric dataflow.googleapis.com/job/user_counter. If you rely 
on the custom metrics, you can delete old / unused metric descriptors. See 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
 and 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Jan 30, 2021 6:45:53 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:52.665Z: Autoscaling: Raised the number of workers 
to 1 based on the rate of progress in the currently running stage(s).
    Jan 30, 2021 6:45:53 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:45:52.695Z: Resized worker pool to 1, though goal was 
5.  This could be a quota issue.
    Jan 30, 2021 6:46:04 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:46:03.103Z: Autoscaling: Raised the number of workers 
to 5 based on the rate of progress in the currently running stage(s).
    Jan 30, 2021 6:46:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:46:17.838Z: Workers have started successfully.
    Jan 30, 2021 6:46:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:46:17.870Z: Workers have started successfully.
    Jan 30, 2021 6:46:48 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:46:47.082Z: Finished operation 
BeamPushDownIOSourceRel_229/Read Input BQ Rows with 
push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_229/ParDo(RowMonitor)+BeamCalcRel_285/ParDo(Calc)+ParDo(TimeMonitor)
    Jan 30, 2021 6:46:48 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:46:47.217Z: Cleaning up.
    Jan 30, 2021 6:46:48 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:46:47.311Z: Stopping worker pool...
    Jan 30, 2021 6:47:34 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:47:34.042Z: Autoscaling: Resized worker pool from 5 to 
0.
    Jan 30, 2021 6:47:34 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-01-30T18:47:34.083Z: Worker pool stopped.
    Jan 30, 2021 6:47:45 PM 
org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-01-30_10_45_06-5892527343795041632 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9832a483-f2b7-423b-b822-5a6c64435c38 and 
timestamp: 2021-01-30T18:47:45.225000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.604

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT 
> readUsingDirectReadMethodPushDown STANDARD_ERROR
    Jan 30, 2021 6:47:45 PM 
org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be 
published.
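
The push-down test itself passed; its metrics are simply not exported because 
the InfluxDB publisher was given no measurement/database. If publishing were 
wanted, the settings would come from 
org.apache.beam.sdk.testutils.publishing.InfluxDBSettings, roughly as below 
(host, database, and measurement values are placeholders, and the option 
wiring is an assumption based on Beam's test utilities):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
        public static void main(String[] args) {
            // Placeholder values; real runs typically wire these from pipeline
            // options such as --influxDatabase/--influxMeasurement (assumption).
            InfluxDBSettings settings = InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
        }
    }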

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: 
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: 
<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
Watching 2248 directories to track changes
Watching 2254 directories to track changes
Watching 2255 directories to track changes
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker 
for ':' Thread 7,5,main]) completed. Took 2 mins 52.946 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task 
':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: 
> <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to 
get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See 
https://docs.gradle.org/6.8/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 3m 28s
148 actionable tasks: 93 executed, 55 from cache
Watching 2255 directories to track changes

Publishing build scan...
https://gradle.com/s/s5rmdnc3vzl5y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
