See 
<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/122/display/redirect?page=changes>

Changes:

[Moritz Mack] [Spark RDD runner] Remove obsolete unusable 
AggregatorsAccumulator /

[Moritz Mack] [Spark Dataset runner] Remove obsolete unusable 
AggregatorsAccumulator /

[Kenneth Knowles] Suppress checker framework memory constraint warning

[noreply] Updating connector page for BigtableIO

[noreply] Release automation  - Publish GitHub release notes (#24439)

[noreply] Release Automation - Update Python Dependencies (#24358)

[noreply] [Github Actions] - Choose RC Commit Workflow (#24075)


------------------------------------------
[...truncated 338.11 KB...]
Task ':sdks:java:io:google-cloud-platform:compileTestJava' is not up-to-date 
because:
  No history is available.
Loaded cache entry for task 
':sdks:java:io:google-cloud-platform:compileTestJava' with cache key 
e92330599cd35397399fa1f18a964dfa
Resolve mutations for :sdks:java:io:google-cloud-platform:testClasses 
(Thread[Execution **** Thread 4,5,main]) started.
Resolve mutations for :sdks:java:io:google-cloud-platform:testClasses 
(Thread[Execution **** Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:io:google-cloud-platform:testClasses (Thread[included 
builds,5,main]) started.
:sdks:java:io:google-cloud-platform:compileTestJava (Thread[Execution **** 
Thread 7,5,main]) completed. Took 1.094 secs.

> Task :sdks:java:io:google-cloud-platform:testClasses
Skipping task ':sdks:java:io:google-cloud-platform:testClasses' as it has no 
actions.
:sdks:java:io:google-cloud-platform:testClasses (Thread[included 
builds,5,main]) completed. Took 0.0 secs.
Resolve mutations for :sdks:java:io:google-cloud-platform:testJar 
(Thread[Execution **** Thread 4,5,main]) started.
Resolve mutations for :sdks:java:io:google-cloud-platform:testJar 
(Thread[Execution **** Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:io:google-cloud-platform:testJar (Thread[Execution **** Thread 
4,5,main]) started.

> Task :sdks:java:io:google-cloud-platform:testJar
Caching disabled for task ':sdks:java:io:google-cloud-platform:testJar' because:
  Not worth caching
Task ':sdks:java:io:google-cloud-platform:testJar' is not up-to-date because:
  No history is available.
:sdks:java:io:google-cloud-platform:testJar (Thread[Execution **** Thread 
4,5,main]) completed. Took 0.397 secs.
work action resolve beam-sdks-java-io-google-cloud-platform-tests.jar (project 
:sdks:java:io:google-cloud-platform) (Thread[included builds,5,main]) started.
work action null (Thread[included builds,5,main]) completed. Took 0.0 secs.
Resolve mutations for :runners:google-cloud-dataflow-java:compileTestJava 
(Thread[included builds,5,main]) started.
Resolve mutations for :runners:google-cloud-dataflow-java:compileTestJava 
(Thread[included builds,5,main]) completed. Took 0.0 secs.
:runners:google-cloud-dataflow-java:compileTestJava (Thread[included 
builds,5,main]) started.

> Task :runners:google-cloud-dataflow-java:compileTestJava FROM-CACHE
Custom actions are attached to task 
':runners:google-cloud-dataflow-java:compileTestJava'.
Build cache key for task ':runners:google-cloud-dataflow-java:compileTestJava' 
is ffbef56d41dc8904d68ab5c12bac2124
Task ':runners:google-cloud-dataflow-java:compileTestJava' is not up-to-date 
because:
  No history is available.
Loaded cache entry for task 
':runners:google-cloud-dataflow-java:compileTestJava' with cache key 
ffbef56d41dc8904d68ab5c12bac2124
:runners:google-cloud-dataflow-java:compileTestJava (Thread[included 
builds,5,main]) completed. Took 0.514 secs.
Resolve mutations for :runners:google-cloud-dataflow-java:testClasses 
(Thread[Execution **** Thread 7,5,main]) started.
Resolve mutations for :runners:google-cloud-dataflow-java:testClasses 
(Thread[Execution **** Thread 7,5,main]) completed. Took 0.0 secs.
:runners:google-cloud-dataflow-java:testClasses (Thread[Execution **** Thread 
6,5,main]) started.

> Task :runners:google-cloud-dataflow-java:testClasses UP-TO-DATE
Skipping task ':runners:google-cloud-dataflow-java:testClasses' as it has no 
actions.
:runners:google-cloud-dataflow-java:testClasses (Thread[Execution **** Thread 
6,5,main]) completed. Took 0.0 secs.
Resolve mutations for :runners:google-cloud-dataflow-java:testJar 
(Thread[Execution **** Thread 4,5,main]) started.
Resolve mutations for :runners:google-cloud-dataflow-java:testJar 
(Thread[Execution **** Thread 4,5,main]) completed. Took 0.0 secs.
:runners:google-cloud-dataflow-java:testJar (Thread[Execution **** Thread 
7,5,main]) started.

> Task :runners:google-cloud-dataflow-java:testJar
Caching disabled for task ':runners:google-cloud-dataflow-java:testJar' because:
  Not worth caching
Task ':runners:google-cloud-dataflow-java:testJar' is not up-to-date because:
  No history is available.
file or directory 
'<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/runners/google-cloud-dataflow-java/build/resources/test>', 
not found
:runners:google-cloud-dataflow-java:testJar (Thread[Execution **** Thread 
7,5,main]) completed. Took 0.036 secs.
work action resolve beam-runners-google-cloud-dataflow-java-tests.jar (project 
:runners:google-cloud-dataflow-java) (Thread[Execution **** Thread 2,5,main]) 
started.
work action null (Thread[Execution **** Thread 2,5,main]) completed. Took 0.0 
secs.

> Task :sdks:java:io:synthetic:compileJava
Note: 
<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/synthetic/src/main/java/org/apache/beam/sdk/io/synthetic/SyntheticBoundedSource.java>
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Class dependency analysis for incremental compilation took 0.131 secs.
Created classpath snapshot for incremental compilation in 0.216 secs.
Stored cache entry for task ':sdks:java:io:synthetic:compileJava' with cache 
key 6bd5a2a9c97d4a109f4a43146e6a13cd
:sdks:java:io:synthetic:compileJava (Thread[Execution **** Thread 5,5,main]) 
completed. Took 11.765 secs.
Resolve mutations for :sdks:java:io:synthetic:classes (Thread[Execution **** 
Thread 6,5,main]) started.
Resolve mutations for :sdks:java:io:synthetic:classes (Thread[Execution **** 
Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:io:synthetic:classes (Thread[Execution ****,5,main]) started.

> Task :sdks:java:io:synthetic:classes
Skipping task ':sdks:java:io:synthetic:classes' as it has no actions.
:sdks:java:io:synthetic:classes (Thread[Execution ****,5,main]) completed. Took 
0.0 secs.
Resolve mutations for :sdks:java:io:synthetic:jar (Thread[included 
builds,5,main]) started.
Resolve mutations for :sdks:java:io:synthetic:jar (Thread[included 
builds,5,main]) completed. Took 0.0 secs.
:sdks:java:io:synthetic:jar (Thread[Execution **** Thread 6,5,main]) started.

> Task :sdks:java:io:synthetic:jar
Caching disabled for task ':sdks:java:io:synthetic:jar' because:
  Not worth caching
Task ':sdks:java:io:synthetic:jar' is not up-to-date because:
  No history is available.
file or directory 
'<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/synthetic/build/resources/main>', 
not found
:sdks:java:io:synthetic:jar (Thread[Execution **** Thread 6,5,main]) completed. 
Took 0.015 secs.
work action resolve beam-sdks-java-io-synthetic.jar (project 
:sdks:java:io:synthetic) (Thread[Execution ****,5,main]) started.
work action null (Thread[Execution ****,5,main]) completed. Took 0.0 secs.

> Task :sdks:java:io:sparkreceiver:2:compileJava
Class dependency analysis for incremental compilation took 0.145 secs.
Created classpath snapshot for incremental compilation in 0.559 secs.
Stored cache entry for task ':sdks:java:io:sparkreceiver:2:compileJava' with 
cache key f44edbece5352e8a3007afddfd4bb0f8
:sdks:java:io:sparkreceiver:2:compileJava (Thread[Execution **** Thread 
3,5,main]) completed. Took 11.771 secs.
Resolve mutations for :sdks:java:io:sparkreceiver:2:classes (Thread[Execution 
****,5,main]) started.
Resolve mutations for :sdks:java:io:sparkreceiver:2:classes (Thread[Execution 
****,5,main]) completed. Took 0.0 secs.
:sdks:java:io:sparkreceiver:2:classes (Thread[Execution **** Thread 2,5,main]) 
started.

> Task :sdks:java:io:sparkreceiver:2:classes
Skipping task ':sdks:java:io:sparkreceiver:2:classes' as it has no actions.
:sdks:java:io:sparkreceiver:2:classes (Thread[Execution **** Thread 2,5,main]) 
completed. Took 0.0 secs.
Resolve mutations for :sdks:java:io:sparkreceiver:2:compileTestJava 
(Thread[Execution **** Thread 3,5,main]) started.
Resolve mutations for :sdks:java:io:sparkreceiver:2:compileTestJava 
(Thread[Execution **** Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:io:sparkreceiver:2:compileTestJava (Thread[Execution ****,5,main]) 
started.

> Task :sdks:java:io:sparkreceiver:2:compileTestJava
Custom actions are attached to task 
':sdks:java:io:sparkreceiver:2:compileTestJava'.
Build cache key for task ':sdks:java:io:sparkreceiver:2:compileTestJava' is 
a59b55e73ca307ee08fda7bb9791800a
Task ':sdks:java:io:sparkreceiver:2:compileTestJava' is not up-to-date because:
  No history is available.
The input changes require a full rebuild for incremental task 
':sdks:java:io:sparkreceiver:2:compileTestJava'.
Full recompilation is required because no incremental change information is 
available. This is usually caused by clean builds or changing compiler 
arguments.
Compiling with toolchain '/usr/lib/jvm/java-8-openjdk-amd64'.
Compiling with JDK Java compiler API.
Class dependency analysis for incremental compilation took 0.005 secs.
Created classpath snapshot for incremental compilation in 0.419 secs.
Stored cache entry for task ':sdks:java:io:sparkreceiver:2:compileTestJava' 
with cache key a59b55e73ca307ee08fda7bb9791800a
:sdks:java:io:sparkreceiver:2:compileTestJava (Thread[Execution ****,5,main]) 
completed. Took 3.357 secs.
Resolve mutations for :sdks:java:io:sparkreceiver:2:testClasses 
(Thread[Execution **** Thread 2,5,main]) started.
Resolve mutations for :sdks:java:io:sparkreceiver:2:testClasses 
(Thread[Execution **** Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:io:sparkreceiver:2:testClasses (Thread[Execution **** Thread 
4,5,main]) started.

> Task :sdks:java:io:sparkreceiver:2:testClasses
Skipping task ':sdks:java:io:sparkreceiver:2:testClasses' as it has no actions.
:sdks:java:io:sparkreceiver:2:testClasses (Thread[Execution **** Thread 
4,5,main]) completed. Took 0.0 secs.
Resolve mutations for :sdks:java:io:sparkreceiver:2:integrationTest 
(Thread[Execution **** Thread 4,5,main]) started.
Resolve mutations for :sdks:java:io:sparkreceiver:2:integrationTest 
(Thread[Execution **** Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:io:sparkreceiver:2:integrationTest (Thread[Execution **** Thread 
2,5,main]) started.
producer locations for task group 0 (Thread[Execution **** Thread 4,5,main]) 
started.
producer locations for task group 0 (Thread[Execution **** Thread 4,5,main]) 
completed. Took 0.0 secs.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:io:sparkreceiver:2:integrationTest
Custom actions are attached to task 
':sdks:java:io:sparkreceiver:2:integrationTest'.
Build cache key for task ':sdks:java:io:sparkreceiver:2:integrationTest' is 
5048296d496267dbcec2582e9c5ed591
Task ':sdks:java:io:sparkreceiver:2:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: 
<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/sparkreceiver/2>
 Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java 
-DbeamTestPipelineOptions=["--tempRoot=gs://temp-storage-for-perf-tests","--project=apache-beam-testing","--runner=DataflowRunner","--sourceOptions={\"numRecords\":\"5000000\",\"keySizeBytes\":\"1\",\"valueSizeBytes\":\"90\"}","--bigQueryDataset=beam_performance","--bigQueryTable=sparkreceiverioit_results","--influxMeasurement=sparkreceiverioit_results","--influxDatabase=beam_test_metrics","--influxHost=http://10.128.0.96:8086","--rabbitMqBootstrapServerAddress=amqp://guest:[email protected]:5672","--streamName=rabbitMqTestStream","--readTimeout=1800","--numWorkers=5","--autoscalingAlgorithm=NONE","--****HarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/runners/google-cloud-dataflow-java/****/legacy-****/build/libs/beam-runners-google-cloud-dataflow-java-legacy-****-2.45.0-SNAPSHOT.jar>","--region=us-central1"]
 
-Djava.security.manager=****.org.gradle.process.internal.****.child.BootstrapSecurityManager
 
-Dorg.gradle.internal.****.tmpdir=<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/sparkreceiver/2/build/tmp/integrationTest/work>
 -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US 
-Duser.language=en -Duser.variant -ea -cp 
/home/jenkins/.gradle/caches/7.5.1/****Main/gradle-****.jar 
****.org.gradle.process.internal.****.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'
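
For context on the -DbeamTestPipelineOptions value above: Beam integration 
tests read that system property (a JSON array of pipeline flags) and feed it 
through PipelineOptionsFactory, the same parser used for command-line 
arguments. A minimal sketch of that parsing, assuming beam-sdks-java-core and 
the Dataflow runner jar are on the classpath; the flags below are an 
abbreviated subset of the command line above, and the class name is 
illustrative:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    // Sketch: parse a subset of the beamTestPipelineOptions flags above.
    // Resolving --runner=DataflowRunner (and the GCP-specific flags) requires
    // beam-runners-google-cloud-dataflow-java on the classpath.
    public class TestPipelineOptionsSketch {
      public static void main(String[] args) {
        String[] testArgs = {
          "--runner=DataflowRunner",
          "--project=apache-beam-testing",
          "--region=us-central1",
          "--numWorkers=5",
          "--autoscalingAlgorithm=NONE"
        };
        PipelineOptions options = PipelineOptionsFactory.fromArgs(testArgs).create();
        System.out.println(options.getRunner()); // class ...dataflow.DataflowRunner
      }
    }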

org.apache.beam.sdk.io.sparkreceiver.SparkReceiverIOIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in 
[jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.30/e606eac955f55ecf1d8edcccba04eb8ac98088dd/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in 
[jar:<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/runners/google-cloud-dataflow-java/****/legacy-****/build/libs/beam-runners-google-cloud-dataflow-java-legacy-****-2.45.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class>]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]

org.apache.beam.sdk.io.sparkreceiver.SparkReceiverIOIT > 
testSparkReceiverIOReadsInStreamingWithOffset STANDARD_ERROR
    [Test ****] INFO org.apache.beam.sdk.io.sparkreceiver.SparkReceiverIOIT - 
5000000 records were successfully written to RabbitMQ
    [Test ****] INFO org.apache.beam.sdk.io.sparkreceiver.SparkReceiverIO - 
ReadFromSparkReceiverWithOffsetDoFn started reading
    [Test ****] WARN org.apache.beam.runners.dataflow.DataflowRunner - Prefer 
--sdkContainerImage over deprecated legacy option --****HarnessContainerImage.
    [Test ****] INFO 
org.apache.beam.sdk.extensions.gcp.options.GcpOptions$GcpTempLocationFactory - 
No tempLocation specified, attempting to use default bucket: 
dataflow-staging-us-central1-844138762903
    [Test ****] WARN 
org.apache.beam.sdk.extensions.gcp.util.RetryHttpRequestInitializer - Request 
failed with code 409, performed 0 retries due to IOExceptions, performed 0 
retries due to unsuccessful status codes, HTTP framework says request can be 
retried, (caller responsible for retrying): 
https://storage.googleapis.com/storage/v1/b?predefinedAcl=projectPrivate&predefinedDefaultObjectAcl=projectPrivate&project=apache-beam-testing.
 
    [Test ****] INFO 
org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory
 - No stagingLocation provided, falling back to gcpTempLocation
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - 
PipelineOptions.filesToStage was not specified. Defaulting to files from the 
classpath: will stage 425 files. Enable logging at DEBUG level to see which 
files will be staged.
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - 
Executing pipeline on the Dataflow Service, which will have billing 
implications related to Google Compute Engine usage and other Google Cloud 
Services.
    [Test ****] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Uploading 426 files from PipelineOptions.filesToStage to staging location to 
prepare for execution.
    [pool-8-thread-1] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Staging custom dataflow-****.jar as 
beam-runners-google-cloud-dataflow-java-legacy-****-2.45.0-SNAPSHOT-xFm19h2aly_to9IDokH-dsWOLthpsePjrUyQfkm1Arw.jar
    [pool-8-thread-4] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Uploading /tmp/test3068496832685009748.zip to 
gs://dataflow-staging-us-central1-844138762903/temp/staging/test-Y4Cz3FcmaiIWPZLqzG_-Y09tVgqNAsIkYGtD0H5Qs4Q.jar
    [pool-8-thread-21] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Uploading /tmp/test3241555190772990967.zip to 
gs://dataflow-staging-us-central1-844138762903/temp/staging/test-_gVlD97Dne6RMNItfHl-bFbQ0d_pbgnLRPy_MAQGfuM.jar
    [pool-8-thread-1] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Uploading /tmp/main5393108142772844520.zip to 
gs://dataflow-staging-us-central1-844138762903/temp/staging/main-lU28qdkWSU7nsG309MepHEUW1Pt0arqhke_CwnjFN44.jar
    [Test ****] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Staging files complete: 423 files cached, 3 files newly uploaded in 1 seconds
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - Staging 
portable pipeline proto to 
gs://dataflow-staging-us-central1-844138762903/temp/staging/
    [pool-15-thread-1] INFO org.apache.beam.runners.dataflow.util.PackageUtil - 
Uploading <160072 bytes, hash 
4b73aafad5685fdd7254bb676c1d82d66ab779dfca172ceff3042ce49f2eb109> to 
gs://dataflow-staging-us-central1-844138762903/temp/staging/pipeline-S3Oq-tVoX91yVLtnbB2C1mq3ed_KFyzv8wQs5J8usQk.pb
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Read from 
unbounded RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/Impulse as step 
s1
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Read from 
unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Pair
 with initial restriction as step s2
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Read from 
unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Split
 restriction as step s3
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Read from 
unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Explode
 windows as step s4
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Read from 
unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Assign
 unique key/AddKeys/Map as step s5
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Read from 
unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/ProcessKeyedElements
 as step s6
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Measure 
read time as step s7
    [Test ****] INFO 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator - Adding Counting 
element as step s8
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - Dataflow 
SDK version: 2.45.0-SNAPSHOT
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - To 
access the Dataflow monitoring console, please navigate to 
https://console.cloud.google.com/dataflow/jobs/us-central1/2022-12-12_18_34_05-2346558958198319365?project=apache-beam-testing
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - 
Submitted job: 2022-12-12_18_34_05-2346558958198319365
    [Test ****] INFO org.apache.beam.runners.dataflow.DataflowRunner - To 
cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel 
--region=us-central1 2022-12-12_18_34_05-2346558958198319365
    [Test ****] WARN 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:08.743Z: The workflow name is not a valid Cloud Label. Labels 
applied to Cloud resources (such as GCE Instances) for monitoring will be 
labeled with this modified job name: 
sparkreceiverioit0testsparkreceiverioreadsinstreamingwitho-uw18. For the best 
monitoring experience, please name your job with a valid Cloud Label. For 
details, see: 
https://cloud.google.com/compute/docs/labeling-resources#restrictions
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:27.151Z: Worker configuration: e2-standard-4 in us-central1-b.
    [Test ****] WARN 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.170Z: Querying the configuration of Pub/Sub subscription 
_starting_signal/ failed.  If the ack deadline on the subscription has been 
changed from the default value, this may affect performance.  Also, this 
prevents Dataflow from checking for unsupported settings on the subscription.  
See 
https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#accessing_pubsub
 for further details.  If the error is not correctable, the job may be updated 
with a valid Pub/Sub configuration.  Specific error: INVALID_ARGUMENT:  
[type.googleapis.com/util.MessageSetPayload='[dist_proc.dax.internal.StackTraceProto]
 { stack_top_loc { filepath: 
"dist_proc/dax_extensions/workflow/cloud/service/pubsub_subscription_util.cc" 
line: 592 } address: 94640138499019 address: 94640322486194 address: 
94640322452115 address: 94640322439361 address: 94640322443243 address: 
94640322923130 address: 94640258654408 address: 94640281932238 address: 
94640229442746 address: 140661619333081 address: 140661618763423 } 
[dist_proc.dax.internal.TrailProto] { trail_point { source_file_loc { filepath: 
"dist_proc/dax_extensions/workflow/cloud/service/pubsub_subscription_util.cc" 
line: 592 } } } [dist_proc.dax.MessageCode] { argument { string_val: 
"_starting_signal/" key: "subscription_name" } argument { string_val: "Invalid 
resource name given (name=_starting_signal/). Refer to 
https://cloud.google.com/pubsub/docs/admin#resource_names for more 
information." key: "error" } origin_id: 3608160163672915842 
[dist_proc.dax.pubsub_resource_message_ext]: GETTING_PUBSUB_SUBSCRIPTION_FAILED 
}']
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.292Z: Expanding CoGroupByKey operations into optimizable 
parts.
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.327Z: Expanding SplittableProcessKeyed operations into 
optimizable parts.
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.366Z: Expanding GroupByKey operations into streaming 
Read/Write steps
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.433Z: Lifting ValueCombiningMappingFns into 
MergeBucketsMappingFns
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.506Z: Fusing adjacent ParDo, Read, Write, and Flatten 
operations
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.536Z: Fusing consumer Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Pair
 with initial restriction into Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/Impulse
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.569Z: Fusing consumer Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Split
 restriction into Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Pair
 with initial restriction
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.590Z: Fusing consumer Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Explode
 windows into Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Split
 restriction
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.625Z: Fusing consumer Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Assign
 unique key/AddKeys/Map into Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Explode
 windows
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.652Z: Fusing consumer s6/GroupByKeyRaw/WriteStream into 
Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Assign
 unique key/AddKeys/Map
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.675Z: Fusing consumer s6/SplittableProcess into 
s6/GroupByKeyRaw/ReadStream
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.706Z: Fusing consumer Measure read time into 
s6/SplittableProcess
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:29.739Z: Fusing consumer Counting element into Measure read 
time
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:30.141Z: Executing operation 
s6/GroupByKeyRaw/ReadStream+s6/SplittableProcess+Measure read time+Counting 
element
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:30.230Z: Starting 5 ****s in us-central1-b...
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:30.762Z: Executing operation Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/Impulse+Read from 
unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Pair
 with initial restriction+Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Split
 restriction+Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Explode
 windows+Read from unbounded 
RabbitMq/SparkReceiverIO.ReadFromSparkReceiverViaSdf/ParDo(ReadFromSparkReceiverWithOffset)/ParMultiDo(ReadFromSparkReceiverWithOffset)/Assign
 unique key/AddKeys/Map+s6/GroupByKeyRaw/WriteStream
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:34:40.127Z: Your project already contains 100 Dataflow-created 
metric descriptors, so new user metrics of the form custom.googleapis.com/* 
will not be created. However, all user metrics are also available in the metric 
dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, 
you can delete old / unused metric descriptors. See 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
 and 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:35:16.112Z: Autoscaling: Raised the number of ****s to 5 so that 
the pipeline can catch up with its backlog and keep up with its input rate.
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:35:48.515Z: Workers have started successfully.
    [Test ****] INFO 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:35:50.144Z: All ****s have finished the startup processes and 
began to receive work requests.
    [Test ****] ERROR 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:36:37.904Z: java.lang.IllegalArgumentException: Trying to claim 
offset 200 while last attempted was 599
            
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkArgument(Preconditions.java:440)
            
org.apache.beam.sdk.transforms.splittabledofn.OffsetRangeTracker.tryClaim(OffsetRangeTracker.java:92)
            
org.apache.beam.sdk.transforms.splittabledofn.OffsetRangeTracker.tryClaim(OffsetRangeTracker.java:38)
            
org.apache.beam.runners.dataflow.****.repackaged.org.apache.beam.sdk.fn.splittabledofn.RestrictionTrackers$RestrictionTrackerObserver.tryClaim(RestrictionTrackers.java:59)
            
org.apache.beam.sdk.io.sparkreceiver.ReadFromSparkReceiverWithOffsetDoFn.processElement(ReadFromSparkReceiverWithOffsetDoFn.java:339)
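
The IllegalArgumentException above comes from the SDF restriction tracker's 
monotonicity rule: each offset claimed through OffsetRangeTracker.tryClaim 
must be strictly greater than the last attempted offset, so a receiver that 
goes back to offset 200 after attempting 599 fails in exactly this way. A 
minimal self-contained sketch of that rule (simplified, not the SDK class 
itself; field names are illustrative):

    // Simplified model of OffsetRangeTracker's claim check: offsets must
    // strictly increase, otherwise the tracker throws, as seen in the log.
    class OffsetClaimSketch {
      private Long lastAttemptedOffset; // null until the first claim
      private final long rangeTo;       // exclusive end of the restriction

      OffsetClaimSketch(long rangeTo) {
        this.rangeTo = rangeTo;
      }

      boolean tryClaim(long offset) {
        if (lastAttemptedOffset != null && offset <= lastAttemptedOffset) {
          throw new IllegalArgumentException(String.format(
              "Trying to claim offset %s while last attempted was %s",
              offset, lastAttemptedOffset));
        }
        lastAttemptedOffset = offset;
        return offset < rangeTo; // false once the restriction is exhausted
      }
    }

Under this rule, tryClaim(600) after tryClaim(599) succeeds, while 
tryClaim(200) reproduces the error above.
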
    [Test ****] WARN 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:46:47.467Z: Querying the configuration of Pub/Sub subscription 
_starting_signal/ failed.  If the ack deadline on the subscription has been 
changed from the default value, this may affect performance.  Also, this 
prevents Dataflow from checking for unsupported settings on the subscription.  
See 
https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#accessing_pubsub
 for further details.  If the error is not correctable, the job may be updated 
with a valid Pub/Sub configuration.  Specific error: INVALID_ARGUMENT:  
[type.googleapis.com/util.MessageSetPayload='[dist_proc.dax.internal.StackTraceProto]
 { stack_top_loc { filepath: 
"dist_proc/dax_extensions/workflow/cloud/service/pubsub_subscription_util.cc" 
line: 592 } address: 94640138499019 address: 94640322486194 address: 
94640322452115 address: 94640322439361 address: 94640322443243 address: 
94640322923130 address: 94640322432584 address: 94640139389717 address: 
94640229442557 address: 140661619333081 address: 140661618763423 } 
[dist_proc.dax.internal.TrailProto] { trail_point { source_file_loc { filepath: 
"dist_proc/dax_extensions/workflow/cloud/service/pubsub_subscription_util.cc" 
line: 592 } } } [dist_proc.dax.MessageCode] { argument { string_val: 
"_starting_signal/" key: "subscription_name" } argument { string_val: "Invalid 
resource name given (name=_starting_signal/). Refer to 
https://cloud.google.com/pubsub/docs/admin#resource_names for more 
information." key: "error" } origin_id: 11682150624330163884 
[dist_proc.dax.pubsub_resource_message_ext]: GETTING_PUBSUB_SUBSCRIPTION_FAILED 
}']
    [Test ****] WARN 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler - 
2022-12-13T02:58:47.285Z: Querying the configuration of Pub/Sub subscription 
_starting_signal/ failed.  If the ack deadline on the subscription has been 
changed from the default value, this may affect performance.  Also, this 
prevents Dataflow from checking for unsupported settings on the subscription.  
See 
https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#accessing_pubsub
 for further details.  If the error is not correctable, the job may be updated 
with a valid Pub/Sub configuration.  Specific error: INVALID_ARGUMENT:  
[type.googleapis.com/util.MessageSetPayload='[dist_proc.dax.internal.StackTraceProto]
 { stack_top_loc { filepath: 
"dist_proc/dax_extensions/workflow/cloud/service/pubsub_subscription_util.cc" 
line: 592 } address: 94640138499019 address: 94640322486194 address: 
94640322452115 address: 94640322439361 address: 94640322443243 address: 
94640322923130 address: 94640322432584 address: 94640139389717 address: 
94640229442557 address: 140661619333081 address: 140661618763423 } 
[dist_proc.dax.internal.TrailProto] { trail_point { source_file_loc { filepath: 
"dist_proc/dax_extensions/workflow/cloud/service/pubsub_subscription_util.cc" 
line: 592 } } } [dist_proc.dax.MessageCode] { argument { string_val: 
"_starting_signal/" key: "subscription_name" } argument { string_val: "Invalid 
resource name given (name=_starting_signal/). Refer to 
https://cloud.google.com/pubsub/docs/admin#resource_names for more 
information." key: "error" } origin_id: 11682150624330165937 
[dist_proc.dax.pubsub_resource_message_ext]: GETTING_PUBSUB_SUBSCRIPTION_FAILED 
}']
    [Test ****] WARN org.apache.beam.runners.dataflow.DataflowPipelineJob - No 
terminal state was returned within allotted timeout. State value RUNNING

org.apache.beam.sdk.io.sparkreceiver.SparkReceiverIOIT > 
testSparkReceiverIOReadsInStreamingWithOffset FAILED
    java.lang.AssertionError: expected:<5000000> but was:<2468150>
        at org.junit.Assert.fail(Assert.java:89)
        at org.junit.Assert.failNotEquals(Assert.java:835)
        at org.junit.Assert.assertEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:633)
        at 
org.apache.beam.sdk.io.sparkreceiver.SparkReceiverIOIT.testSparkReceiverIOReadsInStreamingWithOffset(SparkReceiverIOIT.java:337)
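
The assertion compares the 5000000 records written to RabbitMQ against an 
element counter collected from the pipeline ("Counting element" above); since 
the job was still RUNNING when the allotted timeout expired, only 2468150 
elements had been counted. A hedged sketch of what such a counter check 
typically looks like in a Beam integration test (the namespace and counter 
names are illustrative, not the test's own):

    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.metrics.MetricNameFilter;
    import org.apache.beam.sdk.metrics.MetricQueryResults;
    import org.apache.beam.sdk.metrics.MetricResult;
    import org.apache.beam.sdk.metrics.MetricsFilter;

    // Sum a named counter over all attempts once the pipeline is done
    // (or once waitUntilFinish has timed out, as happened here).
    public class CounterCheckSketch {
      static long sumCounter(PipelineResult result, String namespace, String name) {
        MetricQueryResults metrics = result.metrics().queryMetrics(
            MetricsFilter.builder()
                .addNameFilter(MetricNameFilter.named(namespace, name))
                .build());
        long total = 0;
        for (MetricResult<Long> counter : metrics.getCounters()) {
          total += counter.getAttempted();
        }
        return total;
      }
      // Usage (illustrative): assertEquals(5_000_000L,
      //     sumCounter(result, "SparkReceiverIOIT", "elements_read"));
    }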

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:io:sparkreceiver:2:integrationTest FAILED

1 test completed, 1 failed
Finished generating test XML results (0.037 secs) into: 
<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/sparkreceiver/2/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.084 secs) into: 
<https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/sparkreceiver/2/build/reports/tests/integrationTest>
:sdks:java:io:sparkreceiver:2:integrationTest (Thread[Execution **** Thread 
2,5,main]) completed. Took 42 mins 17.252 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:io:sparkreceiver:2:integrationTest'.
> There were failing tests. See the report at: 
> <https://ci-beam.apache.org/job/beam_PerformanceTests_SparkReceiver_IO/ws/src/sdks/java/io/sparkreceiver/2/build/reports/tests/integrationTest/index.html>

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --debug option to get more log output.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 42m 56s
138 actionable tasks: 84 executed, 52 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/zo7wzjdvd7dgk

Stopped 2 **** daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


