See
<https://ci-beam.apache.org/job/beam_PostCommit_Python36/3020/display/redirect?page=changes>
Changes:
[samuelw] [BEAM-11034] Avoid build-up of stateful garbage collection timers for
[tobiasz.kedzierski] [BEAM-11036] Add explanatory comment to PR if GA workflow
gets cancelled
[Luke Cwik] [BEAM-10670, BEAM-11028] Ensure that UnboundedSourceAsSDFWrapperFn
[Luke Cwik] [BEAM-10997] Close currentReader in trySplit
[noreply] Merge pull request #13001 from [BEAM-11041] Matching job creation and
[noreply] Add python schema inference docs (#13005)
------------------------------------------
[...truncated 14.65 MB...]
INFO:root:Running
((WriteToText/Write/WriteImpl/GroupByKey/Read)+(ref_AppliedPTransform_WriteToText/Write/WriteImpl/Extract_24))+(ref_PCollection_PCollection_16/Write)
INFO:apache_beam.runners.portability.fn_api_runner.fn_runner:Running
((ref_PCollection_PCollection_10/Read)+(ref_AppliedPTransform_WriteToText/Write/WriteImpl/PreFinalize_25))+(ref_PCollection_PCollection_17/Write)
INFO:root:Running
((ref_PCollection_PCollection_10/Read)+(ref_AppliedPTransform_WriteToText/Write/WriteImpl/PreFinalize_25))+(ref_PCollection_PCollection_17/Write)
INFO:apache_beam.runners.portability.fn_api_runner.fn_runner:Running
(ref_PCollection_PCollection_10/Read)+(ref_AppliedPTransform_WriteToText/Write/WriteImpl/FinalizeWrite_26)
INFO:root:Running
(ref_PCollection_PCollection_10/Read)+(ref_AppliedPTransform_WriteToText/Write/WriteImpl/FinalizeWrite_26)
INFO:root:severity: INFO
timestamp {
  seconds: 1602116150
  nanos: 912749052
}
message: "Starting finalize_write threads with num_shards: 1 (skipped: 0), batches: 1, num_threads: 1"
instruction_id: "bundle_6"
transform_id: "WriteToText/Write/WriteImpl/FinalizeWrite"
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/io/filebasedsink.py:310"
thread: "Thread-14"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 59678792
}
message: "Renamed 1 shards in 0.15 seconds."
instruction_id: "bundle_6"
transform_id: "WriteToText/Write/WriteImpl/FinalizeWrite"
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/io/filebasedsink.py:355"
thread: "Thread-14"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 88979959
}
message: "No more requests from control plane"
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:264"
thread: "MainThread"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 89255094
}
message: "SDK Harness waiting for in-flight requests to complete"
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:265"
thread: "MainThread"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 89378833
}
message: "Closing all cached grpc data channels."
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/data_plane.py:721"
thread: "MainThread"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 89500904
}
message: "Closing all cached gRPC state handlers."
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:868"
thread: "MainThread"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 91191768
}
message: "Done consuming work."
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:277"
thread: "MainThread"
INFO:root:severity: INFO
timestamp {
  seconds: 1602116151
  nanos: 91573953
}
message: "Python sdk harness exiting."
log_location: "/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker_main.py:162"
thread: "MainThread"
INFO:apache_beam.runners.portability.local_job_service:Successfully completed
job in 11.597166538238525 seconds.
INFO:root:Successfully completed job in 11.597166538238525 seconds.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
> Task :sdks:python:test-suites:portable:py36:portableWordCountSparkRunnerBatch
INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:42179
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.6 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python3.6_sdk:2.26.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function lift_combiners at 0x7f6cba43a950> ====================
INFO:apache_beam.utils.subprocess_server:Starting service with ['java' '-jar' '<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/runners/spark/job-server/build/libs/beam-runners-spark-job-server-2.26.0-SNAPSHOT.jar>' '--spark-master-url' 'local[4]' '--artifacts-dir' '/tmp/beam-temp90zep_t_/artifactsgemzf29a' '--job-port' '57799' '--artifact-port' '0' '--expansion-port' '0']
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:00 INFO
org.apache.beam.runners.jobsubmission.JobServerDriver: ArtifactStagingService
started on localhost:46165'
WARNING:root:Waiting for grpc channel to be ready at localhost:57799.
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:00 INFO
org.apache.beam.runners.jobsubmission.JobServerDriver: Java ExpansionService
started on localhost:44455'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:00 INFO
org.apache.beam.runners.jobsubmission.JobServerDriver: JobService started on
localhost:57799'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:00 INFO
org.apache.beam.runners.jobsubmission.JobServerDriver: Job server now running,
terminate with Ctrl+C'
WARNING:root:Waiting for grpc channel to be ready at localhost:57799.
WARNING:root:Waiting for grpc channel to be ready at localhost:57799.
WARNING:root:Waiting for grpc channel to be ready at localhost:57799.
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args:
['--parallelism=2']
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:05 INFO
org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Staging
artifacts for job_e88c10c8-a415-4d3c-b90f-b8e6b0044161.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:05 INFO
org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Resolving
artifacts for
job_e88c10c8-a415-4d3c-b90f-b8e6b0044161.ref_Environment_default_environment_1.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:05 INFO
org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Getting 1
artifacts for job_e88c10c8-a415-4d3c-b90f-b8e6b0044161.null.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:05 INFO
org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Artifacts
fully staged for job_e88c10c8-a415-4d3c-b90f-b8e6b0044161.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:05 INFO
org.apache.beam.runners.spark.SparkJobInvoker: Invoking job
BeamApp-jenkins-1008001605-ef5464f9_99693ea3-372b-4662-93c6-a5412246e505'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:06 INFO
org.apache.beam.runners.jobsubmission.JobInvocation: Starting job invocation
BeamApp-jenkins-1008001605-ef5464f9_99693ea3-372b-4662-93c6-a5412246e505'
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
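A minimal sketch of the pattern the message above recommends, assuming the job server port (57799) started earlier in this log and an illustrative WordCount-style set of transforms; this is not the exact pipeline under test:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Point the PortableRunner at the Spark job server started earlier in this
    # log; the endpoint localhost:57799 is an assumption taken from this run.
    options = PipelineOptions([
        '--runner=PortableRunner',
        '--job_endpoint=localhost:57799',
        '--environment_type=LOOPBACK',
    ])

    # The with-statement runs the pipeline and waits for completion on exit,
    # so the program cannot exit while the LOOPBACK worker is still serving
    # the job -- which is what the log message above is warning about.
    with beam.Pipeline(options=options) as p:
        (p
         | 'Create' >> beam.Create(['to be or not to be'])
         | 'Split' >> beam.FlatMap(str.split)
         | 'Pair' >> beam.Map(lambda word: (word, 1))
         | 'Count' >> beam.CombinePerKey(sum)
         | 'Print' >> beam.Map(print))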
INFO:apache_beam.runners.portability.portable_runner:Job state changed to
STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to
STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to
RUNNING
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:07 INFO
org.apache.beam.runners.spark.SparkPipelineRunner: PipelineOptions.filesToStage
was not specified. Defaulting to files from the classpath'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:07 INFO
org.apache.beam.runners.spark.SparkPipelineRunner: Will stage 7 files. (Enable
logging at DEBUG level to see which files will be staged.)'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:07 INFO
org.apache.beam.runners.spark.translation.SparkContextFactory: Creating a brand
new Spark Context.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:08 WARN
org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:11 INFO
org.apache.beam.runners.spark.SparkPipelineRunner: Running job
BeamApp-jenkins-1008001605-ef5464f9_99693ea3-372b-4662-93c6-a5412246e505 on
Spark master local[4]'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:11 INFO
org.apache.beam.runners.spark.aggregators.AggregatorsAccumulator: Instantiated
aggregators accumulator:'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:11 INFO
org.apache.beam.runners.spark.metrics.MetricsAccumulator: Instantiated metrics
accumulator: MetricQueryResults()'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:11 INFO
org.apache.beam.runners.spark.SparkPipelineRunner: Running job
BeamApp-jenkins-1008001605-ef5464f9_99693ea3-372b-4662-93c6-a5412246e505 on
Spark master local[4]'
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel
for localhost:32787.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with
unbounded number of workers.
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:16 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: Beam
Fn Control client connected with id 1-1'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:16 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-2'
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for
localhost:35039.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for
localhost:38573
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:16 INFO
org.apache.beam.runners.fnexecution.data.GrpcDataService: Beam Fn Data client
connected.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:16 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-3'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 WARN
org.apache.beam.runners.spark.translation.GroupNonMergingWindowsFunctions:
Either coder LengthPrefixCoder(ByteArrayCoder) or GlobalWindow$Coder is not
consistent with equals. That might cause issues on some runners.'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-4'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-5'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-7'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-9'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-8'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-6'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-10'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-11'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-12'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-13'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-14'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-15'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.spark.SparkPipelineRunner: Job
BeamApp-jenkins-1008001605-ef5464f9_99693ea3-372b-4662-93c6-a5412246e505:
Pipeline translated successfully. Computing outputs'
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:17 INFO
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService:
getProcessBundleDescriptor request with id 1-16'
INFO:apache_beam.io.filebasedsink:Starting finalize_write threads with
num_shards: 4 (skipped: 0), batches: 4, num_threads: 4
INFO:apache_beam.io.filebasedsink:Renamed 4 shards in 0.13 seconds.
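As a hedged illustration of where the num_shards value in the finalize_write messages above typically comes from, assuming a hypothetical output prefix rather than the actual test pipeline:

    import apache_beam as beam

    with beam.Pipeline() as p:
        (p
         | beam.Create(['a', 'b', 'c', 'd'])
         # num_shards=4 fixes the number of output files; finalize_write then
         # renames the four temporary shard files into place, producing the
         # "Renamed 4 shards" message seen in the log above.
         | beam.io.WriteToText('/tmp/output/counts', num_shards=4))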
INFO:apache_beam.utils.subprocess_server:b'20/10/08 00:16:18 INFO
org.apache.beam.runners.spark.SparkPipelineRunner: Job
BeamApp-jenkins-1008001605-ef5464f9_99693ea3-372b-4662-93c6-a5412246e505
finished.'
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane.
Traceback (most recent call last):
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/sdks/python/apache_beam/runners/worker/data_plane.py>", line 581, in _read_inputs
    for elements in elements_iterator:
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 416, in __next__
    return self._next()
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 706, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "Socket closed"
    debug_error_string = "{"created":"@1602116178.647469412","description":"Error received from peer ipv4:127.0.0.1:38573","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Socket closed","grpc_status":14}"
>
Exception in thread read_state:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/sdks/python/apache_beam/runners/worker/sdk_worker.py>", line 948, in pull_responses
    for response in responses:
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 416, in __next__
    return self._next()
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 706, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "Socket closed"
    debug_error_string = "{"created":"@1602116178.647485385","description":"Error received from peer ipv4:127.0.0.1:35039","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Socket closed","grpc_status":14}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/sdks/python/apache_beam/runners/worker/data_plane.py>", line 598, in <lambda>
    target=lambda: self._read_inputs(elements_iterator),
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/sdks/python/apache_beam/runners/worker/data_plane.py>", line 581, in _read_inputs
    for elements in elements_iterator:
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 416, in __next__
    return self._next()
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 706, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "Socket closed"
    debug_error_string = "{"created":"@1602116178.647469412","description":"Error received from peer ipv4:127.0.0.1:38573","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Socket closed","grpc_status":14}"
>
Exception in thread run_worker_1-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/sdks/python/apache_beam/runners/worker/sdk_worker.py>", line 254, in run
    for work_request in self._control_stub.Control(get_responses()):
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 416, in __next__
    return self._next()
  File "<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/build/gradleenv/2022703440/lib/python3.6/site-packages/grpc/_channel.py>", line 706, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "Socket closed"
    debug_error_string = "{"created":"@1602116178.647504087","description":"Error received from peer ipv4:127.0.0.1:32787","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Socket closed","grpc_status":14}"
>
> Task :sdks:python:test-suites:portable:py36:postCommitPy36
FAILURE: Build failed with an exception.
* Where:
Script '<https://ci-beam.apache.org/job/beam_PostCommit_Python36/ws/src/sdks/python/test-suites/dataflow/common.gradle>' line: 118
* What went wrong:
Execution failed for task ':sdks:python:test-suites:dataflow:py36:postCommitIT'.
> Process 'command 'sh'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug
option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with
Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See
https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 15m 54s
171 actionable tasks: 142 executed, 25 from cache, 4 up-to-date
Publishing build scan...
https://gradle.com/s/dim6lkr77akae
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure