See 
<https://builds.apache.org/job/beam_PostCommit_PortableJar_Flink/944/display/redirect?page=changes>

Changes:

[ehudm] [BEAM-8269] Convert from_callable type hints to Beam types

[ehudm] Fix _get_args for typing.Tuple in Py3.5.2

[ehudm] Fix cleanPython race with :clean

[ehudm] Dicts are not valid DoFn.process return values

[chamikara] Makes environment ID a top level attribute of PTransform.

[angoenka] [BEAM-8944] Change to use single thread in py sdk bundle progress report

[aaltay] [BEAM-8335] Background caching job (#10405)


------------------------------------------
[...truncated 334.15 KB...]
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682) switched from SCHEDULED to DEPLOYING.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) (attempt #0) to 
325a6eb3-98ab-4fa3-a8fd-5b61695e4b11 @ localhost (dataPort=-1)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Received task MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1).
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - MapPartition (MapPartition at 
[3]assert_that/{Group, Unkey, Match}) (1/1) (d67f6f3c79bb9d29f357065f7f310682) 
switched from CREATED to DEPLOYING.
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream 
leak safety net for task MapPartition (MapPartition at [3]assert_that/{Group, 
Unkey, Match}) (1/1) (d67f6f3c79bb9d29f357065f7f310682) [DEPLOYING]
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task 
MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682) [DEPLOYING].
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: 
MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682) [DEPLOYING].
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - MapPartition (MapPartition at 
[3]assert_that/{Group, Unkey, Match}) (1/1) (d67f6f3c79bb9d29f357065f7f310682) 
switched from DEPLOYING to RUNNING.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682) switched from DEPLOYING to RUNNING.
[GroupReduce (GroupReduce at assert_that/Group/GroupByKey) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - GroupReduce (GroupReduce at 
assert_that/Group/GroupByKey) (1/1) (5646d554418196e9ac8335f7cbe3435c) switched 
from RUNNING to FINISHED.
[GroupReduce (GroupReduce at assert_that/Group/GroupByKey) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Freeing task resources for 
GroupReduce (GroupReduce at assert_that/Group/GroupByKey) (1/1) 
(5646d554418196e9ac8335f7cbe3435c).
[GroupReduce (GroupReduce at assert_that/Group/GroupByKey) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are 
closed for task GroupReduce (GroupReduce at assert_that/Group/GroupByKey) (1/1) 
(5646d554418196e9ac8335f7cbe3435c) [FINISHED]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and 
sending final execution state FINISHED to JobManager for task GroupReduce 
(GroupReduce at assert_that/Group/GroupByKey) 5646d554418196e9ac8335f7cbe3435c.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - GroupReduce 
(GroupReduce at assert_that/Group/GroupByKey) (1/1) 
(5646d554418196e9ac8335f7cbe3435c) switched from RUNNING to FINISHED.
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
WARN org.apache.beam.runners.fnexecution.environment.DockerCommand - Unable to 
pull docker image apachebeam/python3.6_sdk:2.19.0.dev, cause: Received exit 
code 1 for command 'docker pull apachebeam/python3.6_sdk:2.19.0.dev'. stderr: 
Error response from daemon: manifest for apachebeam/python3.6_sdk:2.19.0.dev 
not found
[grpc-default-executor-0] INFO 
org.apache.beam.runners.fnexecution.artifact.AbstractArtifactRetrievalService - 
GetManifest for BEAM-PIPELINE/pipeline/artifact-manifest.json
[grpc-default-executor-0] INFO 
org.apache.beam.runners.fnexecution.artifact.AbstractArtifactRetrievalService - 
Manifest at BEAM-PIPELINE/pipeline/artifact-manifest.json has 1 artifact 
locations
[grpc-default-executor-0] INFO 
org.apache.beam.runners.fnexecution.artifact.AbstractArtifactRetrievalService - 
GetManifest for BEAM-PIPELINE/pipeline/artifact-manifest.json -> 1 artifacts
[grpc-default-executor-0] INFO 
org.apache.beam.runners.fnexecution.logging.GrpcLoggingService - Beam Fn 
Logging client connected.
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker_main.py:106
 - Logging handler created.
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker_main.py:132
 - semi_persistent_directory: /tmp
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker_main.py:88
 - Status HTTP server running at localhost:40315
[grpc-default-executor-0] WARN 
/usr/local/lib/python3.6/site-packages/apache_beam/options/pipeline_options.py:287
 - Discarding unparseable args: ['--app_name=None', 
'--direct_runner_use_stacked_bundle', '--job_server_timeout=60', 
'--options_id=1', '--pipeline_type_check', 
'--retrieval_service_type=CLASSLOADER'] 
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker_main.py:144
 - Python sdk harness started with pipeline_options: {'job_name': 
'BeamApp-jenkins-1220023056-759a7d82', 'experiments': ['beam_fn_api'], 
'save_main_session': True, 'sdk_location': 'container', 'environment_type': 
'DOCKER', 'environment_config': 'apachebeam/python3.6_sdk:2.19.0.dev', 
'sdk_worker_parallelism': '1', 'environment_cache_millis': '0', 'job_port': 
'0', 'artifact_port': '0', 'expansion_port': '0', 'flink_job_server_jar': 
'/home/jenkins/jenkins-slave/workspace/beam_PostCommit_PortableJar_Flink/src/runners/flink/1.9/job-server/build/libs/beam-runners-flink-1.9-job-server-2.19.0-SNAPSHOT.jar',
 'flink_submit_uber_jar': True}
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/statecache.py:137
 - Creating state cache with size 0
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:85
 - Creating insecure control channel for localhost:33027.
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:93
 - Control channel established.
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:114
 - Initializing SDKHarness with unbounded number of workers.
[grpc-default-executor-0] INFO 
org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService - 
Beam Fn Control client connected with id 2-1
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:530
 - Creating insecure state channel for localhost:32865.
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/sdk_worker.py:537
 - State channel established.
[grpc-default-executor-0] INFO 
/usr/local/lib/python3.6/site-packages/apache_beam/runners/worker/data_plane.py:416
 - Creating client data channel for localhost:36275
[grpc-default-executor-0] INFO 
org.apache.beam.runners.fnexecution.data.GrpcDataService - Beam Fn Data client 
connected.
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory - 
Closing environment urn: "beam:env:docker:v1"
payload: "\n#apachebeam/python3.6_sdk:2.19.0.dev"

[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.beam.runners.fnexecution.logging.GrpcLoggingService - 1 Beam Fn 
Logging clients still connected during shutdown.
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer - Hanged up for 
unknown endpoint.
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.beam.runners.fnexecution.environment.DockerContainerEnvironment 
- Closing Docker container 
7c46325f82f967ef3d58597981a0a36d3b53df920f20a96f7481270a0901b072. Logs:
2019/12/20 02:31:22 Initializing python harness: /opt/apache/beam/boot --id=2-1 
--logging_endpoint=localhost:38497 --artifact_endpoint=localhost:39179 
--provision_endpoint=localhost:45439 --control_endpoint=localhost:33027
2019/12/20 02:31:22 Installing setup packages ...
2019/12/20 02:31:22 Found artifact: pickled_main_session
2019/12/20 02:31:22 Executing: python -m 
apache_beam.runners.worker.sdk_worker_main
2019/12/20 02:31:24 Python exited: <nil>
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
WARN org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory - 
Error cleaning up servers urn: "beam:env:docker:v1"
payload: "\n#apachebeam/python3.6_sdk:2.19.0.dev"

java.io.IOException: Received exit code 1 for command 'docker kill 
7c46325f82f967ef3d58597981a0a36d3b53df920f20a96f7481270a0901b072'. stderr: 
Error response from daemon: Cannot kill container: 
7c46325f82f967ef3d58597981a0a36d3b53df920f20a96f7481270a0901b072: Container 
7c46325f82f967ef3d58597981a0a36d3b53df920f20a96f7481270a0901b072 is not running
        at 
org.apache.beam.runners.fnexecution.environment.DockerCommand.runShortCommand(DockerCommand.java:234)
        at 
org.apache.beam.runners.fnexecution.environment.DockerCommand.runShortCommand(DockerCommand.java:168)
        at 
org.apache.beam.runners.fnexecution.environment.DockerCommand.killContainer(DockerCommand.java:148)
        at 
org.apache.beam.runners.fnexecution.environment.DockerContainerEnvironment.close(DockerContainerEnvironment.java:93)
        at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.$closeResource(DefaultJobBundleFactory.java:476)
        at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.close(DefaultJobBundleFactory.java:476)
        at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.unref(DefaultJobBundleFactory.java:491)
        at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.access$1800(DefaultJobBundleFactory.java:431)
        at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.lambda$createEnvironmentCaches$3(DefaultJobBundleFactory.java:168)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1809)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3462)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3438)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.clear(LocalCache.java:3215)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.clear(LocalCache.java:4270)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalManualCache.invalidateAll(LocalCache.java:4909)
        at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.close(DefaultJobBundleFactory.java:258)
        at 
org.apache.beam.runners.fnexecution.control.DefaultExecutableStageContext.close(DefaultExecutableStageContext.java:43)
        at 
org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.closeActual(ReferenceCountingExecutableStageContextFactory.java:208)
        at 
org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.access$200(ReferenceCountingExecutableStageContextFactory.java:184)
        at 
org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory.release(ReferenceCountingExecutableStageContextFactory.java:173)
        at 
org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory.scheduleRelease(ReferenceCountingExecutableStageContextFactory.java:132)
        at 
org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory.access$300(ReferenceCountingExecutableStageContextFactory.java:44)
        at 
org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.close(ReferenceCountingExecutableStageContextFactory.java:204)
        at 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.$closeResource(FlinkExecutableStageFunction.java:204)
        at 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.close(FlinkExecutableStageFunction.java:290)
        at 
org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
        at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:508)
        at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
        at java.lang.Thread.run(Thread.java:748)
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - MapPartition (MapPartition at 
[3]assert_that/{Group, Unkey, Match}) (1/1) (d67f6f3c79bb9d29f357065f7f310682) 
switched from RUNNING to FINISHED.
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for 
MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8) switched from 
SCHEDULED to DEPLOYING.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying DataSink 
(DiscardingOutput) (1/1) (attempt #0) to 325a6eb3-98ab-4fa3-a8fd-5b61695e4b11 @ 
localhost (dataPort=-1)
[MapPartition (MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1)] 
INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem 
streams are closed for task MapPartition (MapPartition at 
[3]assert_that/{Group, Unkey, Match}) (1/1) (d67f6f3c79bb9d29f357065f7f310682) 
[FINISHED]
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Received task DataSink 
(DiscardingOutput) (1/1).
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - DataSink (DiscardingOutput) (1/1) 
(4c47851f9d2a03f66544ccf6aeb00da8) switched from CREATED to DEPLOYING.
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak 
safety net for task DataSink (DiscardingOutput) (1/1) 
(4c47851f9d2a03f66544ccf6aeb00da8) [DEPLOYING]
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and 
sending final execution state FINISHED to JobManager for task MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) 
d67f6f3c79bb9d29f357065f7f310682.
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task DataSink 
(DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8) [DEPLOYING].
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Registering task at network: 
DataSink (DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8) 
[DEPLOYING].
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - DataSink (DiscardingOutput) (1/1) 
(4c47851f9d2a03f66544ccf6aeb00da8) switched from DEPLOYING to RUNNING.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - MapPartition 
(MapPartition at [3]assert_that/{Group, Unkey, Match}) (1/1) 
(d67f6f3c79bb9d29f357065f7f310682) switched from RUNNING to FINISHED.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8) switched from 
DEPLOYING to RUNNING.
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - DataSink (DiscardingOutput) (1/1) 
(4c47851f9d2a03f66544ccf6aeb00da8) switched from RUNNING to FINISHED.
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Freeing task resources for DataSink 
(DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8).
[DataSink (DiscardingOutput) (1/1)] INFO 
org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are 
closed for task DataSink (DiscardingOutput) (1/1) 
(4c47851f9d2a03f66544ccf6aeb00da8) [FINISHED]
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and 
sending final execution state FINISHED to JobManager for task DataSink 
(DiscardingOutput) 4c47851f9d2a03f66544ccf6aeb00da8.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(DiscardingOutput) (1/1) (4c47851f9d2a03f66544ccf6aeb00da8) switched from 
RUNNING to FINISHED.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
BeamApp-jenkins-1220023056-759a7d82 (bf7cb94c1f5be340a900c1ba1f6fade3) switched 
from state RUNNING to FINISHED.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Job 
bf7cb94c1f5be340a900c1ba1f6fade3 reached globally terminal state FINISHED.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Stopping the JobMaster for job 
BeamApp-jenkins-1220023056-759a7d82(bf7cb94c1f5be340a900c1ba1f6fade3).
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTable - Free slot 
TaskSlot(index:0, state:ACTIVE, resource profile: 
ResourceProfile{cpuCores=1.7976931348623157E308, heapMemoryInMB=2147483647, 
directMemoryInMB=2147483647, nativeMemoryInMB=2147483647, 
networkMemoryInMB=2147483647, managedMemoryInMB=16277}, allocationId: 
2bff3e356d5fa06ecba7db27d5bb99dd, jobId: bf7cb94c1f5be340a900c1ba1f6fade3).
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.JobLeaderService - Remove job 
bf7cb94c1f5be340a900c1ba1f6fade3 from job leader monitoring.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Suspending SlotPool.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Close ResourceManager connection 
948f60e2aa560440d425bdefed83fd40: JobManager is shutting down..
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Close JobManager 
connection for job bf7cb94c1f5be340a900c1ba1f6fade3.
[flink-akka.actor.default-dispatcher-8] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Stopping SlotPool.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Disconnect 
job manager 85ff351c2f0cd2549f44bef8c3f543df@akka://flink/user/jobmanager_1 for 
job bf7cb94c1f5be340a900c1ba1f6fade3 from the resource manager.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Close JobManager 
connection for job bf7cb94c1f5be340a900c1ba1f6fade3.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.JobLeaderService - Cannot reconnect to 
job bf7cb94c1f5be340a900c1ba1f6fade3 because it is not registered.

> Task :sdks:python:container:py37:docker
Removing intermediate container 753dcc413b04
 ---> bc2f5ad152e2
Step 11/13 : RUN pip freeze --all
 ---> Running in faaa9f8bfb13
absl-py==0.9.0
apache-beam==2.19.0.dev0
astor==0.8.1
avro-python3==1.8.2
cachetools==3.1.1
certifi==2019.11.28
chardet==3.0.4
crcmod==1.7
Cython==0.29.10
dill==0.3.1.1
docopt==0.6.2
fastavro==0.21.24
fasteners==0.15
future==0.17.1
gast==0.3.2
google-api-core==1.15.0
google-apitools==0.5.28
google-auth==1.10.0
google-cloud-bigquery==1.17.0
google-cloud-bigtable==0.32.1
google-cloud-core==1.0.2
google-cloud-datastore==1.7.4
google-cloud-pubsub==0.39.1
google-pasta==0.1.8
google-resumable-media==0.5.0
googleapis-common-protos==1.6.0
grpc-google-iam-v1==0.11.4
grpcio==1.22.0
h5py==2.10.0
hdfs==2.5.6
httplib2==0.12.0
idna==2.8
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
Markdown==3.1.1
mock==2.0.0
monotonic==1.5
nose==1.3.7
numpy==1.16.4
oauth2client==3.0.0
pandas==0.24.2
pbr==5.4.4
pip==19.3.1
proto-google-cloud-datastore-v1==0.90.4
protobuf==3.9.0
protorpc==0.12.0
pyarrow==0.15.1
pyasn1==0.4.8
pyasn1-modules==0.2.7
pydot==1.4.1
PyHamcrest==1.9.0
pymongo==3.8.0
pyparsing==2.4.5
python-dateutil==2.8.1
python-gflags==3.0.6
python-snappy==0.5.4
pytz==2019.1
PyYAML==5.1
requests==2.22.0
rsa==4.0
scipy==1.2.2
setuptools==41.6.0
six==1.13.0
tenacity==6.0.0
tensorboard==1.14.0
tensorflow==1.14.0
tensorflow-estimator==1.14.0
termcolor==1.1.0
typing==3.7.4.1
typing-extensions==3.7.4.1
urllib3==1.25.7
Werkzeug==0.16.0
wheel==0.33.6
wrapt==1.11.2
Removing intermediate container faaa9f8bfb13
 ---> b87224683c54
Step 12/13 : ADD target/launcher/linux_amd64/boot /opt/apache/beam/
 ---> 6232cc49dbd0
Step 13/13 : ENTRYPOINT ["/opt/apache/beam/boot"]
 ---> Running in 6fbc3278d8de
Removing intermediate container 6fbc3278d8de
 ---> 724d166fa68b
Successfully built 724d166fa68b
Successfully tagged apachebeam/python3.7_sdk:2.19.0.dev
FATAL: command execution failed
hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on 
JNLP4-connect connection from 
165.171.154.104.bc.googleusercontent.com/104.154.171.165:42092 failed. The 
channel is closing down or has closed down
        at hudson.remoting.Channel.call(Channel.java:950)
        at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
        at com.sun.proxy.$Proxy145.isAlive(Unknown Source)
        at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1150)
        at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1142)
        at hudson.Launcher$ProcStarter.join(Launcher.java:470)
        at hudson.plugins.gradle.Gradle.perform(Gradle.java:317)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
        at hudson.model.Build$BuildExecution.build(Build.java:206)
        at hudson.model.Build$BuildExecution.doRun(Build.java:163)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
        at hudson.model.Run.execute(Run.java:1815)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
Caused by: java.nio.channels.ClosedChannelException
        at 
org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
        at 
org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
        at 
org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
        at 
org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
        at 
org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
        at 
org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
        at 
org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
        at 
org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
        at 
org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
        at 
org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
        at 
org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:784)
        at 
org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
        at 
org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:314)
        at hudson.remoting.Channel.close(Channel.java:1452)
        at hudson.remoting.Channel.close(Channel.java:1405)
        at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:847)
        at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:108)
        at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:756)
        at 
jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
        at 
jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
ERROR: apache-beam-jenkins-12 is offline; cannot locate JDK 1.8 (latest)
