See
<https://ci-beam.apache.org/job/beam_PostCommit_Python_VR_Spark/3645/display/redirect>
Changes:
------------------------------------------
[...truncated 2.68 MB...]
	at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:157)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:251)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	... 3 more
ERROR:root:java.lang.RuntimeException: Error received from SDK harness for instruction 3: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 258, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 315, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 484, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 519, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 985, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 356, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 218, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 819, in process
    o)
  File "apache_beam/runners/common.py", line 1224, in process_with_sized_restriction
    watermark_estimator_state=estimator_state)
  File "apache_beam/runners/common.py", line 723, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 872, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 106, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to FAILED
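The ValueError at the end of the traceback above is raised when a splittable DoFn returns without claiming every offset in its restriction. A minimal pure-Python sketch (a hypothetical simplification that mimics the semantics of Beam's OffsetRestrictionTracker, not the apache_beam implementation) of why unclaimed work triggers the error:

```python
# Hypothetical stand-in mimicking Beam's OffsetRestrictionTracker semantics;
# this is NOT the apache_beam implementation, only an illustration.
class OffsetRestrictionTracker:
    def __init__(self, start, stop):
        self._start, self._stop = start, stop
        self._last_claimed = start - 1

    def try_claim(self, position):
        # A position may only be claimed if it lies inside the remaining range.
        if self._last_claimed < position < self._stop:
            self._last_claimed = position
            return True
        return False

    def check_done(self):
        # Mirrors the check in the log: the whole range must have been claimed.
        if self._last_claimed < self._stop - 1:
            raise ValueError(
                'OffsetRestrictionTracker is not done since work in range '
                '[%s, %s) has not been claimed.' % (self._start, self._stop))

# A DoFn that returns early without claiming all of [0, 6) triggers the error:
tracker = OffsetRestrictionTracker(0, 6)
tracker.try_claim(0)  # only position 0 claimed; positions 1..5 remain
try:
    tracker.check_done()
except ValueError as e:
    print(e)  # the same 'not done' message seen in the log above

# Claiming every position satisfies check_done:
tracker = OffsetRestrictionTracker(0, 6)
for pos in range(6):
    assert tracker.try_claim(pos)
tracker.check_done()  # no exception
```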
.ssssINFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:33165
WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.25.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function lift_combiners at 0x7f4d685b5a28> ====================
20/09/13 18:05:26 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Staging artifacts for job_b48e9ead-e328-44ac-8039-23630cee6bbe.
20/09/13 18:05:26 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Resolving artifacts for job_b48e9ead-e328-44ac-8039-23630cee6bbe.ref_Environment_default_environment_1.
20/09/13 18:05:26 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Getting 0 artifacts for job_b48e9ead-e328-44ac-8039-23630cee6bbe.null.
20/09/13 18:05:26 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Artifacts fully staged for job_b48e9ead-e328-44ac-8039-23630cee6bbe.
20/09/13 18:05:26 INFO org.apache.beam.runners.spark.SparkJobInvoker: Invoking job test_windowed_pardo_state_timers_1600020326.26_0d81d770-d115-42dd-802f-541ad8c532b6
20/09/13 18:05:26 INFO org.apache.beam.runners.jobsubmission.JobInvocation: Starting job invocation test_windowed_pardo_state_timers_1600020326.26_0d81d770-d115-42dd-802f-541ad8c532b6
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
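The log message above warns that in LOOPBACK mode the program must not exit before the job completes, which the context-manager form guarantees. A self-contained stand-in (a hypothetical simplification, not the apache_beam Pipeline API) illustrating why exiting the `with` block waits for the run:

```python
# Hypothetical stand-in for the semantics the log message relies on: leaving
# the `with` block runs the pipeline before control continues, so the program
# cannot reach its end with the job still unstarted or in flight.
class Pipeline:
    def __init__(self):
        self.finished = False

    def apply(self, transform):
        pass  # a real pipeline would record the transform; omitted here

    def run(self):
        # A real runner would submit the job and block until it completes.
        self.finished = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.run()  # run (and wait) on normal exit from the block

with Pipeline() as p:
    p.apply(...)

assert p.finished  # the job completed before control moved past the block
```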
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
20/09/13 18:05:26 INFO org.apache.beam.runners.spark.SparkPipelineRunner: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath
20/09/13 18:05:26 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Will stage 7 files. (Enable logging at DEBUG level to see which files will be staged.)
20/09/13 18:05:26 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Running job test_windowed_pardo_state_timers_1600020326.26_0d81d770-d115-42dd-802f-541ad8c532b6 on Spark master local
20/09/13 18:05:26 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Running job test_windowed_pardo_state_timers_1600020326.26_0d81d770-d115-42dd-802f-541ad8c532b6 on Spark master local
20/09/13 18:05:26 WARN org.apache.beam.runners.spark.translation.GroupNonMergingWindowsFunctions: Either coder LengthPrefixCoder(ByteArrayCoder) or GlobalWindow$Coder is not consistent with equals. That might cause issues on some runners.
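The warning above matters because grouping keys by their encoded bytes is only safe when equal values always encode to identical bytes. A small illustration (using pickle as a deliberately non-deterministic stand-in encoder, not Beam's LengthPrefixCoder) of how a coder that is not consistent with equals can split one logical key into two groups:

```python
import collections
import pickle

# Two equal dicts with different insertion order pickle to different bytes,
# so a runner that groups by encoded key sees two distinct keys even though
# the values compare equal.
k1 = {'a': 1, 'b': 2}
k2 = {'b': 2, 'a': 1}
assert k1 == k2                              # equal values...
assert pickle.dumps(k1) != pickle.dumps(k2)  # ...but unequal encodings

groups = collections.defaultdict(list)
for key, value in [(k1, 'x'), (k2, 'y')]:
    groups[pickle.dumps(key)].append(value)

# One logical key has been split into two groups:
print(len(groups))  # 2, even though only one distinct key exists
```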
20/09/13 18:05:26 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Job test_windowed_pardo_state_timers_1600020326.26_0d81d770-d115-42dd-802f-541ad8c532b6: Pipeline translated successfully. Computing outputs
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel for localhost:45109.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with unbounded number of workers.
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: Beam Fn Control client connected with id 32-1
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 32-2
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for localhost:36675.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for localhost:42839
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.data.GrpcDataService: Beam Fn Data client connected.
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 32-3
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 32-4
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 32-5
20/09/13 18:05:27 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 32-6
20/09/13 18:05:27 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Job test_windowed_pardo_state_timers_1600020326.26_0d81d770-d115-42dd-802f-541ad8c532b6 finished.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
.INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:34783
WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.25.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory: Closing environment urn: "beam:env:external:v1"
payload: "\n\021\n\017localhost:35677"
capabilities: "beam:coder:varint:v1"
capabilities: "beam:coder:bytes:v1"
capabilities: "beam:coder:timer:v1"
capabilities: "beam:coder:global_window:v1"
capabilities: "beam:coder:interval_window:v1"
capabilities: "beam:coder:iterable:v1"
capabilities: "beam:coder:state_backed_iterable:v1"
capabilities: "beam:coder:windowed_value:v1"
capabilities: "beam:coder:param_windowed_value:v1"
capabilities: "beam:coder:double:v1"
capabilities: "beam:coder:string_utf8:v1"
capabilities: "beam:coder:length_prefix:v1"
capabilities: "beam:coder:bool:v1"
capabilities: "beam:coder:kv:v1"
capabilities: "beam:coder:row:v1"
capabilities: "beam:protocol:progress_reporting:v0"
capabilities: "beam:protocol:worker_status:v1"
capabilities: "beam:combinefn:packed_python:v1"
capabilities: "beam:version:sdk_base:apache/beam_python2.7_sdk:2.25.0.dev"
20/09/13 18:05:28 WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer: Hanged up for unknown endpoint.
20/09/13 18:05:28 WARN org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory: Error cleaning up servers urn: "beam:env:external:v1"
payload: "\n\021\n\017localhost:35677"
capabilities: "beam:coder:varint:v1"
capabilities: "beam:coder:bytes:v1"
capabilities: "beam:coder:timer:v1"
capabilities: "beam:coder:global_window:v1"
capabilities: "beam:coder:interval_window:v1"
capabilities: "beam:coder:iterable:v1"
capabilities: "beam:coder:state_backed_iterable:v1"
capabilities: "beam:coder:windowed_value:v1"
capabilities: "beam:coder:param_windowed_value:v1"
capabilities: "beam:coder:double:v1"
capabilities: "beam:coder:string_utf8:v1"
capabilities: "beam:coder:length_prefix:v1"
capabilities: "beam:coder:bool:v1"
capabilities: "beam:coder:kv:v1"
capabilities: "beam:coder:row:v1"
capabilities: "beam:protocol:progress_reporting:v0"
capabilities: "beam:protocol:worker_status:v1"
capabilities: "beam:combinefn:packed_python:v1"
capabilities: "beam:version:sdk_base:apache/beam_python2.7_sdk:2.25.0.dev"
org.apache.beam.vendor.grpc.v1p26p0.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:240)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:221)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:140)
	at org.apache.beam.model.fnexecution.v1.BeamFnExternalWorkerPoolGrpc$BeamFnExternalWorkerPoolBlockingStub.stopWorker(BeamFnExternalWorkerPoolGrpc.java:247)
	at org.apache.beam.runners.fnexecution.environment.ExternalEnvironmentFactory$1.close(ExternalEnvironmentFactory.java:159)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.$closeResource(DefaultJobBundleFactory.java:629)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.close(DefaultJobBundleFactory.java:629)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.unref(DefaultJobBundleFactory.java:645)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$WrappedSdkHarnessClient.access$400(DefaultJobBundleFactory.java:576)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.lambda$createEnvironmentCaches$3(DefaultJobBundleFactory.java:208)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.processPendingNotifications(LocalCache.java:1809)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.runUnlockedCleanup(LocalCache.java:3462)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.postWriteCleanup(LocalCache.java:3438)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.clear(LocalCache.java:3215)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.clear(LocalCache.java:4270)
	at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalManualCache.invalidateAll(LocalCache.java:4909)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.close(DefaultJobBundleFactory.java:315)
	at org.apache.beam.runners.fnexecution.control.DefaultExecutableStageContext.close(DefaultExecutableStageContext.java:43)
	at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.closeActual(ReferenceCountingExecutableStageContextFactory.java:209)
	at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.access$200(ReferenceCountingExecutableStageContextFactory.java:185)
	at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory.release(ReferenceCountingExecutableStageContextFactory.java:174)
	at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory.lambda$scheduleRelease$1(ReferenceCountingExecutableStageContextFactory.java:128)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:35677
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:714)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:688)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.beam.vendor.grpc.v1p26p0.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
INFO:apache_beam.runners.worker.sdk_worker:No more requests from control plane
INFO:apache_beam.runners.worker.sdk_worker:SDK Harness waiting for in-flight requests to complete
INFO:apache_beam.runners.worker.data_plane:Closing all cached grpc data channels.
INFO:apache_beam.runners.worker.sdk_worker:Closing all cached gRPC state handlers.
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function lift_combiners at 0x7f4d685b5a28> ====================
INFO:apache_beam.runners.worker.sdk_worker:Done consuming work.
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Staging artifacts for job_47fead3f-4e22-4a85-9320-5ea045d97677.
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Resolving artifacts for job_47fead3f-4e22-4a85-9320-5ea045d97677.ref_Environment_default_environment_1.
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Getting 0 artifacts for job_47fead3f-4e22-4a85-9320-5ea045d97677.null.
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.artifact.ArtifactStagingService: Artifacts fully staged for job_47fead3f-4e22-4a85-9320-5ea045d97677.
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkJobInvoker: Invoking job test_windowing_1600020327.53_ee256128-04bd-443f-aa9c-ecc7defce991
20/09/13 18:05:28 INFO org.apache.beam.runners.jobsubmission.JobInvocation: Starting job invocation test_windowing_1600020327.53_ee256128-04bd-443f-aa9c-ecc7defce991
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkPipelineRunner: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Will stage 7 files. (Enable logging at DEBUG level to see which files will be staged.)
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Running job test_windowing_1600020327.53_ee256128-04bd-443f-aa9c-ecc7defce991 on Spark master local
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Running job test_windowing_1600020327.53_ee256128-04bd-443f-aa9c-ecc7defce991 on Spark master local
20/09/13 18:05:28 WARN org.apache.beam.runners.spark.translation.GroupNonMergingWindowsFunctions: Either coder LengthPrefixCoder(ByteArrayCoder) or GlobalWindow$Coder is not consistent with equals. That might cause issues on some runners.
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Job test_windowing_1600020327.53_ee256128-04bd-443f-aa9c-ecc7defce991: Pipeline translated successfully. Computing outputs
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel for localhost:45143.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with unbounded number of workers.
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: Beam Fn Control client connected with id 33-1
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 33-2
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for localhost:45451.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for localhost:39477
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.data.GrpcDataService: Beam Fn Data client connected.
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 33-3
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 33-4
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 33-5
20/09/13 18:05:28 INFO org.apache.beam.runners.fnexecution.control.FnApiControlClientPoolService: getProcessBundleDescriptor request with id 33-6
20/09/13 18:05:28 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Job test_windowing_1600020327.53_ee256128-04bd-443f-aa9c-ecc7defce991 finished.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
.
----------------------------------------------------------------------
Ran 46 tests in 73.864s
OK (skipped=14)
> Task :sdks:python:test-suites:portable:py2:sparkCompatibilityMatrixLoopback
> Task :sdks:python:test-suites:portable:py2:sparkValidatesRunner

FAILURE: Build failed with an exception.

* Where:
Script '<https://ci-beam.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/common.gradle'> line: 140

* What went wrong:
Execution failed for task ':sdks:python:test-suites:portable:py36:createProcessWorker'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 5m 7s
77 actionable tasks: 62 executed, 15 from cache
Publishing build scan...
https://gradle.com/s/3kjnyaacwqubu
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]