See 
<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/1510/display/redirect>

Changes:


------------------------------------------
[...truncated 1.58 MB...]
19/11/13 12:15:17 INFO close: Closing all cached grpc data channels.
19/11/13 12:15:17 INFO close: Closing all cached gRPC state handlers.
19/11/13 12:15:17 INFO run: Done consuming work.
19/11/13 12:15:17 INFO main: Python sdk harness exiting.
19/11/13 12:15:17 INFO GrpcLoggingService: Logging client hanged up.
19/11/13 12:15:17 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:17 INFO Executor: Finished task 0.0 in stage 126.0 (TID 153). 
12763 bytes result sent to driver
19/11/13 12:15:17 INFO TaskSetManager: Finished task 0.0 in stage 126.0 (TID 
153) in 867 ms on localhost (executor driver) (1/1)
19/11/13 12:15:17 INFO TaskSchedulerImpl: Removed TaskSet 126.0, whose tasks 
have all completed, from pool 
19/11/13 12:15:17 INFO DAGScheduler: ShuffleMapStage 126 (mapToPair at 
GroupCombineFunctions.java:55) finished in 0.874 s
19/11/13 12:15:17 INFO DAGScheduler: looking for newly runnable stages
19/11/13 12:15:17 INFO DAGScheduler: running: Set()
19/11/13 12:15:17 INFO DAGScheduler: waiting: Set(ShuffleMapStage 127, 
ResultStage 128)
19/11/13 12:15:17 INFO DAGScheduler: failed: Set()
19/11/13 12:15:17 INFO DAGScheduler: Submitting ShuffleMapStage 127 
(MapPartitionsRDD[872] at flatMapToPair at 
GroupNonMergingWindowsFunctions.java:115), which has no missing parents
19/11/13 12:15:17 INFO MemoryStore: Block broadcast_125 stored as values in 
memory (estimated size 57.2 KB, free 13.5 GB)
19/11/13 12:15:17 INFO MemoryStore: Block broadcast_125_piece0 stored as bytes 
in memory (estimated size 22.9 KB, free 13.5 GB)
19/11/13 12:15:17 INFO BlockManagerInfo: Added broadcast_125_piece0 in memory 
on localhost:38225 (size: 22.9 KB, free: 13.5 GB)
19/11/13 12:15:17 INFO SparkContext: Created broadcast 125 from broadcast at 
DAGScheduler.scala:1161
19/11/13 12:15:17 INFO DAGScheduler: Submitting 2 missing tasks from 
ShuffleMapStage 127 (MapPartitionsRDD[872] at flatMapToPair at 
GroupNonMergingWindowsFunctions.java:115) (first 15 tasks are for partitions 
Vector(0, 1))
19/11/13 12:15:17 INFO TaskSchedulerImpl: Adding task set 127.0 with 2 tasks
19/11/13 12:15:17 INFO TaskSetManager: Starting task 1.0 in stage 127.0 (TID 
154, localhost, executor driver, partition 1, NODE_LOCAL, 7760 bytes)
19/11/13 12:15:17 INFO Executor: Running task 1.0 in stage 127.0 (TID 154)
19/11/13 12:15:17 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks 
including 1 local blocks and 0 remote blocks
19/11/13 12:15:17 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 
0 ms
19/11/13 12:15:17 INFO AbstractArtifactRetrievalService: GetManifest for 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST
19/11/13 12:15:17 INFO AbstractArtifactRetrievalService: GetManifest for 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST -> 0 
artifacts
19/11/13 12:15:17 INFO GrpcLoggingService: Beam Fn Logging client connected.
19/11/13 12:15:17 INFO main: Logging handler created.
19/11/13 12:15:17 INFO start: Status HTTP server running at localhost:37087
19/11/13 12:15:17 INFO main: semi_persistent_directory: /tmp
19/11/13 12:15:17 WARN _load_main_session: No session file found: 
/tmp/staged/pickled_main_session. Functions defined in __main__ (interactive 
session) may fail. 
19/11/13 12:15:17 WARN get_all_options: Discarding unparseable args: 
[u'--job_server_timeout=60', 
u'--app_name=test_windowing_1573647314.4_183b201c-7136-4f40-bd60-1918c9bfce92', 
u'--direct_runner_use_stacked_bundle', u'--spark_master=local', 
u'--options_id=28', u'--enable_spark_metric_sinks', u'--pipeline_type_check'] 
19/11/13 12:15:17 INFO main: Python sdk harness started with pipeline_options: 
{'runner': u'None', 'experiments': [u'beam_fn_api'], 
'environment_cache_millis': u'0', 'environment_type': u'PROCESS', 
'sdk_location': u'container', 'job_name': u'test_windowing_1573647314.4', 
'environment_config': u'{"command": 
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh"}',>
 'sdk_worker_parallelism': u'1', 'job_endpoint': u'localhost:57751'}
19/11/13 12:15:17 INFO __init__: Creating state cache with size 0
19/11/13 12:15:17 INFO __init__: Creating insecure control channel for 
localhost:35677.
19/11/13 12:15:17 INFO __init__: Control channel established.
19/11/13 12:15:17 INFO FnApiControlClientPoolService: Beam Fn Control client 
connected with id 256-1
19/11/13 12:15:17 INFO __init__: Initializing SDKHarness with 12 workers.
19/11/13 12:15:17 INFO create_state_handler: Creating insecure state channel 
for localhost:41057.
19/11/13 12:15:17 INFO create_state_handler: State channel established.
19/11/13 12:15:17 INFO create_data_channel: Creating client data channel for 
localhost:46699
19/11/13 12:15:17 INFO GrpcDataService: Beam Fn Data client connected.
19/11/13 12:15:17 INFO DefaultJobBundleFactory: Closing environment urn: 
"beam:env:process:v1"
payload: 
"\032\202\001<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh";>

19/11/13 12:15:17 INFO run: No more requests from control plane
19/11/13 12:15:17 INFO run: SDK Harness waiting for in-flight requests to 
complete
19/11/13 12:15:17 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:17 INFO close: Closing all cached grpc data channels.
19/11/13 12:15:17 INFO close: Closing all cached gRPC state handlers.
19/11/13 12:15:17 INFO run: Done consuming work.
19/11/13 12:15:17 INFO main: Python sdk harness exiting.
19/11/13 12:15:17 INFO GrpcLoggingService: Logging client hanged up.
19/11/13 12:15:18 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:18 INFO Executor: Finished task 1.0 in stage 127.0 (TID 154). 
15229 bytes result sent to driver
19/11/13 12:15:18 INFO TaskSetManager: Starting task 0.0 in stage 127.0 (TID 
155, localhost, executor driver, partition 0, PROCESS_LOCAL, 7977 bytes)
19/11/13 12:15:18 INFO Executor: Running task 0.0 in stage 127.0 (TID 155)
19/11/13 12:15:18 INFO TaskSetManager: Finished task 1.0 in stage 127.0 (TID 
154) in 909 ms on localhost (executor driver) (1/2)
19/11/13 12:15:18 INFO AbstractArtifactRetrievalService: GetManifest for 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST
19/11/13 12:15:18 INFO AbstractArtifactRetrievalService: GetManifest for 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST -> 0 
artifacts
19/11/13 12:15:18 INFO GrpcLoggingService: Beam Fn Logging client connected.
19/11/13 12:15:18 INFO main: Logging handler created.
19/11/13 12:15:18 INFO start: Status HTTP server running at localhost:37283
19/11/13 12:15:18 INFO main: semi_persistent_directory: /tmp
19/11/13 12:15:18 WARN _load_main_session: No session file found: 
/tmp/staged/pickled_main_session. Functions defined in __main__ (interactive 
session) may fail. 
19/11/13 12:15:18 WARN get_all_options: Discarding unparseable args: 
[u'--job_server_timeout=60', 
u'--app_name=test_windowing_1573647314.4_183b201c-7136-4f40-bd60-1918c9bfce92', 
u'--direct_runner_use_stacked_bundle', u'--spark_master=local', 
u'--options_id=28', u'--enable_spark_metric_sinks', u'--pipeline_type_check'] 
19/11/13 12:15:18 INFO main: Python sdk harness started with pipeline_options: 
{'runner': u'None', 'experiments': [u'beam_fn_api'], 
'environment_cache_millis': u'0', 'environment_type': u'PROCESS', 
'sdk_location': u'container', 'job_name': u'test_windowing_1573647314.4', 
'environment_config': u'{"command": 
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh"}',>
 'sdk_worker_parallelism': u'1', 'job_endpoint': u'localhost:57751'}
19/11/13 12:15:18 INFO __init__: Creating state cache with size 0
19/11/13 12:15:18 INFO __init__: Creating insecure control channel for 
localhost:42123.
19/11/13 12:15:18 INFO __init__: Control channel established.
19/11/13 12:15:18 INFO FnApiControlClientPoolService: Beam Fn Control client 
connected with id 257-1
19/11/13 12:15:18 INFO __init__: Initializing SDKHarness with 12 workers.
19/11/13 12:15:18 INFO create_state_handler: Creating insecure state channel 
for localhost:37811.
19/11/13 12:15:18 INFO create_state_handler: State channel established.
19/11/13 12:15:18 INFO create_data_channel: Creating client data channel for 
localhost:41477
19/11/13 12:15:18 INFO GrpcDataService: Beam Fn Data client connected.
19/11/13 12:15:18 INFO DefaultJobBundleFactory: Closing environment urn: 
"beam:env:process:v1"
payload: 
"\032\202\001<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh";>

19/11/13 12:15:18 INFO run: No more requests from control plane
19/11/13 12:15:18 INFO run: SDK Harness waiting for in-flight requests to 
complete
19/11/13 12:15:18 INFO close: Closing all cached grpc data channels.
19/11/13 12:15:18 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:18 INFO close: Closing all cached gRPC state handlers.
19/11/13 12:15:18 INFO run: Done consuming work.
19/11/13 12:15:18 INFO main: Python sdk harness exiting.
19/11/13 12:15:18 INFO GrpcLoggingService: Logging client hanged up.
19/11/13 12:15:18 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:18 INFO Executor: Finished task 0.0 in stage 127.0 (TID 155). 
13710 bytes result sent to driver
19/11/13 12:15:18 INFO TaskSetManager: Finished task 0.0 in stage 127.0 (TID 
155) in 836 ms on localhost (executor driver) (2/2)
19/11/13 12:15:18 INFO TaskSchedulerImpl: Removed TaskSet 127.0, whose tasks 
have all completed, from pool 
19/11/13 12:15:18 INFO DAGScheduler: ShuffleMapStage 127 (flatMapToPair at 
GroupNonMergingWindowsFunctions.java:115) finished in 1.751 s
19/11/13 12:15:18 INFO DAGScheduler: looking for newly runnable stages
19/11/13 12:15:18 INFO DAGScheduler: running: Set()
19/11/13 12:15:18 INFO DAGScheduler: waiting: Set(ResultStage 128)
19/11/13 12:15:18 INFO DAGScheduler: failed: Set()
19/11/13 12:15:18 INFO DAGScheduler: Submitting ResultStage 128 
(EmptyOutputSink_0 MapPartitionsRDD[877] at flatMap at 
SparkBatchPortablePipelineTranslator.java:311), which has no missing parents
19/11/13 12:15:18 INFO MemoryStore: Block broadcast_126 stored as values in 
memory (estimated size 26.1 KB, free 13.5 GB)
19/11/13 12:15:18 INFO MemoryStore: Block broadcast_126_piece0 stored as bytes 
in memory (estimated size 12.4 KB, free 13.5 GB)
19/11/13 12:15:18 INFO BlockManagerInfo: Added broadcast_126_piece0 in memory 
on localhost:38225 (size: 12.4 KB, free: 13.5 GB)
19/11/13 12:15:18 INFO SparkContext: Created broadcast 126 from broadcast at 
DAGScheduler.scala:1161
19/11/13 12:15:18 INFO DAGScheduler: Submitting 1 missing tasks from 
ResultStage 128 (EmptyOutputSink_0 MapPartitionsRDD[877] at flatMap at 
SparkBatchPortablePipelineTranslator.java:311) (first 15 tasks are for 
partitions Vector(0))
19/11/13 12:15:18 INFO TaskSchedulerImpl: Adding task set 128.0 with 1 tasks
19/11/13 12:15:18 INFO TaskSetManager: Starting task 0.0 in stage 128.0 (TID 
156, localhost, executor driver, partition 0, NODE_LOCAL, 7662 bytes)
19/11/13 12:15:18 INFO Executor: Running task 0.0 in stage 128.0 (TID 156)
19/11/13 12:15:18 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks 
including 2 local blocks and 0 remote blocks
19/11/13 12:15:18 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 
0 ms
19/11/13 12:15:18 INFO AbstractArtifactRetrievalService: GetManifest for 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST
19/11/13 12:15:18 INFO AbstractArtifactRetrievalService: GetManifest for 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST -> 0 
artifacts
19/11/13 12:15:19 INFO GrpcLoggingService: Beam Fn Logging client connected.
19/11/13 12:15:19 INFO main: Logging handler created.
19/11/13 12:15:19 INFO start: Status HTTP server running at localhost:34811
19/11/13 12:15:19 INFO main: semi_persistent_directory: /tmp
19/11/13 12:15:19 WARN _load_main_session: No session file found: 
/tmp/staged/pickled_main_session. Functions defined in __main__ (interactive 
session) may fail. 
19/11/13 12:15:19 WARN get_all_options: Discarding unparseable args: 
[u'--job_server_timeout=60', 
u'--app_name=test_windowing_1573647314.4_183b201c-7136-4f40-bd60-1918c9bfce92', 
u'--direct_runner_use_stacked_bundle', u'--spark_master=local', 
u'--options_id=28', u'--enable_spark_metric_sinks', u'--pipeline_type_check'] 
19/11/13 12:15:19 INFO main: Python sdk harness started with pipeline_options: 
{'runner': u'None', 'experiments': [u'beam_fn_api'], 
'environment_cache_millis': u'0', 'environment_type': u'PROCESS', 
'sdk_location': u'container', 'job_name': u'test_windowing_1573647314.4', 
'environment_config': u'{"command": 
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh"}',>
 'sdk_worker_parallelism': u'1', 'job_endpoint': u'localhost:57751'}
19/11/13 12:15:19 INFO __init__: Creating state cache with size 0
19/11/13 12:15:19 INFO __init__: Creating insecure control channel for 
localhost:42953.
19/11/13 12:15:19 INFO __init__: Control channel established.
19/11/13 12:15:19 INFO __init__: Initializing SDKHarness with 12 workers.
19/11/13 12:15:19 INFO FnApiControlClientPoolService: Beam Fn Control client 
connected with id 258-1
19/11/13 12:15:19 INFO create_state_handler: Creating insecure state channel 
for localhost:46033.
19/11/13 12:15:19 INFO create_state_handler: State channel established.
19/11/13 12:15:19 INFO create_data_channel: Creating client data channel for 
localhost:42635
19/11/13 12:15:19 INFO GrpcDataService: Beam Fn Data client connected.
19/11/13 12:15:19 INFO DefaultJobBundleFactory: Closing environment urn: 
"beam:env:process:v1"
payload: 
"\032\202\001<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh";>

19/11/13 12:15:19 INFO run: No more requests from control plane
19/11/13 12:15:19 INFO run: SDK Harness waiting for in-flight requests to 
complete
19/11/13 12:15:19 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:19 INFO close: Closing all cached grpc data channels.
19/11/13 12:15:19 INFO close: Closing all cached gRPC state handlers.
19/11/13 12:15:19 INFO run: Done consuming work.
19/11/13 12:15:19 INFO main: Python sdk harness exiting.
19/11/13 12:15:19 INFO GrpcLoggingService: Logging client hanged up.
19/11/13 12:15:19 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown 
endpoint.
19/11/13 12:15:19 INFO Executor: Finished task 0.0 in stage 128.0 (TID 156). 
11970 bytes result sent to driver
19/11/13 12:15:19 INFO TaskSetManager: Finished task 0.0 in stage 128.0 (TID 
156) in 889 ms on localhost (executor driver) (1/1)
19/11/13 12:15:19 INFO TaskSchedulerImpl: Removed TaskSet 128.0, whose tasks 
have all completed, from pool 
19/11/13 12:15:19 INFO DAGScheduler: ResultStage 128 (foreach at 
BoundedDataset.java:124) finished in 0.896 s
19/11/13 12:15:19 INFO DAGScheduler: Job 45 finished: foreach at 
BoundedDataset.java:124, took 4.312451 s
19/11/13 12:15:19 INFO SparkPipelineRunner: Job 
test_windowing_1573647314.4_183b201c-7136-4f40-bd60-1918c9bfce92 finished.
19/11/13 12:15:19 WARN SparkPipelineResult$BatchMode: Collecting monitoring 
infos is not implemented yet in Spark portable runner.
19/11/13 12:15:19 INFO AbstractArtifactRetrievalService: Manifest at 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/MANIFEST has 0 
artifact locations
19/11/13 12:15:19 INFO BeamFileSystemArtifactStagingService: Removed dir 
/tmp/sparktest5vhMrR/job_d31045a6-4828-4ba1-b2fe-54171c6a2763/
INFO:root:Job state changed to DONE
.
======================================================================
ERROR: test_pardo_state_with_custom_key_coder (__main__.SparkRunnerTest)
Tests that state requests work correctly when the key coder is an
----------------------------------------------------------------------
Traceback (most recent call last):
  File "apache_beam/runners/portability/portable_runner_test.py", line 232, in test_pardo_state_with_custom_key_coder
    equal_to(expected))
  File "apache_beam/pipeline.py", line 436, in __exit__
    self.run().wait_until_finish()
  File "apache_beam/runners/portability/portable_runner.py", line 435, in wait_until_finish
    for state_response in self._state_stream:
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py", line 395, in next
    return self._next()
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py", line 552, in _next
    _common.wait(self._state.condition.wait, _response_ready)
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py", line 140, in wait
    _wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py", line 105, in _wait_once
    wait_fn(timeout=timeout)
  File "/usr/lib/python2.7/threading.py", line 359, in wait
    _sleep(delay)
  File "apache_beam/runners/portability/portable_runner_test.py", line 73, in handler
    raise BaseException(msg)
BaseException: Timed out after 60 seconds.

==================== Timed out after 60 seconds. ====================

# Thread: <Thread(wait_until_finish_read, started daemon 140396354582272)>

# Thread: <Thread(Thread-119, started daemon 140396346189568)>

# Thread: <_MainThread(MainThread, started 140397142206208)>

======================================================================
ERROR: test_pardo_timers (__main__.SparkRunnerTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "apache_beam/runners/portability/fn_api_runner_test.py", line 323, in test_pardo_timers
    assert_that(actual, equal_to(expected))
  File "apache_beam/pipeline.py", line 436, in __exit__
    self.run().wait_until_finish()
  File "apache_beam/runners/portability/portable_runner.py", line 435, in wait_until_finish
    for state_response in self._state_stream:
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py", line 395, in next
    return self._next()
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py", line 552, in _next
    _common.wait(self._state.condition.wait, _response_ready)
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py", line 140, in wait
    _wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
  File "https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py", line 105, in _wait_once
    wait_fn(timeout=timeout)
  File "/usr/lib/python2.7/threading.py", line 359, in wait
    _sleep(delay)
  File "apache_beam/runners/portability/portable_runner_test.py", line 73, in handler
    raise BaseException(msg)
BaseException: Timed out after 60 seconds.

==================== Timed out after 60 seconds. ====================

# Thread: <Thread(wait_until_finish_read, started daemon 140396328879872)>

# Thread: <Thread(Thread-125, started daemon 140396337272576)>

# Thread: <Thread(Thread-119, started daemon 140396346189568)>

# Thread: <_MainThread(MainThread, started 140397142206208)>

# Thread: <Thread(wait_until_finish_read, started daemon 140396354582272)>
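
Both errors above come from the test suite's own watchdog rather than from a
pipeline assertion: the final frame in each traceback is a signal handler in
portable_runner_test.py that raises BaseException once a test exceeds 60
seconds, and the "# Thread:" lines are the live-thread dump it prints first.
A minimal sketch of that pattern, assuming a SIGALRM-based watchdog; the names
below, including run_test, are illustrative rather than the exact Beam code:

    import signal
    import threading
    import traceback

    TIMEOUT_SECS = 60  # matches the "Timed out after 60 seconds" banner above

    def handler(signum, frame):
        # Mirrors the 'handler' frame in the tracebacks: report the timeout,
        # dump every live thread, then raise so the test errors out.
        msg = 'Timed out after %d seconds.' % TIMEOUT_SECS
        print('=' * 20 + ' ' + msg + ' ' + '=' * 20)
        for t in threading.enumerate():
            print('# Thread: %s' % t)
        traceback.print_stack(frame)
        # BaseException (not Exception) so ordinary except clauses inside
        # the test cannot swallow the timeout.
        raise BaseException(msg)

    def run_test():
        # Stand-in for the actual test body; the real tests run a pipeline
        # and block in wait_until_finish(), as the tracebacks show.
        pass

    signal.signal(signal.SIGALRM, handler)   # install the watchdog
    signal.alarm(TIMEOUT_SECS)               # arm it before the test body
    try:
        run_test()
    finally:
        signal.alarm(0)                      # disarm once the test finishes

Because the handler writes to the same stderr stream as the test runner, its
banner and thread dump can interleave with the traceback text in the raw log.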

----------------------------------------------------------------------
Ran 36 tests in 300.843s

FAILED (errors=2, skipped=9)

> Task :sdks:python:test-suites:portable:py2:sparkValidatesRunner FAILED

FAILURE: Build failed with an exception.

* Where:
Build file 
'https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build.gradle'
 line: 198

* What went wrong:
Execution failed for task 
':sdks:python:test-suites:portable:py2:sparkValidatesRunner'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See 
https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 35s
59 actionable tasks: 46 executed, 13 from cache

Publishing build scan...
https://gradle.com/s/bzk4icn5z665e

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
