See
<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/1570/display/redirect>
Changes:
------------------------------------------
[...truncated 1.67 MB...]
19/11/19 12:10:09 INFO TaskSchedulerImpl: Removed TaskSet 131.0, whose tasks
have all completed, from pool
19/11/19 12:10:09 INFO DAGScheduler: ShuffleMapStage 131 (mapToPair at
GroupCombineFunctions.java:55) finished in 0.879 s
19/11/19 12:10:09 INFO DAGScheduler: looking for newly runnable stages
19/11/19 12:10:09 INFO DAGScheduler: running: Set()
19/11/19 12:10:09 INFO DAGScheduler: waiting: Set(ShuffleMapStage 132,
ResultStage 133)
19/11/19 12:10:09 INFO DAGScheduler: failed: Set()
19/11/19 12:10:09 INFO DAGScheduler: Submitting ShuffleMapStage 132
(MapPartitionsRDD[911] at flatMapToPair at
GroupNonMergingWindowsFunctions.java:115), which has no missing parents
19/11/19 12:10:09 INFO MemoryStore: Block broadcast_129 stored as values in
memory (estimated size 57.2 KB, free 13.5 GB)
19/11/19 12:10:09 INFO MemoryStore: Block broadcast_129_piece0 stored as bytes
in memory (estimated size 22.0 KB, free 13.5 GB)
19/11/19 12:10:09 INFO BlockManagerInfo: Added broadcast_129_piece0 in memory
on localhost:40511 (size: 22.0 KB, free: 13.5 GB)
19/11/19 12:10:09 INFO SparkContext: Created broadcast 129 from broadcast at
DAGScheduler.scala:1161
19/11/19 12:10:09 INFO DAGScheduler: Submitting 2 missing tasks from
ShuffleMapStage 132 (MapPartitionsRDD[911] at flatMapToPair at
GroupNonMergingWindowsFunctions.java:115) (first 15 tasks are for partitions
Vector(0, 1))
19/11/19 12:10:09 INFO TaskSchedulerImpl: Adding task set 132.0 with 2 tasks
19/11/19 12:10:09 INFO TaskSetManager: Starting task 0.0 in stage 132.0 (TID
159, localhost, executor driver, partition 0, NODE_LOCAL, 7760 bytes)
19/11/19 12:10:09 INFO Executor: Running task 0.0 in stage 132.0 (TID 159)
19/11/19 12:10:09 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks
including 1 local blocks and 0 remote blocks
19/11/19 12:10:09 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in
0 ms
19/11/19 12:10:09 INFO AbstractArtifactRetrievalService: GetManifest for
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST
19/11/19 12:10:09 INFO AbstractArtifactRetrievalService: GetManifest for
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST -> 0
artifacts
19/11/19 12:10:09 INFO GrpcLoggingService: Beam Fn Logging client connected.
19/11/19 12:10:09 INFO main: Logging handler created.
19/11/19 12:10:09 INFO start: Status HTTP server running at localhost:36875
19/11/19 12:10:09 INFO main: semi_persistent_directory: /tmp
19/11/19 12:10:09 WARN _load_main_session: No session file found:
/tmp/staged/pickled_main_session. Functions defined in __main__ (interactive
session) may fail.
19/11/19 12:10:09 WARN get_all_options: Discarding unparseable args:
[u'--job_server_timeout=60',
u'--app_name=test_windowing_1574165406.49_188c0f80-1ada-4fa8-9544-3a1320d311e6',
u'--direct_runner_use_stacked_bundle', u'--spark_master=local',
u'--options_id=30', u'--enable_spark_metric_sinks', u'--pipeline_type_check']
19/11/19 12:10:09 INFO main: Python sdk harness started with pipeline_options:
{'runner': u'None', 'experiments': [u'beam_fn_api'],
'environment_cache_millis': u'0', 'environment_type': u'PROCESS',
'sdk_location': u'container', 'job_name': u'test_windowing_1574165406.49',
'environment_config': u'{"command":
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh"}',>
'sdk_worker_parallelism': u'1', 'job_endpoint': u'localhost:37767'}
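For context, the options above describe a PROCESS-type environment: the
runner starts the SDK harness by invoking sdk_worker.sh directly. A
pipeline is typically pointed at such a setup with flags like the
following (a minimal sketch; the path and port here are illustrative,
not this job's exact values):

    # Sketch: launching against a portable job server with a PROCESS
    # environment. Flag names are real Beam options; values are examples.
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions([
        '--runner=PortableRunner',
        '--job_endpoint=localhost:37767',
        '--environment_type=PROCESS',
        '--environment_config={"command": "/path/to/sdk_worker.sh"}',
    ])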
19/11/19 12:10:09 INFO __init__: Creating state cache with size 0
19/11/19 12:10:09 INFO __init__: Creating insecure control channel for
localhost:44147.
19/11/19 12:10:09 INFO __init__: Control channel established.
19/11/19 12:10:09 INFO __init__: Initializing SDKHarness with unbounded number
of workers.
19/11/19 12:10:09 INFO FnApiControlClientPoolService: Beam Fn Control client
connected with id 263-1
19/11/19 12:10:09 INFO create_state_handler: Creating insecure state channel
for localhost:36525.
19/11/19 12:10:09 INFO create_state_handler: State channel established.
19/11/19 12:10:09 INFO create_data_channel: Creating client data channel for
localhost:45241
19/11/19 12:10:09 INFO GrpcDataService: Beam Fn Data client connected.
19/11/19 12:10:10 INFO DefaultJobBundleFactory: Closing environment urn:
"beam:env:process:v1"
payload:
"\032\202\001<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh">
19/11/19 12:10:10 INFO run: No more requests from control plane
19/11/19 12:10:10 INFO run: SDK Harness waiting for in-flight requests to
complete
19/11/19 12:10:10 INFO close: Closing all cached grpc data channels.
19/11/19 12:10:10 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown
endpoint.
19/11/19 12:10:10 INFO close: Closing all cached gRPC state handlers.
19/11/19 12:10:10 INFO run: Done consuming work.
19/11/19 12:10:10 INFO main: Python sdk harness exiting.
19/11/19 12:10:10 INFO GrpcLoggingService: Logging client hanged up.
19/11/19 12:10:10 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown
endpoint.
19/11/19 12:10:10 INFO Executor: Finished task 0.0 in stage 132.0 (TID 159).
15229 bytes result sent to driver
19/11/19 12:10:10 INFO TaskSetManager: Starting task 1.0 in stage 132.0 (TID
160, localhost, executor driver, partition 1, PROCESS_LOCAL, 7977 bytes)
19/11/19 12:10:10 INFO Executor: Running task 1.0 in stage 132.0 (TID 160)
19/11/19 12:10:10 INFO TaskSetManager: Finished task 0.0 in stage 132.0 (TID
159) in 907 ms on localhost (executor driver) (1/2)
19/11/19 12:10:10 INFO AbstractArtifactRetrievalService: GetManifest for
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST
19/11/19 12:10:10 INFO AbstractArtifactRetrievalService: GetManifest for
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST -> 0
artifacts
19/11/19 12:10:10 INFO GrpcLoggingService: Beam Fn Logging client connected.
19/11/19 12:10:10 INFO main: Logging handler created.
19/11/19 12:10:10 INFO start: Status HTTP server running at localhost:41319
19/11/19 12:10:10 INFO main: semi_persistent_directory: /tmp
19/11/19 12:10:10 WARN _load_main_session: No session file found:
/tmp/staged/pickled_main_session. Functions defined in __main__ (interactive
session) may fail.
19/11/19 12:10:10 WARN get_all_options: Discarding unparseable args:
[u'--job_server_timeout=60',
u'--app_name=test_windowing_1574165406.49_188c0f80-1ada-4fa8-9544-3a1320d311e6',
u'--direct_runner_use_stacked_bundle', u'--spark_master=local',
u'--options_id=30', u'--enable_spark_metric_sinks', u'--pipeline_type_check']
19/11/19 12:10:10 INFO main: Python sdk harness started with pipeline_options:
{'runner': u'None', 'experiments': [u'beam_fn_api'],
'environment_cache_millis': u'0', 'environment_type': u'PROCESS',
'sdk_location': u'container', 'job_name': u'test_windowing_1574165406.49',
'environment_config': u'{"command":
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh"}',>
'sdk_worker_parallelism': u'1', 'job_endpoint': u'localhost:37767'}
19/11/19 12:10:10 INFO __init__: Creating state cache with size 0
19/11/19 12:10:10 INFO __init__: Creating insecure control channel for
localhost:39379.
19/11/19 12:10:10 INFO __init__: Control channel established.
19/11/19 12:10:10 INFO __init__: Initializing SDKHarness with unbounded number
of workers.
19/11/19 12:10:10 INFO FnApiControlClientPoolService: Beam Fn Control client
connected with id 264-1
19/11/19 12:10:10 INFO create_state_handler: Creating insecure state channel
for localhost:35443.
19/11/19 12:10:10 INFO create_state_handler: State channel established.
19/11/19 12:10:10 INFO create_data_channel: Creating client data channel for
localhost:36587
19/11/19 12:10:10 INFO GrpcDataService: Beam Fn Data client connected.
19/11/19 12:10:10 INFO DefaultJobBundleFactory: Closing environment urn:
"beam:env:process:v1"
payload:
"\032\202\001<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh">
19/11/19 12:10:10 INFO run: No more requests from control plane
19/11/19 12:10:10 INFO run: SDK Harness waiting for in-flight requests to
complete
19/11/19 12:10:10 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown
endpoint.
19/11/19 12:10:10 INFO close: Closing all cached grpc data channels.
19/11/19 12:10:10 INFO close: Closing all cached gRPC state handlers.
19/11/19 12:10:10 INFO run: Done consuming work.
19/11/19 12:10:10 INFO main: Python sdk harness exiting.
19/11/19 12:10:10 INFO GrpcLoggingService: Logging client hanged up.
19/11/19 12:10:10 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown
endpoint.
19/11/19 12:10:10 INFO Executor: Finished task 1.0 in stage 132.0 (TID 160).
13710 bytes result sent to driver
19/11/19 12:10:10 INFO TaskSetManager: Finished task 1.0 in stage 132.0 (TID
160) in 834 ms on localhost (executor driver) (2/2)
19/11/19 12:10:10 INFO TaskSchedulerImpl: Removed TaskSet 132.0, whose tasks
have all completed, from pool
19/11/19 12:10:10 INFO DAGScheduler: ShuffleMapStage 132 (flatMapToPair at
GroupNonMergingWindowsFunctions.java:115) finished in 1.746 s
19/11/19 12:10:10 INFO DAGScheduler: looking for newly runnable stages
19/11/19 12:10:10 INFO DAGScheduler: running: Set()
19/11/19 12:10:10 INFO DAGScheduler: waiting: Set(ResultStage 133)
19/11/19 12:10:10 INFO DAGScheduler: failed: Set()
19/11/19 12:10:10 INFO DAGScheduler: Submitting ResultStage 133
(EmptyOutputSink_0 MapPartitionsRDD[916] at flatMap at
SparkBatchPortablePipelineTranslator.java:311), which has no missing parents
19/11/19 12:10:10 INFO MemoryStore: Block broadcast_130 stored as values in
memory (estimated size 26.1 KB, free 13.5 GB)
19/11/19 12:10:10 INFO MemoryStore: Block broadcast_130_piece0 stored as bytes
in memory (estimated size 12.4 KB, free 13.5 GB)
19/11/19 12:10:10 INFO BlockManagerInfo: Added broadcast_130_piece0 in memory
on localhost:40511 (size: 12.4 KB, free: 13.5 GB)
19/11/19 12:10:10 INFO SparkContext: Created broadcast 130 from broadcast at
DAGScheduler.scala:1161
19/11/19 12:10:10 INFO DAGScheduler: Submitting 1 missing tasks from
ResultStage 133 (EmptyOutputSink_0 MapPartitionsRDD[916] at flatMap at
SparkBatchPortablePipelineTranslator.java:311) (first 15 tasks are for
partitions Vector(0))
19/11/19 12:10:10 INFO TaskSchedulerImpl: Adding task set 133.0 with 1 tasks
19/11/19 12:10:10 INFO TaskSetManager: Starting task 0.0 in stage 133.0 (TID
161, localhost, executor driver, partition 0, NODE_LOCAL, 7662 bytes)
19/11/19 12:10:10 INFO Executor: Running task 0.0 in stage 133.0 (TID 161)
19/11/19 12:10:10 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks
including 2 local blocks and 0 remote blocks
19/11/19 12:10:10 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in
0 ms
19/11/19 12:10:11 INFO AbstractArtifactRetrievalService: GetManifest for
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST
19/11/19 12:10:11 INFO AbstractArtifactRetrievalService: GetManifest for
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST -> 0
artifacts
19/11/19 12:10:11 INFO GrpcLoggingService: Beam Fn Logging client connected.
19/11/19 12:10:11 INFO main: Logging handler created.
19/11/19 12:10:11 INFO start: Status HTTP server running at localhost:32995
19/11/19 12:10:11 INFO main: semi_persistent_directory: /tmp
19/11/19 12:10:11 WARN _load_main_session: No session file found:
/tmp/staged/pickled_main_session. Functions defined in __main__ (interactive
session) may fail.
19/11/19 12:10:11 WARN get_all_options: Discarding unparseable args:
[u'--job_server_timeout=60',
u'--app_name=test_windowing_1574165406.49_188c0f80-1ada-4fa8-9544-3a1320d311e6',
u'--direct_runner_use_stacked_bundle', u'--spark_master=local',
u'--options_id=30', u'--enable_spark_metric_sinks', u'--pipeline_type_check']
19/11/19 12:10:11 INFO main: Python sdk harness started with pipeline_options:
{'runner': u'None', 'experiments': [u'beam_fn_api'],
'environment_cache_millis': u'0', 'environment_type': u'PROCESS',
'sdk_location': u'container', 'job_name': u'test_windowing_1574165406.49',
'environment_config': u'{"command":
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh"}',>
'sdk_worker_parallelism': u'1', 'job_endpoint': u'localhost:37767'}
19/11/19 12:10:11 INFO __init__: Creating state cache with size 0
19/11/19 12:10:11 INFO __init__: Creating insecure control channel for
localhost:32915.
19/11/19 12:10:11 INFO FnApiControlClientPoolService: Beam Fn Control client
connected with id 265-1
19/11/19 12:10:11 INFO __init__: Control channel established.
19/11/19 12:10:11 INFO __init__: Initializing SDKHarness with unbounded number
of workers.
19/11/19 12:10:11 INFO create_state_handler: Creating insecure state channel
for localhost:44537.
19/11/19 12:10:11 INFO create_state_handler: State channel established.
19/11/19 12:10:11 INFO create_data_channel: Creating client data channel for
localhost:36515
19/11/19 12:10:11 INFO GrpcDataService: Beam Fn Data client connected.
19/11/19 12:10:11 INFO DefaultJobBundleFactory: Closing environment urn:
"beam:env:process:v1"
payload:
"\032\202\001<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build/sdk_worker.sh">
19/11/19 12:10:11 INFO run: No more requests from control plane
19/11/19 12:10:11 INFO run: SDK Harness waiting for in-flight requests to
complete
19/11/19 12:10:11 INFO close: Closing all cached grpc data channels.
19/11/19 12:10:11 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown
endpoint.
19/11/19 12:10:11 INFO close: Closing all cached gRPC state handlers.
19/11/19 12:10:11 INFO run: Done consuming work.
19/11/19 12:10:11 INFO main: Python sdk harness exiting.
19/11/19 12:10:11 INFO GrpcLoggingService: Logging client hanged up.
19/11/19 12:10:11 WARN BeamFnDataGrpcMultiplexer: Hanged up for unknown
endpoint.
19/11/19 12:10:11 INFO Executor: Finished task 0.0 in stage 133.0 (TID 161).
11970 bytes result sent to driver
19/11/19 12:10:11 INFO TaskSetManager: Finished task 0.0 in stage 133.0 (TID
161) in 854 ms on localhost (executor driver) (1/1)
19/11/19 12:10:11 INFO TaskSchedulerImpl: Removed TaskSet 133.0, whose tasks
have all completed, from pool
19/11/19 12:10:11 INFO DAGScheduler: ResultStage 133 (foreach at
BoundedDataset.java:124) finished in 0.861 s
19/11/19 12:10:11 INFO DAGScheduler: Job 47 finished: foreach at
BoundedDataset.java:124, took 4.319864 s
19/11/19 12:10:11 INFO SparkPipelineRunner: Job
test_windowing_1574165406.49_188c0f80-1ada-4fa8-9544-3a1320d311e6 finished.
19/11/19 12:10:11 WARN SparkPipelineResult$BatchMode: Collecting monitoring
infos is not implemented yet in Spark portable runner.
19/11/19 12:10:11 INFO AbstractArtifactRetrievalService: Manifest at
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/MANIFEST has 0
artifact locations
19/11/19 12:10:11 INFO BeamFileSystemArtifactStagingService: Removed dir
/tmp/sparktestQ4BvfR/job_2eacb0f1-70cc-44ad-84a7-9dd6d7e00cd2/
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
.
======================================================================
ERROR: test_pardo_state_with_custom_key_coder (__main__.SparkRunnerTest)
Tests that state requests work correctly when the key coder is an
----------------------------------------------------------------------
Traceback (most recent call last):
File "apache_beam/runners/portability/portable_runner_test.py", line 231, in
test_pardo_state_with_custom_key_coder
equal_to(expected))
File "apache_beam/pipeline.py", line 436, in __exit__
self.run().wait_until_finish()
File "apache_beam/runners/portability/portable_runner.py", line 428, in
wait_until_finish
for state_response in self._state_stream:
File
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",>
line 395, in next
return self._next()
File
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",>
line 552, in _next
_common.wait(self._state.condition.wait, _response_ready)
File
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py",>
line 140, in wait
_wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
File
"<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py",>
line 105, in _wait_once
wait_fn(timeout=timeout)
File "/usr/lib/python2.7/threading.py", line 359, in wait
_sleep(delay)
File "apache_beam/runners/portability/portable_runner_test.py", line 75, in
handler
raise BaseException(msg)
BaseException: Timed out after 60 seconds.
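The BaseException above comes from the test suite's own watchdog (the
handler frame at portable_runner_test.py line 75), not from gRPC; the
same watchdog is what prints the "====" banners and "# Thread:" stack
dumps interleaved in the tracebacks below. A minimal sketch of such a
watchdog, assuming a Unix-only SIGALRM approach (illustrative, not
Beam's exact implementation):

    import signal
    import sys
    import threading
    import traceback

    TIMEOUT_SECS = 60  # matches the 60-second timeout in this log

    def handler(signum, frame):
        msg = 'Timed out after %d seconds.' % TIMEOUT_SECS
        print('=' * 20 + ' ' + msg + ' ' + '=' * 20)
        frames = sys._current_frames()  # thread id -> current stack frame
        for t in threading.enumerate():
            print('# Thread: %s' % t)
            if t.ident in frames:
                traceback.print_stack(frames[t.ident])
        # BaseException so a broad `except Exception` cannot swallow it.
        raise BaseException(msg)

    signal.signal(signal.SIGALRM, handler)  # Unix-only
    signal.alarm(TIMEOUT_SECS)  # cancel with signal.alarm(0) on success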
======================================================================
ERROR: test_pardo_timers (__main__.SparkRunnerTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "apache_beam/runners/portability/fn_api_runner_test.py", line 328, in
test_pardo_timers
assert_that(actual, equal_to(expected))
File "apache_beam/pipeline.py", line 436, in __exit__
self.run().wait_until_finish()
File "apache_beam/runners/portability/portable_runner.py", line 428, in
wait_until_finish
for state_response in self._state_stream:
File
"https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",
line 395, in next
return self._next()
File
"https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",
line 552, in _next
_common.wait(self._state.condition.wait, _response_ready)
File
"https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py",
line 140, in wait
_wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
File
"https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_common.py",
line 105, in _wait_once
wait_fn(timeout=timeout)
File "/usr/lib/python2.7/threading.py", line 359, in wait
_sleep(delay)
File "apache_beam/runners/portability/portable_runner_test.py", line 75, in
handler
raise BaseException(msg)
BaseException: Timed out after 60 seconds.
==================== Timed out after 60 seconds. ====================
# Thread: <Thread(wait_until_finish_read, started daemon 140690732295936)>
# Thread: <Thread(Thread-117, started daemon 140690723903232)>
# Thread: <_MainThread(MainThread, started 140691520108288)>
==================== Timed out after 60 seconds. ====================
======================================================================
ERROR: test_sdf_with_watermark_tracking (__main__.SparkRunnerTest)
----------------------------------------------------------------------
Traceback (most recent call last):
# Thread: <Thread(wait_until_finish_read, started daemon 140690243712768)>
File "apache_beam/runners/portability/fn_api_runner_test.py", line 499, in
test_sdf_with_watermark_tracking
assert_that(actual, equal_to(lis# Thread: <Thread(Thread-123, started
daemon 140690235320064)>
# Thread: <_MainThread(MainThread, started 140691520108288)>
t(''.join(data))))
File "apache_beam/pipeline.py", line 436, in __exit__
self.run().wait_until_finish()
File "apache_beam/runners/portability/portable_runner.py", line 438, in
wait_until_finish
self._job_id, self._state, self._last_error_message()))
RuntimeError: Pipeline
test_sdf_with_watermark_tracking_1574165397.18_d55c3a48-ad70-4e96-853b-ff6906bbce29
failed in state FAILED: java.lang.UnsupportedOperationException: The
ActiveBundle does not have a registered bundle checkpoint handler.
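One note on why all three failures surface in pipeline.py's __exit__
rather than at the assertion line itself: assert_that only attaches a
verification transform to the pipeline, so the check (and any hang or
runner error) materializes inside run().wait_until_finish(). A
self-contained sketch of the pattern, using illustrative values on the
default DirectRunner:

    import apache_beam as beam
    from apache_beam.testing.util import assert_that, equal_to

    # Exiting the `with` block calls self.run().wait_until_finish(),
    # which is where a failed or timed-out check is reported.
    with beam.Pipeline() as p:
        actual = p | beam.Create(['a', 'b', 'c'])
        assert_that(actual, equal_to(['a', 'b', 'c']))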
----------------------------------------------------------------------
Ran 38 tests in 299.689s
FAILED (errors=3, skipped=9)
> Task :sdks:python:test-suites:portable:py2:sparkValidatesRunner FAILED
FAILURE: Build failed with an exception.
* Where:
Build file
'<https://builds.apache.org/job/beam_PostCommit_Python_VR_Spark/ws/src/sdks/python/test-suites/portable/py2/build.gradle'>
line: 198
* What went wrong:
Execution failed for task
':sdks:python:test-suites:portable:py2:sparkValidatesRunner'.
> Process 'command 'sh'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug
option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with
Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See
https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 7m 40s
59 actionable tasks: 46 executed, 13 from cache
Publishing build scan...
https://gradle.com/s/qgxv2ufdcy6py
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]