See 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/325/display/redirect?page=changes>

Changes:

[noreply] Bump github.com/fsouza/fake-gcs-server from 1.45.2 to 1.46.0 in /sdks


------------------------------------------
[...truncated 660.93 KB...]
  File "apache_beam/runners/common.py", line 1454, in 
apache_beam.runners.common.DoFnRunner.process_with_sized_restriction
  File "apache_beam/runners/common.py", line 819, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 983, in 
apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1513, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 625, in 
apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1513, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 625, in 
apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1513, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 625, in 
apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1533, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 839, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 985, in 
apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 916, in process
    return self._run_inference(batch, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 885, in _run_inference
    raise e
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 881, in _run_inference
    result_generator = self._model_handler.run_inference(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 488, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 432, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 280, in run_inference
    keys, self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 317, in run_inference
    return self._inference_fn(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 150, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 135, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call, so 
the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']

      Worker ID: benchmark-tests-pytorch-i-07310755-xj4q-harness-r29j
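
Note: the RuntimeError above is raised from _convert_to_device in pytorch_inference.py when the batched tensors are moved onto the GPU. As the error text itself says, CUDA kernel errors can surface asynchronously at a later API call, so a minimal standalone repro run with CUDA_LAUNCH_BLOCKING=1 can pin the failure to the exact call. The sketch below is only an illustration of that debugging step (the batch shape is an assumption, and it runs outside Beam, not inside the benchmark job):

    # Minimal sketch: reproduce the tensor-to-device move with synchronous CUDA launches.
    import os
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialized

    import torch

    def convert_to_device(examples, device):
        # Mirrors the examples.to(device) call in pytorch_inference._convert_to_device
        # that raised the "misaligned address" error above.
        return examples.to(device)

    if __name__ == "__main__":
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        batch = torch.randn(16, 3, 224, 224)  # assumed image-classification batch shape
        print(convert_to_device(batch, device).device)
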
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-07-31T15:25:06.253Z: 
JOB_MESSAGE_DETAILED: Cleaning up.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-07-31T15:25:06.417Z: 
JOB_MESSAGE_DEBUG: Starting **** pool teardown.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-07-31T15:25:06.441Z: 
JOB_MESSAGE_BASIC: Stopping **** pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-07-31T15:27:30.039Z: 
JOB_MESSAGE_DETAILED: Autoscaling: Resized **** pool from 28 to 0.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-07-31T15:27:30.072Z: 
JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-07-31T15:27:30.092Z: 
JOB_MESSAGE_DEBUG: Tearing down pending resources...
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-07-31_07_55_33-4390270479772149698 is in state JOB_STATE_FAILED
ERROR:apache_beam.runners.dataflow.dataflow_runner:Console URL: 
https://console.cloud.google.com/dataflow/jobs/<RegionId>/2023-07-31_07_55_33-4390270479772149698?project=<ProjectId>
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py>", line 68, in <module>
    PytorchVisionBenchmarkTest().run()
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/load_test.py>", line 148, in run
    self.test()
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py>", line 58, in test
    self.result = pytorch_image_classification.run(
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/examples/inference/pytorch_image_classification.py>", line 166, in run
    result.wait_until_finish()
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/runners/dataflow/dataflow_runner.py>", line 756, in wait_until_finish
    raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow 
pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 839, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 985, in 
apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 916, in process
    return self._run_inference(batch, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 885, in _run_inference
    raise e
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 881, in _run_inference
    result_generator = self._model_handler.run_inference(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 488, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 432, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 280, in run_inference
    keys, self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 317, in run_inference
    return self._inference_fn(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 150, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 135, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call, so 
the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py",
 line 297, in _execute
    response = task()
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py",
 line 372, in <lambda>
    lambda: self.create_****().do_instruction(request), request)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py",
 line 625, in do_instruction
    return getattr(self, request_type)(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py",
 line 663, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/bundle_processor.py",
 line 1040, in process_bundle
    input_op_by_transform_id[element.transform_id].process_encoded(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/bundle_processor.py",
 line 232, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/****/operations.py", line 568, in 
apache_beam.runners.****.operations.Operation.output
  File "apache_beam/runners/****/operations.py", line 570, in 
apache_beam.runners.****.operations.Operation.output
  File "apache_beam/runners/****/operations.py", line 261, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 1065, in 
apache_beam.runners.****.operations.SdfProcessSizedElements.process
  File "apache_beam/runners/****/operations.py", line 1074, in 
apache_beam.runners.****.operations.SdfProcessSizedElements.process
  File "apache_beam/runners/common.py", line 1454, in 
apache_beam.runners.common.DoFnRunner.process_with_sized_restriction
  File "apache_beam/runners/common.py", line 819, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 983, in 
apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1513, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 625, in 
apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1513, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 625, in 
apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1513, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 625, in 
apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1607, in 
apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1720, in 
apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/****/operations.py", line 264, in 
apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in 
apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1533, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in 
apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 839, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 985, in 
apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 916, in process
    return self._run_inference(batch, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 885, in _run_inference
    raise e
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 881, in _run_inference
    result_generator = self._model_handler.run_inference(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 488, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 432, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py",
 line 280, in run_inference
    keys, self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 317, in run_inference
    return self._inference_fn(
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 150, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File 
"/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py",
 line 135, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: misaligned address
CUDA kernel errors might be asynchronously reported at some other API call, so 
the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']
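
Note: the failing step 'PyTorchRunInference/BeamML_RunInference' comes from a keyed PyTorch RunInference transform, and the traceback shows the batch being handed to PytorchModelHandlerTensor and moved to the GPU. A minimal sketch of that kind of pipeline is shown below for context only; the model path, model class, and input batch are assumptions for illustration, not the benchmark's actual configuration:

    # Illustrative sketch of a keyed PyTorch RunInference pipeline on GPU.
    import apache_beam as beam
    import torch
    from apache_beam.ml.inference.base import KeyedModelHandler, RunInference
    from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
    from torchvision import models

    model_handler = KeyedModelHandler(
        PytorchModelHandlerTensor(
            state_dict_path="gs://my-bucket/resnet101.pth",  # hypothetical weights path
            model_class=models.resnet101,
            model_params={"num_classes": 1000},
            device="GPU",  # batches are moved to CUDA in _convert_to_device
        ))

    with beam.Pipeline() as p:
        _ = (
            p
            | beam.Create([("img-0", torch.randn(3, 224, 224))])
            | "PyTorchRunInference" >> RunInference(model_handler)
            | beam.Map(print))
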


> Task :sdks:python:apache_beam:testing:load_tests:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file '<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/build.gradle>' line: 63

* What went wrong:
Execution failed for task ':sdks:python:apache_beam:testing:load_tests:run'.
> error occurred

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/7.6.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 36m 47s
15 actionable tasks: 4 executed, 11 up-to-date

Publishing build scan...

Publishing failed.

The response from https://ge.apache.org/scans/publish/gradle/3.13.2/token was 
not from Gradle Enterprise.
The specified server address may be incorrect, or your network environment may 
be interfering.

Please report this problem to your Gradle Enterprise administrator via 
https://ge.apache.org/help and include the following via copy/paste:

----------
Gradle version: 7.6.2
Plugin version: 3.13.2
Request URL: https://ge.apache.org/scans/publish/gradle/3.13.2/token
Request ID: e6953b61-4ead-4ab9-8058-4a0397a802df
Response status code: 502
Response content type: text/html
----------

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
