See <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/333/display/redirect>

Changes:


------------------------------------------
[...truncated 660.68 KB...]
  File "apache_beam/runners/common.py", line 1533, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 839, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 985, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 991, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 960, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 956, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 489, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 433, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 281, in run_inference
    keys, self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 317, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 150, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 135, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']

      Worker ID: benchmark-tests-pytorch-i-08071239-6v6n-harness-wrr0,

      Root cause: Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1423, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 839, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 985, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 991, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 960, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 956, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 489, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 433, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 281, in run_inference
    keys, self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 317, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 150, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 135, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 297, in _execute
    response = task()
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 372, in <lambda>
    lambda: self.create_****().do_instruction(request), request)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 625, in do_instruction
    return getattr(self, request_type)(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 663, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/bundle_processor.py", line 1046, in process_bundle
    op.finish()
  File "apache_beam/runners/****/operations.py", line 983, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/****/operations.py", line 986, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/****/operations.py", line 987, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/common.py", line 1501, in apache_beam.runners.common.DoFnRunner.finish
  File "apache_beam/runners/common.py", line 1482, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
  File "apache_beam/runners/common.py", line 1513, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1480, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
  File "apache_beam/runners/common.py", line 563, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
  File "apache_beam/runners/common.py", line 568, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
  File "apache_beam/runners/common.py", line 1771, in apache_beam.runners.common._OutputHandler.finish_bundle_outputs
  File "apache_beam/runners/****/operations.py", line 264, in apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 951, in apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 952, in apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1425, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1533, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1423, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 839, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 985, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 991, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 960, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 956, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 489, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 433, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 281, in run_inference
    keys, self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 317, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 150, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 135, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']

      Worker ID: benchmark-tests-pytorch-i-08071239-6v6n-harness-wrr0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-07T20:04:05.169Z: JOB_MESSAGE_BASIC: Stopping **** pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-07T20:06:26.912Z: JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2023-08-07_12_39_55-15561009488554470429 is in state JOB_STATE_FAILED
ERROR:apache_beam.runners.dataflow.dataflow_runner:Console URL: https://console.cloud.google.com/dataflow/jobs/<RegionId>/2023-08-07_12_39_55-15561009488554470429?project=<ProjectId>
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py", line 68, in <module>
    PytorchVisionBenchmarkTest().run()
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/load_test.py", line 148, in run
    self.test()
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py", line 58, in test
    self.result = pytorch_image_classification.run(
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/examples/inference/pytorch_image_classification.py", line 166, in run
    result.wait_until_finish()
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/runners/dataflow/dataflow_runner.py", line 766, in wait_until_finish
    raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
  [worker traceback identical to the Root cause shown above]
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']
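The traceback bottoms out at `examples.to(device)` inside `_convert_to_device`, and the error text itself suggests `CUDA_LAUNCH_BLOCKING=1` so the stack trace points at the truly failing kernel. A minimal sketch of how one might act on that hint while triaging locally; `convert_to_device_with_fallback` is a hypothetical wrapper for illustration, not Beam's actual `_convert_to_device`:

```python
import os

# Per the hint in the log: set this before any CUDA work so kernel launches
# run synchronously and errors are reported at the call that caused them.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

def convert_to_device_with_fallback(examples, device):
    """Hypothetical defensive variant of the failing `examples.to(device)` call:
    try the requested device, and fall back to CPU on a CUDA RuntimeError so
    a poisoned CUDA context degrades throughput instead of failing the bundle."""
    try:
        return examples.to(device)
    except RuntimeError:
        # e.g. "CUDA error: an illegal memory access was encountered"
        return examples.to("cpu")
```

Note that on Dataflow the environment variable would have to be set inside the worker process (for example via a custom container image), not on the launcher that submits the job.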


> Task :sdks:python:apache_beam:testing:load_tests:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file 'https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/build.gradle' line: 63

* What went wrong:
Execution failed for task ':sdks:python:apache_beam:testing:load_tests:run'.
> error occurred

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.6.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 31m 11s
15 actionable tasks: 4 executed, 11 up-to-date

Publishing build scan...
https://ge.apache.org/s/6ioj6ddoctpb4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
