See 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/410/display/redirect>

Changes:


------------------------------------------
[...truncated 734.96 KB...]
  File "apache_beam/runners/common.py", line 1437, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1547, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 851, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1423, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1396, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1391, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 966, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 910, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 594, in run_inference
    self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 318, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 151, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 136, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']
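The RuntimeError above carries PyTorch's standard debugging hint: with CUDA_LAUNCH_BLOCKING=1 set before torch initializes CUDA, kernel launches run synchronously, so the illegal-memory-access error is raised at the faulting kernel rather than at a later, unrelated call such as `examples.to(device)`. The sketch below only demonstrates exporting the flag into a child process's environment; the actual benchmark command is not shown in this log and the relaunch target here is a stand-in.

```python
import os
import subprocess
import sys

# With CUDA_LAUNCH_BLOCKING=1, CUDA kernel launches are synchronous, so
# "illegal memory access" surfaces at the kernel that faulted instead of
# at a later API call like Tensor.to(). It must be set before torch
# initializes CUDA, i.e. in the environment of the relaunched process.
env = dict(os.environ, CUDA_LAUNCH_BLOCKING="1")

# Stand-in for relaunching the failing pipeline with the flag applied;
# this child just confirms the variable is visible to it.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('CUDA_LAUNCH_BLOCKING'))"],
    env=env,
    capture_output=True,
    text=True,
    check=True,
)
print(child.stdout.strip())  # → 1
```

On Dataflow this would mean baking the variable into the worker container or pipeline environment rather than the submitting shell, since the error occurs on the remote worker.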

      Worker ID: benchmark-tests-pytorch-i-10271304-hdq8-harness-plgl,

      Root cause: Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 851, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1423, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1396, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1391, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 966, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 910, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 594, in run_inference
    self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 318, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 151, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 136, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 297, in _execute
    response = task()
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 372, in <lambda>
    lambda: self.create_****().do_instruction(request), request)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 625, in do_instruction
    return getattr(self, request_type)(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 663, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/bundle_processor.py", line 1062, in process_bundle
    op.finish()
  File "apache_beam/runners/****/operations.py", line 985, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/****/operations.py", line 988, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/****/operations.py", line 989, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/common.py", line 1514, in apache_beam.runners.common.DoFnRunner.finish
  File "apache_beam/runners/common.py", line 1495, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
  File "apache_beam/runners/common.py", line 1526, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1493, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
  File "apache_beam/runners/common.py", line 574, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
  File "apache_beam/runners/common.py", line 579, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
  File "apache_beam/runners/common.py", line 1785, in apache_beam.runners.common._OutputHandler.finish_bundle_outputs
  File "apache_beam/runners/****/operations.py", line 266, in apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 953, in apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 954, in apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1437, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1547, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 851, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1423, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1396, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1391, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 966, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 910, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 594, in run_inference
    self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 318, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 151, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 136, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']

      Worker ID: benchmark-tests-pytorch-i-10271304-hdq8-harness-plgl
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-27T20:31:45.746Z: JOB_MESSAGE_BASIC: Stopping **** pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-27T20:34:15.531Z: JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2023-10-27_13_04_49-9059036859726284558 is in state JOB_STATE_FAILED
ERROR:apache_beam.runners.dataflow.dataflow_runner:Console URL: https://console.cloud.google.com/dataflow/jobs/<RegionId>/2023-10-27_13_04_49-9059036859726284558?project=<ProjectId>
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py>", line 68, in <module>
    PytorchVisionBenchmarkTest().run()
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/load_test.py>", line 148, in run
    self.test()
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py>", line 58, in test
    self.result = pytorch_image_classification.run(
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/examples/inference/pytorch_image_classification.py>", line 166, in run
    result.wait_until_finish()
  File "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/runners/dataflow/dataflow_runner.py>", line 771, in wait_until_finish
    raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 851, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1423, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1396, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1391, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 966, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 910, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 594, in run_inference
    self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 318, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 151, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 136, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 297, in _execute
    response = task()
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 372, in <lambda>
    lambda: self.create_****().do_instruction(request), request)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 625, in do_instruction
    return getattr(self, request_type)(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/sdk_****.py", line 663, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/runners/****/bundle_processor.py", line 1062, in process_bundle
    op.finish()
  File "apache_beam/runners/****/operations.py", line 985, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/****/operations.py", line 988, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/****/operations.py", line 989, in apache_beam.runners.****.operations.DoOperation.finish
  File "apache_beam/runners/common.py", line 1514, in apache_beam.runners.common.DoFnRunner.finish
  File "apache_beam/runners/common.py", line 1495, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
  File "apache_beam/runners/common.py", line 1526, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1493, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
  File "apache_beam/runners/common.py", line 574, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
  File "apache_beam/runners/common.py", line 579, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
  File "apache_beam/runners/common.py", line 1785, in apache_beam.runners.common._OutputHandler.finish_bundle_outputs
  File "apache_beam/runners/****/operations.py", line 266, in apache_beam.runners.****.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/****/operations.py", line 953, in apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/****/operations.py", line 954, in apache_beam.runners.****.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1437, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1547, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1435, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 851, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1423, in process
    return self._run_inference(batch, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1396, in _run_inference
    raise e
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 1391, in _run_inference
    result_generator = self._model_handler.run_inference(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 966, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 910, in run_inference
    return self._base.run_inference(batch, model, inference_args)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/base.py", line 594, in run_inference
    self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 318, in run_inference
    return self._inference_fn(
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 151, in default_tensor_inference_fn
    batched_tensors = _convert_to_device(batched_tensors, device)
  File "/opt/apache/beam-venv/beam-venv-****-sdk-0-0/lib/python3.8/site-packages/apache_beam/ml/inference/pytorch_inference.py", line 136, in _convert_to_device
    examples = examples.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
 [while running 'PyTorchRunInference/BeamML_RunInference-ptransform-73']


> Task :sdks:python:apache_beam:testing:load_tests:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file '<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/build.gradle>' line: 63

* What went wrong:
Execution failed for task ':sdks:python:apache_beam:testing:load_tests:run'.
> error occurred

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Get more help at https://help.gradle.org.

Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

For more on this, please refer to https://docs.gradle.org/8.4/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.

BUILD FAILED in 39m 18s
9 actionable tasks: 4 executed, 5 up-to-date

Publishing build scan...
https://ge.apache.org/s/n6cgefj3dskos

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
