See 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/269/display/redirect?page=changes>

Changes:

[noreply] Add new Beam Python SDK examples (#26671)


------------------------------------------
[...truncated 41.04 KB...]
> Task :sdks:python:apache_beam:testing:load_tests:run
Collecting torch>=1.7.1 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached torch-2.0.1-cp38-cp38-manylinux1_x86_64.whl (619.9 MB)
Collecting torchvision>=0.8.2 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19))
  Using cached torchvision-0.15.2-cp38-cp38-manylinux1_x86_64.whl (33.8 MB)
Collecting pillow>=8.0.0 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 20))
  Using cached Pillow-9.5.0-cp38-cp38-manylinux_2_28_x86_64.whl (3.4 MB)
Collecting transformers>=4.18.0 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached transformers-4.29.2-py3-none-any.whl (7.1 MB)
Requirement already satisfied: filelock in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt 
(line 18)) (3.12.0)
Requirement already satisfied: typing-extensions in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt 
(line 18)) (4.6.3)
Collecting sympy (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting networkx (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting nvidia-cuda-nvrtc-cu11==11.7.99 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl 
(21.0 MB)
Collecting nvidia-cuda-runtime-cu11==11.7.99 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_runtime_cu11-11.7.99-py3-none-manylinux1_x86_64.whl 
(849 kB)
Collecting nvidia-cuda-cupti-cu11==11.7.101 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_cupti_cu11-11.7.101-py3-none-manylinux1_x86_64.whl 
(11.8 MB)
Collecting nvidia-cudnn-cu11==8.5.0.96 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl 
(557.1 MB)
Collecting nvidia-cublas-cu11==11.10.3.66 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl 
(317.1 MB)
Collecting nvidia-cufft-cu11==10.9.0.58 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cufft_cu11-10.9.0.58-py3-none-manylinux1_x86_64.whl 
(168.4 MB)
Collecting nvidia-curand-cu11==10.2.10.91 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_curand_cu11-10.2.10.91-py3-none-manylinux1_x86_64.whl 
(54.6 MB)
Collecting nvidia-cusolver-cu11==11.4.0.1 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusolver_cu11-11.4.0.1-2-py3-none-manylinux1_x86_64.whl 
(102.6 MB)
Collecting nvidia-cusparse-cu11==11.7.4.91 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusparse_cu11-11.7.4.91-py3-none-manylinux1_x86_64.whl 
(173.2 MB)
Collecting nvidia-nccl-cu11==2.14.3 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nccl_cu11-2.14.3-py3-none-manylinux1_x86_64.whl (177.1 MB)
Collecting nvidia-nvtx-cu11==11.7.91 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nvtx_cu11-11.7.91-py3-none-manylinux1_x86_64.whl (98 kB)
Collecting triton==2.0.0 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
triton-2.0.0-1-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (63.2 
MB)
Requirement already satisfied: setuptools in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from nvidia-cublas-cu11==11.10.3.66->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18)) (67.8.0)
Requirement already satisfied: wheel in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from nvidia-cublas-cu11==11.10.3.66->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18)) (0.40.0)
Collecting cmake (from triton==2.0.0->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
cmake-3.26.3-py2.py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (24.0 
MB)
Collecting lit (from triton==2.0.0->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached lit-16.0.5.post0-py3-none-any.whl
Requirement already satisfied: numpy in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.24.3)
Requirement already satisfied: requests in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2.31.0)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
Requirement already satisfied: packaging>=20.0 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (23.1)
Requirement already satisfied: pyyaml>=5.1 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (6.0)
Requirement already satisfied: regex!=2019.12.17 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (2023.6.3)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached 
tokenizers-0.13.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 
MB)
Collecting tqdm>=4.27 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting fsspec (from huggingface-hub<1.0,>=0.14.1->transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached fsspec-2023.5.0-py3-none-any.whl (160 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 
kB)
Requirement already satisfied: charset-normalizer<4,>=2 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2023.5.7)
Collecting mpmath>=0.19 (from sympy->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: tokenizers, mpmath, lit, cmake, tqdm, sympy, 
pillow, nvidia-nvtx-cu11, nvidia-nccl-cu11, nvidia-cusparse-cu11, 
nvidia-curand-cu11, nvidia-cufft-cu11, nvidia-cuda-runtime-cu11, 
nvidia-cuda-nvrtc-cu11, nvidia-cuda-cupti-cu11, nvidia-cublas-cu11, networkx, 
MarkupSafe, fsspec, nvidia-cusolver-cu11, nvidia-cudnn-cu11, jinja2, 
huggingface-hub, transformers, triton, torch, torchvision
Successfully installed MarkupSafe-2.1.3 cmake-3.26.3 fsspec-2023.5.0 
huggingface-hub-0.15.1 jinja2-3.1.2 lit-16.0.5.post0 mpmath-1.3.0 networkx-3.1 
nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 
nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 
nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 
nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 
nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 
pillow-9.5.0 sympy-1.12 tokenizers-0.13.3 torch-2.0.1 torchvision-0.15.2 
tqdm-4.65.0 transformers-4.29.2 triton-2.0.0
INFO:root:Device is set to CPU
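
The "Device is set to CPU" line above is emitted while the benchmark configures its PyTorch model handler. A rough, hedged sketch of that kind of setup (the state-dict path, model choice, and parameters below are placeholders, not the benchmark's actual values):

import torch
import torchvision.models as models
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor

# Hedged sketch: pick the device the handler runs inference on, matching the
# "Device is set to CPU" log line above. The path and model are assumptions.
device = 'GPU' if torch.cuda.is_available() else 'CPU'

model_handler = PytorchModelHandlerTensor(
    state_dict_path='gs://<your-bucket>/models/resnet101.pth',  # placeholder
    model_class=models.resnet101,
    model_params={'num_classes': 1000},
    device=device)
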
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:apache_beam.runners.portability.stager:Executing command: 
['<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/bin/python>',
 '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 
'/tmp/tmp8u6gtez0/tmp_requirements.txt', '--exists-action', 'i', '--no-deps', 
'--implementation', 'cp', '--abi', 'cp38', '--platform', 'manylinux2014_x86_64']
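
The stager command logged above pre-downloads the pipeline's extra requirements into a local cache that is then staged to GCS for the Dataflow workers. A hedged sketch that mirrors the same invocation from Python (the requirements file path is a placeholder):

import subprocess
import sys

# Hedged sketch mirroring the logged stager command: download cp38 /
# manylinux2014 wheels (no dependencies) into the Dataflow requirements cache.
subprocess.check_call([
    sys.executable, '-m', 'pip', 'download',
    '--dest', '/tmp/dataflow-requirements-cache',
    '-r', 'requirements.txt',  # placeholder: your pipeline's requirements file
    '--exists-action', 'i',
    '--no-deps',
    '--implementation', 'cp',
    '--abi', 'cp38',
    '--platform', 'manylinux2014_x86_64',
])
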
INFO:apache_beam.runners.portability.stager:Copying Beam SDK 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/build/apache-beam.tar.gz";>
 to staging location.
INFO:root:Using provided Python SDK container image: 
gcr.io/cloud-dataflow/v1beta3/python38:beam-master-20230601
INFO:root:Python SDK container image set to 
"gcr.io/cloud-dataflow/v1beta3/python38:beam-master-20230601" for Docker 
environment
INFO:apache_beam.runners.portability.fn_api_runner.translations:====================
 <function pack_combiners at 0x7fbf724c29d0> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:====================
 <function sort_stages at 0x7fbf724c31f0> ====================
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/requirements.txt...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/requirements.txt
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit-learn-1.0.2.tar.gz...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit-learn-1.0.2.tar.gz
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/mock-2.0.0-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/mock-2.0.0-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/seaborn-0.12.2-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/seaborn-0.12.2-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/PyHamcrest-1.10.1-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/PyHamcrest-1.10.1-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/transformers-4.29.0-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/transformers-4.29.0-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/transformers-4.29.1-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/transformers-4.29.1-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/transformers-4.29.2-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/transformers-4.29.2-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/inflection-0.5.1-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/inflection-0.5.1-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/beautifulsoup4-4.12.2-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/beautifulsoup4-4.12.2-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/parameterized-0.7.5-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/parameterized-0.7.5-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/torch-2.0.1-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/torch-2.0.1-cp38-cp38-manylinux1_x86_64.whl
 in 44 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/torchvision-0.15.2-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/torchvision-0.15.2-cp38-cp38-manylinux1_x86_64.whl
 in 2 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/Pillow-9.5.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/Pillow-9.5.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/matplotlib-3.7.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/dataflow_python_sdk.tar...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/dataflow_python_sdk.tar
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/pipeline.pb...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/pipeline.pb
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 clientRequestId: '20230605175917089483-5980'
 createTime: '2023-06-05T18:00:16.432392Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2023-06-05_11_00_14-1504634665734342231'
 location: 'us-central1'
 name: 'benchmark-tests-pytorch-imagenet-python0605155620'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2023-06-05T18:00:16.432392Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_BATCH, 1)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: 
[2023-06-05_11_00_14-1504634665734342231]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 
2023-06-05_11_00_14-1504634665734342231
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow 
monitoring console, please navigate to 
https://console.cloud.google.com/dataflow/jobs/us-central1/2023-06-05_11_00_14-1504634665734342231?project=apache-beam-testing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-06-05_11_00_14-1504634665734342231 is in state JOB_STATE_PENDING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:14.386Z: 
JOB_MESSAGE_BASIC: Dataflow Runner V2 auto-enabled. Use 
--experiments=disable_runner_v2 to opt out.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:16.910Z: 
JOB_MESSAGE_BASIC: The pipeline is using shuffle service with a (boot) 
persistent disk size / type other than the default. If that configuration was 
intended solely to speed up the non-service shuffle, consider removing it to 
reduce costs as those disks are unused by the shuffle service.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:19.423Z: 
JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-a.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.434Z: 
JOB_MESSAGE_DETAILED: Expanding SplittableParDo operations into optimizable 
parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.466Z: 
JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton operations into 
optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.553Z: 
JOB_MESSAGE_DETAILED: Expanding CoGroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.587Z: 
JOB_MESSAGE_DEBUG: Combiner lifting skipped for step 
WriteOutputToGCS/Write/WriteImpl/GroupByKey: GroupByKey not followed by a 
combiner.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.622Z: 
JOB_MESSAGE_DETAILED: Expanding GroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.659Z: 
JOB_MESSAGE_DEBUG: Annotating graph with Autotuner information.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.698Z: 
JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read, Write, and Flatten operations
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.734Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/InitializeWrite into 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.767Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3634>) into 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.792Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode) into 
WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3634>)
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.830Z: 
JOB_MESSAGE_DETAILED: Fusing consumer ReadImageNames/Read/Map(<lambda at 
iobase.py:908>) into ReadImageNames/Read/Impulse
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.861Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction
 into ReadImageNames/Read/Map(<lambda at iobase.py:908>)
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.888Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
 into 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.915Z: 
JOB_MESSAGE_DETAILED: Fusing consumer FilterEmptyLines into 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.946Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
PyTorchRunInference/BeamML_RunInference_Preprocess-0 into FilterEmptyLines
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:20.986Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn) into 
PyTorchRunInference/BeamML_RunInference_Preprocess-0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.020Z: 
JOB_MESSAGE_DETAILED: Fusing consumer PyTorchRunInference/BeamML_RunInference 
into PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.046Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
PyTorchRunInference/BeamML_RunInference_Postprocess-0 into 
PyTorchRunInference/BeamML_RunInference
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.084Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/Map(<lambda at iobase.py:1140>) into 
PyTorchRunInference/BeamML_RunInference_Postprocess-0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.114Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn) into 
WriteOutputToGCS/Write/WriteImpl/Map(<lambda at iobase.py:1140>)
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.147Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write into 
WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.179Z: 
JOB_MESSAGE_DETAILED: Fusing consumer 
WriteOutputToGCS/Write/WriteImpl/WriteBundles into 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Read
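
The fused step names above (ReadImageNames, FilterEmptyLines, PyTorchRunInference, WriteOutputToGCS) suggest a pipeline shaped roughly like the hedged sketch below; the GCS paths are placeholders, the model handler is assumed to be configured as in the earlier sketch, and the real benchmark also preprocesses each image name into a tensor before RunInference:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.ml.inference.base import RunInference

# Hedged sketch of the pipeline shape implied by the fused stages above.
model_handler = ...  # assumed: a PytorchModelHandlerTensor as sketched earlier

with beam.Pipeline(options=PipelineOptions()) as p:
    _ = (
        p
        | 'ReadImageNames' >> beam.io.ReadFromText('gs://<bucket>/image_names.txt')
        | 'FilterEmptyLines' >> beam.Filter(lambda line: len(line.strip()) > 0)
        | 'PyTorchRunInference' >> RunInference(model_handler)
        | 'WriteOutputToGCS' >> beam.io.WriteToText('gs://<bucket>/results'))
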
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.226Z: 
JOB_MESSAGE_DEBUG: Workflow config is missing a default resource spec.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.254Z: 
JOB_MESSAGE_DEBUG: Adding StepResource setup and teardown to workflow graph.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.285Z: 
JOB_MESSAGE_DEBUG: Adding workflow start and stop steps.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.318Z: 
JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.458Z: 
JOB_MESSAGE_DEBUG: Executing wait step start19
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.529Z: 
JOB_MESSAGE_BASIC: Executing operation 
ReadImageNames/Read/Impulse+ReadImageNames/Read/Map(<lambda at 
iobase.py:908>)+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-06-05_11_00_14-1504634665734342231 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.559Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3634>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.573Z: 
JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:21.609Z: 
JOB_MESSAGE_BASIC: Starting 75 workers in us-central1-a...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:00:47.551Z: 
JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric 
descriptors, so new user metrics of the form custom.googleapis.com/* will not 
be created. However, all user metrics are also available in the metric 
dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, 
you can delete old / unused metric descriptors. See 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
 and 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:01:16.671Z: 
JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 73 based on 
the rate of progress in the currently running stage(s).
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:01:16.705Z: 
JOB_MESSAGE_DETAILED: Autoscaling: Resized worker pool to 73, though goal was 75. 
 This could be a quota issue.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:01:26.441Z: 
JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 75 based on 
the rate of progress in the currently running stage(s).
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T18:01:38.532Z: 
JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.160Z: 
JOB_MESSAGE_ERROR: Workflow failed. Causes: The Dataflow job appears to be 
stuck because no worker activity has been seen in the last 1h. Please check the 
worker logs in Stackdriver Logging. You can also get help with Cloud Dataflow at 
https://cloud.google.com/dataflow/support.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.249Z: 
JOB_MESSAGE_BASIC: Cancel request is committed for workflow job: 
2023-06-05_11_00_14-1504634665734342231.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.269Z: 
JOB_MESSAGE_ERROR: The Dataflow job appears to be stuck because no worker 
activity has been seen in the last 1h. Please check the worker logs in 
Stackdriver Logging. You can also get help with Cloud Dataflow at 
https://cloud.google.com/dataflow/support.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.332Z: 
JOB_MESSAGE_WARNING: Unable to delete temp files: 
"gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/dax-tmp-2023-06-05_11_00_14-1504634665734342231-S01-0-255fce0cb172e9cb/[email protected]."
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.334Z: 
JOB_MESSAGE_WARNING: Unable to delete temp files: 
"gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/dax-tmp-2023-06-05_11_00_14-1504634665734342231-S04-0-e309cf48a93ea1d0/[email protected]."
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.342Z: 
JOB_MESSAGE_WARNING: Unable to delete temp files: 
"gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0605155620.1685987957.088359/dax-tmp-2023-06-05_11_00_14-1504634665734342231-S01-1-b67e6e2d8dcc930c/[email protected]."
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.361Z: 
JOB_MESSAGE_WARNING: 
S01:WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3634>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
 failed.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.361Z: 
JOB_MESSAGE_WARNING: 
S04:ReadImageNames/Read/Impulse+ReadImageNames/Read/Map(<lambda at 
iobase.py:908>)+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
 failed.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.387Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3634>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:22.387Z: 
JOB_MESSAGE_BASIC: Finished operation 
ReadImageNames/Read/Impulse+ReadImageNames/Read/Map(<lambda at 
iobase.py:908>)+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:23.225Z: 
JOB_MESSAGE_DETAILED: Cleaning up.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:23.270Z: 
JOB_MESSAGE_DEBUG: Starting worker pool teardown.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:00:23.294Z: 
JOB_MESSAGE_BASIC: Stopping worker pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:02:49.617Z: 
JOB_MESSAGE_DETAILED: Autoscaling: Resized worker pool from 75 to 0.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:02:49.649Z: 
JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-06-05T19:02:49.670Z: 
JOB_MESSAGE_DEBUG: Tearing down pending resources...
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-06-05_11_00_14-1504634665734342231 is in state JOB_STATE_FAILED
ERROR:apache_beam.runners.dataflow.dataflow_runner:Console URL: 
https://console.cloud.google.com/dataflow/jobs/<RegionId>/2023-06-05_11_00_14-1504634665734342231?project=<ProjectId>
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py";,>
 line 68, in <module>
    PytorchVisionBenchmarkTest().run()
  File 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/load_test.py";,>
 line 148, in run
    self.test()
  File 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py";,>
 line 58, in test
    self.result = pytorch_image_classification.run(
  File 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/examples/inference/pytorch_image_classification.py";,>
 line 166, in run
    result.wait_until_finish()
  File 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/runners/dataflow/dataflow_runner.py";,>
 line 1563, in wait_until_finish
    raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow 
pipeline failed. State: FAILED, Error:
The Dataflow job appears to be stuck because no worker activity has been seen in 
the last 1h. Please check the worker logs in Stackdriver Logging. You can also 
get help with Cloud Dataflow at https://cloud.google.com/dataflow/support.
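
The DataflowRuntimeException above is raised by result.wait_until_finish() once the job reaches JOB_STATE_FAILED. A hedged sketch of catching it to report the terminal state before re-raising (the surrounding benchmark code that produces `result` is assumed, not shown in this log):

from apache_beam.runners.dataflow.dataflow_runner import DataflowRuntimeException

# Hedged sketch: `result` is the PipelineResult returned by pipeline.run().
try:
    result.wait_until_finish()
except DataflowRuntimeException:
    # Report the terminal job state (FAILED here) before propagating the error.
    print('Dataflow job finished in state: %s' % result.state)
    raise
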

> Task :sdks:python:apache_beam:testing:load_tests:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file 
'<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/build.gradle>'
 line: 63

* What went wrong:
Execution failed for task ':sdks:python:apache_beam:testing:load_tests:run'.
> error occurred

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1h 7m 17s
15 actionable tasks: 9 executed, 4 from cache, 2 up-to-date

Publishing build scan...
https://ge.apache.org/s/pglraxumjhq5g

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

