See 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/388/display/redirect>

Changes:


------------------------------------------
[...truncated 69.84 KB...]
Collecting torchvision>=0.8.2 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19))
  Obtaining dependency information for torchvision>=0.8.2 from 
https://files.pythonhosted.org/packages/c9/52/d3f1c4253ad17e4ab08a2230fb184a3a180e2348db6c144c64977335b654/torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl.metadata
  Downloading torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl.metadata (6.6 
kB)
Collecting pillow>=8.0.0 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 20))
  Obtaining dependency information for pillow>=8.0.0 from 
https://files.pythonhosted.org/packages/cd/6d/07566c00ddb116a0eca1a623abda12da81099a6ff3200e5e6b7e2d3c8c2b/Pillow-10.0.1-cp38-cp38-manylinux_2_28_x86_64.whl.metadata
  Using cached Pillow-10.0.1-cp38-cp38-manylinux_2_28_x86_64.whl.metadata (9.5 
kB)
Collecting transformers>=4.18.0 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for transformers>=4.18.0 from 
https://files.pythonhosted.org/packages/1a/d1/3bba59606141ae808017f6fde91453882f931957f125009417b87a281067/transformers-4.34.0-py3-none-any.whl.metadata
  Using cached transformers-4.34.0-py3-none-any.whl.metadata (121 kB)
Requirement already satisfied: filelock in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt 
(line 18)) (3.12.4)
Requirement already satisfied: typing-extensions in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt 
(line 18)) (4.8.0)
Collecting sympy (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting networkx (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting fsspec (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for fsspec from 
https://files.pythonhosted.org/packages/fe/d3/e1aa96437d944fbb9cc95d0316e25583886e9cd9e6adc07baad943524eda/fsspec-2023.9.2-py3-none-any.whl.metadata
  Using cached fsspec-2023.9.2-py3-none-any.whl.metadata (6.7 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl 
(23.7 MB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl 
(823 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl 
(14.1 MB)
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for nvidia-cudnn-cu12==8.9.2.26 from 
https://files.pythonhosted.org/packages/ff/74/a2e2be7fb83aaedec84f391f082cf765dfb635e7caa9b49065f73e4835d8/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata
  Using cached 
nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl 
(410.6 MB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl 
(121.6 MB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl 
(56.5 MB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl 
(124.2 MB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl 
(196.0 MB)
Collecting nvidia-nccl-cu12==2.18.1 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nccl_cu12-2.18.1-py3-none-manylinux1_x86_64.whl (209.8 MB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Collecting triton==2.1.0 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for triton==2.1.0 from 
https://files.pythonhosted.org/packages/72/98/34f43ed68ee6455ea874f749a5515c0600243186301ecd83819d942ce08a/triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
  Downloading 
triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
 (1.3 kB)
Collecting nvidia-nvjitlink-cu12 (from 
nvidia-cusolver-cu12==11.4.5.107->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for nvidia-nvjitlink-cu12 from 
https://files.pythonhosted.org/packages/0a/f8/5193b57555cbeecfdb6ade643df0d4218cc6385485492b6e2f64ceae53bb/nvidia_nvjitlink_cu12-12.2.140-py3-none-manylinux1_x86_64.whl.metadata
  Using cached 
nvidia_nvjitlink_cu12-12.2.140-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: numpy in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.24.4)
Requirement already satisfied: requests in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2.31.0)
Collecting huggingface-hub<1.0,>=0.16.4 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for huggingface-hub<1.0,>=0.16.4 from 
https://files.pythonhosted.org/packages/aa/f3/3fc97336a0e90516901befd4f500f08d691034d387406fdbde85bea827cc/huggingface_hub-0.17.3-py3-none-any.whl.metadata
  Using cached huggingface_hub-0.17.3-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: packaging>=20.0 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (23.2)
Requirement already satisfied: pyyaml>=5.1 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (2023.10.3)
Collecting tokenizers<0.15,>=0.14 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for tokenizers<0.15,>=0.14 from 
https://files.pythonhosted.org/packages/59/15/c60dae8646210e148e8432fbb5a13d1f6fa8cefda6314ff6c4fc0b58b6ec/tokenizers-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Downloading 
tokenizers-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
 (6.7 kB)
Collecting safetensors>=0.3.1 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for safetensors>=0.3.1 from 
https://files.pythonhosted.org/packages/21/12/d95158b4fdd0422faf019038be0be874d7bf3d9f9bd0b1b529f73853cec2/safetensors-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached 
safetensors-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
 (4.7 kB)
Collecting tqdm>=4.27 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for tqdm>=4.27 from 
https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting huggingface-hub<1.0,>=0.16.4 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for huggingface-hub<1.0,>=0.16.4 from 
https://files.pythonhosted.org/packages/7f/c4/adcbe9a696c135578cabcbdd7331332daad4d49b7c43688bc2d36b3a47d2/huggingface_hub-0.16.4-py3-none-any.whl.metadata
  Using cached huggingface_hub-0.16.4-py3-none-any.whl.metadata (12 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for MarkupSafe>=2.0 from 
https://files.pythonhosted.org/packages/de/e2/32c14301bb023986dff527a49325b6259cab4ebb4633f69de54af312fc45/MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached 
MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
 (3.0 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.3.0)
Requirement already satisfied: idna<4,>=2.5 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.26.17)
Requirement already satisfied: certifi>=2017.4.17 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2023.7.22)
Collecting mpmath>=0.19 (from sympy->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Downloading torch-2.1.0-cp38-cp38-manylinux1_x86_64.whl (670.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 670.2/670.2 MB 1.1 MB/s eta 0:00:00
Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 
MB)
Downloading 
triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 
MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.2/89.2 MB 8.0 MB/s eta 0:00:00
Downloading torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl (6.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.9/6.9 MB 87.4 MB/s eta 0:00:00
Using cached Pillow-10.0.1-cp38-cp38-manylinux_2_28_x86_64.whl (3.6 MB)
Using cached transformers-4.34.0-py3-none-any.whl (7.7 MB)
Using cached 
safetensors-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 
MB)
Downloading 
tokenizers-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 
MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 86.6 MB/s eta 0:00:00
Using cached huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached fsspec-2023.9.2-py3-none-any.whl (173 kB)
Using cached 
MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 
kB)
Using cached nvidia_nvjitlink_cu12-12.2.140-py3-none-manylinux1_x86_64.whl 
(20.2 MB)
Installing collected packages: safetensors, mpmath, triton, tqdm, sympy, 
pillow, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, 
nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, 
nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, networkx, 
MarkupSafe, fsspec, nvidia-cusparse-cu12, nvidia-cudnn-cu12, jinja2, 
huggingface-hub, tokenizers, nvidia-cusolver-cu12, transformers, torch, 
torchvision
Successfully installed MarkupSafe-2.1.3 fsspec-2023.9.2 huggingface-hub-0.16.4 
jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu12-12.1.3.1 
nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 
nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 
nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 
nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 
nvidia-nccl-cu12-2.18.1 nvidia-nvjitlink-cu12-12.2.140 
nvidia-nvtx-cu12-12.1.105 pillow-10.0.1 safetensors-0.3.3 sympy-1.12 
tokenizers-0.14.0 torch-2.1.0 torchvision-0.16.0 tqdm-4.66.1 
transformers-4.34.0 triton-2.1.0
INFO:root:Device is set to CPU
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:apache_beam.runners.portability.stager:Executing command: 
['<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/bin/python>',
 '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 
'/tmp/tmpgfx3lwu1/tmp_requirements.txt', '--exists-action', 'i', '--no-deps', 
'--implementation', 'cp', '--abi', 'cp38', '--platform', 'manylinux2014_x86_64']
INFO:apache_beam.runners.portability.stager:Copying Beam SDK 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/build/apache-beam.tar.gz>"
 to staging location.
INFO:apache_beam.runners.dataflow.dataflow_runner:Pipeline has additional 
dependencies to be installed in SDK worker container, consider using the SDK 
container image pre-building workflow to avoid repetitive installations. Learn 
more on 
https://cloud.google.com/dataflow/docs/guides/using-custom-containers#prebuild
INFO:root:Using provided Python SDK container image: 
gcr.io/cloud-dataflow/v1beta3/beam_python3.8_sdk:beam-master-20230927
INFO:root:Python SDK container image set to 
"gcr.io/cloud-dataflow/v1beta3/beam_python3.8_sdk:beam-master-20230927" for 
Docker environment
INFO:apache_beam.runners.portability.fn_api_runner.translations:====================
 <function pack_combiners at 0x7fadeadef9d0> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:====================
 <function sort_stages at 0x7fadeadee1f0> ====================
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/requirements.txt...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/requirements.txt
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/mock-2.0.0-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/mock-2.0.0-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/seaborn-0.13.0-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/seaborn-0.13.0-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/PyHamcrest-1.10.1-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/PyHamcrest-1.10.1-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/transformers-4.34.0-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/transformers-4.34.0-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/parameterized-0.7.5-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/parameterized-0.7.5-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/torch-2.1.0-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/torch-2.1.0-cp38-cp38-manylinux1_x86_64.whl
 in 36 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/Pillow-10.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/Pillow-10.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/matplotlib-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/matplotlib-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/matplotlib-3.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/matplotlib-3.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 2 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/dataflow_python_sdk.tar...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/dataflow_python_sdk.tar
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/pipeline.pb...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python1005165401.1696528842.055910/pipeline.pb
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 clientRequestId: '20231005180042057154-7304'
 createTime: '2023-10-05T18:01:26.346617Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2023-10-05_11_01_25-10119919126153232003'
 location: 'us-central1'
 name: 'benchmark-tests-pytorch-imagenet-python1005165401'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2023-10-05T18:01:26.346617Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_BATCH, 1)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: 
[2023-10-05_11_01_25-10119919126153232003]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 
2023-10-05_11_01_25-10119919126153232003
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow 
monitoring console, please navigate to 
https://console.cloud.google.com/dataflow/jobs/us-central1/2023-10-05_11_01_25-10119919126153232003?project=apache-beam-testing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-10-05_11_01_25-10119919126153232003 is in state JOB_STATE_PENDING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:01:26.789Z: 
JOB_MESSAGE_BASIC: The pipeline is using shuffle service with a (boot) 
persistent disk size / type other than the default. If that configuration was 
intended solely to speed up the non-service shuffle, consider removing it to 
reduce costs as those disks are unused by the shuffle service.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:01:29.721Z: 
JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-b.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:01:32.344Z: 
JOB_MESSAGE_BASIC: Executing operation 
ReadImageNames/Read/Impulse+ReadImageNames/Read/EmitSource+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:01:32.364Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3759>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:01:32.467Z: 
JOB_MESSAGE_BASIC: Starting 75 workers in us-central1-b...
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-10-05_11_01_25-10119919126153232003 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:01:53.324Z: 
JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric 
descriptors, so new user metrics of the form custom.googleapis.com/* will not 
be created. However, all user metrics are also available in the metric 
dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, 
you can delete old / unused metric descriptors. See 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
 and 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:48.757Z: 
JOB_MESSAGE_BASIC: All workers have finished the startup processes and began to 
receive work requests.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.761Z: 
JOB_MESSAGE_BASIC: Finished operation 
ReadImageNames/Read/Impulse+ReadImageNames/Read/EmitSource+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.801Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3759>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.894Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Create
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.914Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/WriteBundles/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.939Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.961Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.982Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/WriteBundles/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:49.993Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:50.028Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:50.660Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Create
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:06:50.761Z: 
JOB_MESSAGE_BASIC: Executing operation 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda
 at 
iobase.py:1144>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:39.278Z: 
JOB_MESSAGE_BASIC: Finished operation 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda
 at 
iobase.py:1144>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:39.333Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Close
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:40.215Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Close
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:40.277Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Read+WriteOutputToGCS/Write/WriteImpl/WriteBundles
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:42.220Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Read+WriteOutputToGCS/Write/WriteImpl/WriteBundles
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:42.298Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:42.317Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:42.347Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:42.361Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:42.453Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:44.206Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:44.295Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input2
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:44.342Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input2
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:44.434Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:46.338Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:16:46.520Z: 
JOB_MESSAGE_BASIC: Stopping worker pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-05T18:19:12.932Z: 
JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-10-05_11_01_25-10119919126153232003 is in state JOB_STATE_DONE
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Load test results 
for test: 8751d3f945c2417082da42f11a4c724f and timestamp: 1696529962.372892:
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_num_inferences Value: 
50000
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_count_inference_request_batch_byte_size
 Value: 4894
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_max_inference_request_batch_byte_size
 Value: 3711
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_min_inference_request_batch_byte_size
 Value: 102
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_inference_request_batch_byte_size
 Value: 4473085
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_inference_request_batch_byte_size
 Value: 913
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_count_inference_request_batch_size
 Value: 4894
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_max_inference_request_batch_size
 Value: 42
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_min_inference_request_batch_size
 Value: 1
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_inference_request_batch_size
 Value: 50000
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_inference_request_batch_size
 Value: 10
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_count_inference_batch_latency_micro_secs
 Value: 4894
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_max_inference_batch_latency_micro_secs
 Value: 23602903
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_min_inference_batch_latency_micro_secs
 Value: 290347
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_inference_batch_latency_micro_secs
 Value: 22039683117
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_inference_batch_latency_micro_secs
 Value: 4503408
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_count_load_model_latency_milli_secs
 Value: 150
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_max_load_model_latency_milli_secs
 Value: 174906
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_min_load_model_latency_milli_secs
 Value: 71351
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_load_model_latency_milli_secs
 Value: 13654671
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_load_model_latency_milli_secs
 Value: 91031
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_count_model_byte_size 
Value: 150
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_max_model_byte_size 
Value: 435257344
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_min_model_byte_size 
Value: 419041280
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_model_byte_size 
Value: 65102397440
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: 
BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_model_byte_size 
Value: 434015982

> Task :sdks:python:apache_beam:testing:load_tests:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file 
'<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/build.gradle'>
 line: 62

* What went wrong:
Execution failed for task ':sdks:python:apache_beam:testing:load_tests:run'.
> Could not get unknown property 'execResult' for task 
> ':sdks:python:apache_beam:testing:load_tests:run' of type 
> org.gradle.api.tasks.Exec.

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Get more help at https://help.gradle.org.

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.3/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD FAILED in 22m 57s
9 actionable tasks: 8 executed, 1 from cache

Publishing build scan...
https://ge.apache.org/s/fbz2bmygxzeua

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

