See 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/401/display/redirect>

Changes:


------------------------------------------
[...truncated 619.14 KB...]
Collecting pillow>=8.0.0 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 20))
  Using cached Pillow-10.1.0-cp38-cp38-manylinux_2_28_x86_64.whl.metadata (9.5 
kB)
Collecting transformers>=4.18.0 (from -r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached transformers-4.34.0-py3-none-any.whl.metadata (121 kB)
Requirement already satisfied: filelock in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt 
(line 18)) (3.12.4)
Requirement already satisfied: typing-extensions in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt 
(line 18)) (4.8.0)
Collecting sympy (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting networkx (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting fsspec (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached fsspec-2023.9.2-py3-none-any.whl.metadata (6.7 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl 
(23.7 MB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl 
(823 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl 
(14.1 MB)
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl 
(410.6 MB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl 
(121.6 MB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl 
(56.5 MB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl 
(124.2 MB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl 
(196.0 MB)
Collecting nvidia-nccl-cu12==2.18.1 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nccl_cu12-2.18.1-py3-none-manylinux1_x86_64.whl (209.8 MB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Collecting triton==2.1.0 (from torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
 (1.3 kB)
Collecting nvidia-nvjitlink-cu12 (from 
nvidia-cusolver-cu12==11.4.5.107->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
nvidia_nvjitlink_cu12-12.2.140-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: numpy in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.24.4)
Requirement already satisfied: requests in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2.31.0)
Collecting huggingface-hub<1.0,>=0.16.4 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached huggingface_hub-0.18.0-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: packaging>=20.0 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (23.2)
Requirement already satisfied: pyyaml>=5.1 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (2023.10.3)
Collecting tokenizers<0.15,>=0.14 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached 
tokenizers-0.14.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
 (6.7 kB)
Collecting safetensors>=0.3.1 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached 
safetensors-0.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
 (3.8 kB)
Collecting tqdm>=4.27 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting huggingface-hub<1.0,>=0.16.4 (from transformers>=4.18.0->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached huggingface_hub-0.17.3-py3-none-any.whl.metadata (13 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached 
MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
 (3.0 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.3.0)
Requirement already satisfied: idna<4,>=2.5 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.26.18)
Requirement already satisfied: certifi>=2017.4.17 in 
<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages>
 (from requests->torchvision>=0.8.2->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2023.7.22)
Collecting mpmath>=0.19 (from sympy->torch>=1.7.1->-r 
apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached torch-2.1.0-cp38-cp38-manylinux1_x86_64.whl (670.2 MB)
Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 
MB)
Using cached 
triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 
MB)
Using cached torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl (6.9 MB)
Using cached Pillow-10.1.0-cp38-cp38-manylinux_2_28_x86_64.whl (3.6 MB)
Using cached transformers-4.34.0-py3-none-any.whl (7.7 MB)
Using cached fsspec-2023.9.2-py3-none-any.whl (173 kB)
Using cached 
safetensors-0.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 
MB)
Using cached 
tokenizers-0.14.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 
MB)
Using cached huggingface_hub-0.17.3-py3-none-any.whl (295 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached 
MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 
kB)
Using cached nvidia_nvjitlink_cu12-12.2.140-py3-none-manylinux1_x86_64.whl 
(20.2 MB)
Installing collected packages: mpmath, triton, tqdm, sympy, safetensors, 
pillow, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, 
nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, 
nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, networkx, 
MarkupSafe, fsspec, nvidia-cusparse-cu12, nvidia-cudnn-cu12, jinja2, 
huggingface-hub, tokenizers, nvidia-cusolver-cu12, transformers, torch, 
torchvision
Successfully installed MarkupSafe-2.1.3 fsspec-2023.9.2 huggingface-hub-0.17.3 
jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu12-12.1.3.1 
nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 
nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 
nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 
nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 
nvidia-nccl-cu12-2.18.1 nvidia-nvjitlink-cu12-12.2.140 
nvidia-nvtx-cu12-12.1.105 pillow-10.1.0 safetensors-0.4.0 sympy-1.12 
tokenizers-0.14.1 torch-2.1.0 torchvision-0.16.0 tqdm-4.66.1 
transformers-4.34.0 triton-2.1.0
WARNING: There was an error checking the latest version of pip.
INFO:root:Device is set to CUDA
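The "Device is set to CUDA" line above is logged by Beam's PyTorch model
handler when it is constructed with device='GPU' and CUDA is available. As a
rough sketch only, with a hypothetical model class, weights path, and
model_params (the benchmark's real values are not shown in this log):

from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
from torchvision import models

# Hypothetical model and weights location, for illustration only.
model_handler = PytorchModelHandlerTensor(
    state_dict_path='gs://my-bucket/models/resnet50.pth',  # placeholder
    model_class=models.resnet50,
    model_params={},
    # 'GPU' asks the handler to run on CUDA when a device is available,
    # which is what produces the "Device is set to CUDA" log line.
    device='GPU')

# In the pipeline this handler is applied as: ... | RunInference(model_handler)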
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
WARNING:apache_beam.runners.portability.stager:When using a custom container 
image, prefer installing additional PyPI dependencies directly into the image, 
instead of specifying them via runtime options, such as --requirements_file. 
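The staging warning above points at baking Python dependencies into the custom
SDK container rather than passing a --requirements_file at submission time. A
minimal sketch of the pipeline-options side of that, with placeholder project,
bucket, and image names (not the values used by this job):

from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',                  # placeholder
    '--region=us-central1',
    '--temp_location=gs://my-bucket/tmp',    # placeholder
    # torch/torchvision/transformers are installed in this image at build
    # time, so no --requirements_file needs to be staged for the workers.
    '--sdk_container_image=us.gcr.io/my-project/beam-torch-gpu:latest',
])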
INFO:apache_beam.runners.portability.stager:Executing command: 
['https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/bin/python',
 '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 
'/tmp/tmp_mxj46le/tmp_requirements.txt', '--exists-action', 'i', '--no-deps', 
'--implementation', 'cp', '--abi', 'cp38', '--platform', 'manylinux2014_x86_64']
INFO:apache_beam.runners.portability.stager:Copying Beam SDK 
"<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/build/apache-beam.tar.gz";>
 to staging location.
INFO:apache_beam.runners.dataflow.dataflow_runner:Pipeline has additional 
dependencies to be installed in SDK worker container, consider using the SDK 
container image pre-building workflow to avoid repetitive installations. Learn 
more on 
https://cloud.google.com/dataflow/docs/guides/using-custom-containers#prebuild
INFO:root:Using provided Python SDK container image: 
us.gcr.io/apache-beam-testing/python-postcommit-it/tensor_rt:latest
INFO:root:Python SDK container image set to 
"us.gcr.io/apache-beam-testing/python-postcommit-it/tensor_rt:latest" for 
Docker environment
INFO:apache_beam.runners.portability.fn_api_runner.translations:====================
 <function pack_combiners at 0x7fc269af00d0> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:====================
 <function sort_stages at 0x7fc269af08b0> ====================
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/requirements.txt...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/requirements.txt
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/mock-2.0.0-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/mock-2.0.0-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/seaborn-0.13.0-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/seaborn-0.13.0-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/PyHamcrest-1.10.1-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/PyHamcrest-1.10.1-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/transformers-4.34.0-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/transformers-4.34.0-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/beautifulsoup4-4.12.2-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/beautifulsoup4-4.12.2-py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/parameterized-0.7.5-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/parameterized-0.7.5-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/torch-2.1.0-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/torch-2.1.0-cp38-cp38-manylinux1_x86_64.whl
 in 36 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/torchvision-0.16.0-cp38-cp38-manylinux1_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/google_cloud_aiplatform-1.34.0-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/google_cloud_aiplatform-1.34.0-py2.py3-none-any.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/Pillow-10.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/Pillow-10.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/Pillow-10.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/Pillow-10.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/matplotlib-3.7.3-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/matplotlib-3.7.3-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/matplotlib-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/matplotlib-3.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/tensorflow-2.13.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/tensorflow-2.13.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 24 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/matplotlib-3.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/matplotlib-3.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/dataflow_python_sdk.tar...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/dataflow_python_sdk.tar
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/pipeline.pb...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to 
gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python-gpu1018155107.1697659424.912691/pipeline.pb
 in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 clientRequestId: '20231018200344913966-3227'
 createTime: '2023-10-18T20:04:56.912398Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2023-10-18_13_04_55-15047146705502467952'
 location: 'us-central1'
 name: 'benchmark-tests-pytorch-imagenet-python-gpu1018155107'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2023-10-18T20:04:56.912398Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_BATCH, 1)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: 
[2023-10-18_13_04_55-15047146705502467952]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 
2023-10-18_13_04_55-15047146705502467952
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow 
monitoring console, please navigate to 
https://console.cloud.google.com/dataflow/jobs/us-central1/2023-10-18_13_04_55-15047146705502467952?project=apache-beam-testing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-10-18_13_04_55-15047146705502467952 is in state JOB_STATE_PENDING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:04:57.433Z: 
JOB_MESSAGE_BASIC: The pipeline is using shuffle service with a (boot) 
persistent disk size / type other than the default. If that configuration was 
intended solely to speed up the non-service shuffle, consider removing it to 
reduce costs as those disks are unused by the shuffle service.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:05:01.018Z: 
JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-f.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:05:03.961Z: 
JOB_MESSAGE_BASIC: Executing operation 
ReadImageNames/Read/Impulse+ReadImageNames/Read/EmitSource+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:05:03.987Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3759>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:05:04.022Z: 
JOB_MESSAGE_BASIC: Starting 75 workers in us-central1-f...
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-10-18_13_04_55-15047146705502467952 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:05:16.369Z: 
JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric 
descriptors, so new user metrics of the form custom.googleapis.com/* will not 
be created. However, all user metrics are also available in the metric 
dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, 
you can delete old / unused metric descriptors. See 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list
 and 
https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
WARNING:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:05:56.444Z: 
JOB_MESSAGE_WARNING: Autoscaling: Startup of the worker pool in zone 
us-central1-f reached 74 workers, but the goal was 75 workers. The service will 
retry. Please refer to 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-pool-failure 
for help troubleshooting. QUOTA_EXCEEDED: Instance 
'benchmark-tests-pytorch-i-10181304-urad-harness-czws' creation failed: Quota 
'NVIDIA_T4_GPUS' exceeded.  Limit: 48.0 in region us-central1.
WARNING:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:16:58.967Z: 
JOB_MESSAGE_WARNING: Autoscaling: Unable to reach resize target in zone 
us-central1-f. QUOTA_EXCEEDED: Instance 
'benchmark-tests-pytorch-i-10181304-urad-harness-p89b' creation failed: Quota 
'NVIDIA_T4_GPUS' exceeded.  Limit: 48.0 in region us-central1.
WARNING:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:18:18.970Z: 
JOB_MESSAGE_WARNING: Autoscaling: Unable to reach resize target in zone 
us-central1-f. QUOTA_EXCEEDED: Instance 
'benchmark-tests-pytorch-i-10181304-urad-harness-xjgc' creation failed: Quota 
'NVIDIA_T4_GPUS' exceeded.  Limit: 48.0 in region us-central1.
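The QUOTA_EXCEEDED warnings above come from asking for 75 workers while the
project's NVIDIA_T4_GPUS quota in us-central1 is 48. One possible mitigation,
sketched below under the assumption of one T4 per worker and with placeholder
project and bucket names, is to cap the worker pool under the quota:

from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',                # placeholder
    '--region=us-central1',
    '--machine_type=n1-standard-2',
    '--num_workers=48',                    # request no more workers than GPUs
    '--max_num_workers=48',                # keep autoscaling under the quota
    '--temp_location=gs://my-bucket/tmp',  # placeholder
])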
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:23.767Z: 
JOB_MESSAGE_BASIC: All workers have finished the startup processes and began to 
receive work requests.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:25.736Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda
 at 
core.py:3759>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:25.909Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/WriteBundles/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:25.930Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:25.946Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:25.957Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/WriteBundles/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:25.979Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:26.001Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:26.139Z: 
JOB_MESSAGE_BASIC: Finished operation 
ReadImageNames/Read/Impulse+ReadImageNames/Read/EmitSource+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:26.244Z: 
JOB_MESSAGE_BASIC: Executing operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Create
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:26.981Z: 
JOB_MESSAGE_BASIC: Finished operation 
WriteOutputToGCS/Write/WriteImpl/GroupByKey/Create
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:21:27.085Z: 
JOB_MESSAGE_BASIC: Executing operation 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda
 at 
iobase.py:1144>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:26:04.791Z: 
JOB_MESSAGE_BASIC: Finished operation 
ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda
 at 
iobase.py:1144>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
ERROR:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:26:04.878Z: 
JOB_MESSAGE_ERROR: Workflow failed. Causes: 
S06:ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda
 at 
iobase.py:1144>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
 failed., The job failed because a work item has failed 4 times. Look in 
previous log entries for the cause of each one of the 4 failures. If the logs 
only contain generic timeout errors related to accessing external resources, 
such as MongoDB, verify that the worker service account has permission to access 
the resource's subnetwork. For more information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was 
attempted on these workers: 

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-t62k,

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-429d,

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-vg42,

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-48m8
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:26:06.135Z: 
JOB_MESSAGE_BASIC: Stopping worker pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-10-18T20:28:27.333Z: 
JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 
2023-10-18_13_04_55-15047146705502467952 is in state JOB_STATE_FAILED
ERROR:apache_beam.runners.dataflow.dataflow_runner:Console URL: 
https://console.cloud.google.com/dataflow/jobs/<RegionId>/2023-10-18_13_04_55-15047146705502467952?project=<ProjectId>
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py", line 68, in <module>
    PytorchVisionBenchmarkTest().run()
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/load_test.py", line 148, in run
    self.test()
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/benchmarks/inference/pytorch_image_classification_benchmarks.py", line 58, in test
    self.result = pytorch_image_classification.run(
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/examples/inference/pytorch_image_classification.py", line 166, in run
    result.wait_until_finish()
  File "https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/runners/dataflow/dataflow_runner.py", line 771, in wait_until_finish
    raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow 
pipeline failed. State: FAILED, Error:
Workflow failed. Causes: 
S06:ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda
 at 
iobase.py:1144>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
 failed., The job failed because a work item has failed 4 times. Look in 
previous log entries for the cause of each one of the 4 failures. If the logs 
only contain generic timeout errors related to accessing external resources, 
such as MongoDB, verify that the worker service account has permission to access 
the resource's subnetwork. For more information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was 
attempted on these workers: 

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-t62k,

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-429d,

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-vg42,

      Root cause: Timed out waiting for an update from the worker. For more 
information, see 
https://cloud.google.com/dataflow/docs/guides/common-errors#worker-lost-contact.
      Worker ID: benchmark-tests-pytorch-i-10181304-urad-harness-48m8
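The traceback above shows the failure surfacing from result.wait_until_finish()
in pytorch_image_classification.py. A minimal sketch, not the benchmark's
actual code, of how that call reports a failed Dataflow job (the
run_and_report wrapper is hypothetical):

from apache_beam.runners.dataflow.dataflow_runner import DataflowRuntimeException

def run_and_report(pipeline):
    # run() submits the job and returns a PipelineResult for it.
    result = pipeline.run()
    try:
        # Blocks until the job finishes; raises DataflowRuntimeException when
        # the job ends in JOB_STATE_FAILED, carrying the "Workflow failed.
        # Causes: ..." message shown above.
        result.wait_until_finish()
    except DataflowRuntimeException as exc:
        print(f'Dataflow job failed: {exc}')
        raise
    return result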

> Task :sdks:python:apache_beam:testing:load_tests:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file 
'https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/apache_beam/testing/load_tests/build.gradle'
 line: 63

* What went wrong:
Execution failed for task ':sdks:python:apache_beam:testing:load_tests:run'.
> error occurred

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Get more help at https://help.gradle.org.

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.3/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD FAILED in 34m 4s
9 actionable tasks: 4 executed, 5 up-to-date

Publishing build scan...
https://ge.apache.org/s/cnyebgmxluc4y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

