Issue 142450
Summary Building LLVM offloading needs (and can't find) ROCm
Labels new issue
Assignees
Reporter FlorianB-DE
    Hello everyone,
I'm trying to get LLVM OpenMP offloading to work for NVIDIA GPUs in a container.

This is the Dockerfile:
```Dockerfile
FROM nvidia/cuda:12.6.3-devel-ubuntu24.04

WORKDIR /root

# for ROCm (didn't work either)
# RUN apt update && apt install -y wget
# RUN wget https://repo.radeon.com/amdgpu-install/6.4.1/ubuntu/noble/amdgpu-install_6.4.60401-1_all.deb
# RUN apt install -y ./amdgpu-install_6.4.60401-1_all.deb
# RUN apt update && apt install -y python3-setuptools python3-wheel && \
#     rm -rf /var/lib/apt/lists/*
# RUN usermod -a -G render,video root  # usermod needs a user argument; root is the build user here
# RUN apt update && apt install -y rocm libdrm-dev && \
#     rm -rf /var/lib/apt/lists/*

# Install llvm build dependencies.
RUN apt update && \
    apt install -y --no-install-recommends ca-certificates cmake 2to3 python-is-python3 \
        subversion ninja-build python3-yaml git && \
    rm -rf /var/lib/apt/lists/*

ADD https://github.com/llvm/llvm-project.git /llvm-project

RUN mkdir /llvm-project/build

WORKDIR /llvm-project/build

RUN cmake ../llvm -G Ninja \
    -C ../offload/cmake/caches/Offload.cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/usr/local/llvm \
    -DLLVM_TARGETS_TO_BUILD="X86;NVPTX"

RUN ninja install
```
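For anyone reproducing this: once the `ninja install` stage succeeds, a build-time smoke test can at least confirm that the offload toolchain compiles and links for the NVPTX triple. The fragment below is only a sketch to append to the Dockerfile — the `clang` path follows the `CMAKE_INSTALL_PREFIX` used above, and the probe program is a throwaway one-liner:

```Dockerfile
# Smoke-test the freshly installed toolchain: compile a tiny OpenMP
# target probe for the nvptx64 triple. This only proves the offload
# toolchain (clang + libomptarget) builds and links the program;
# actually running it on a GPU still needs `docker run --gpus all`.
RUN printf 'int main(){int x=0;\n#pragma omp target map(from:x)\nx=1;\nreturn !x;}\n' > /tmp/probe.c && \
    /usr/local/llvm/bin/clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda \
        /tmp/probe.c -o /tmp/probe
```

When running the probe inside the container, setting `LIBOMPTARGET_INFO=-1` makes libomptarget report which plugins and devices it finds, which is useful for telling a host fallback apart from a real device run.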
For those following [gpu-offloading-docker-image](https://discourse.llvm.org/t/gpu-offloading-docker-image/86656), this seems to be unique to Docker.
The Docker image provided in llvm/utils/docker doesn't work either (it's probably based on an old version), as also discussed in the thread.
_______________________________________________
llvm-bugs mailing list
llvm-bugs@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-bugs