Javier,

Yes, I also have the libnvidia-ml.so library in /usr/lib64/nvidia. The patch seems to point to a custom path; it would probably be worth hearing from Ake.
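For anyone trying to confirm where their driver put NVML, a minimal sketch along these lines may help. The candidate directories are the ones mentioned in this thread plus a common Debian/Ubuntu location; they are assumptions, so extend the list for your distribution:

```shell
#!/bin/sh
# Sketch: search common NVIDIA driver locations for libnvidia-ml.so.
# The candidate directories below are assumptions; adjust for your distro.
found=""
for dir in /usr/lib64/nvidia /usr/lib/nvidia-375 /usr/lib/nvidia-367 \
           /usr/lib/x86_64-linux-gnu; do
    if [ -e "$dir/libnvidia-ml.so" ]; then
        found="$dir"
        break
    fi
done
if [ -n "$found" ]; then
    echo "libnvidia-ml.so found in $found"
else
    echo "libnvidia-ml.so not found in the candidate directories"
fi
```

Running this on both the master and the GPU node should make it obvious which machine actually has the driver libraries.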
Usually, code that uses CUDA can be compiled on a node that does not have GPUs, as long as CUDA is available for linking. In this case, though, since it needs the NVML libraries, which ship with the driver, I suspect you will have to compile on the GPU node, since the drivers should not be installed on nodes without a GPU.

--
Davide Vanzo, PhD
Application Developer
Adjunct Assistant Professor of Chemical and Biomolecular Engineering
Advanced Computing Center for Research and Education (ACCRE)
Vanderbilt University - Hill Center 201
(615)-875-9137
www.accre.vanderbilt.edu

On Jul 12 2017, at 3:23 pm, Javier Antonio Ruiz Bosch <jrbo...@uclv.cu> wrote:

Hi, easybuilders.

My HPC has a GPU node with 2 Nvidia K80 GPU cards. I recently needed to install GROMACS to run on that GPU node. I searched a few days ago and did not find an easyconfig of GROMACS for GPU, so I prepared one using the foss toolchain with CUDA added as a dependency. Four days ago, akesandgren (GitHub) added an easyconfig of GROMACS for GPU using the goolfc toolchain. Can I install it using that easyconfig instead, perhaps after changing the .patch file, since my NVIDIA libraries are in /usr/lib64/nvidia, not in /usr/lib/nvidia-367 or /usr/lib/nvidia-375?

Also, when starting the installation with EasyBuild, should I run it from the master server or from the GPU node, so that the installation process can detect the GPU cards and use their drivers and libraries, which exist only on that node? Or is there another way to do it?

Another question: how can I see the output of CMake at the configuration step even when everything goes fine? It is only written to the log when the installation fails; when it does not fail, this output does not appear in the log.

Regards,
Javier.
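On the path question: rather than (or in addition to) editing the .patch file, one common workaround is to point the linker at whichever driver directory actually exists on your system. A minimal sketch, assuming the /usr/lib64/nvidia path from this thread (adjust NVML_DIR for your machine):

```shell
#!/bin/sh
# Sketch: tell the linker where the driver's NVML library lives,
# instead of relying on the paths hard-coded in the patch.
# /usr/lib64/nvidia is the location reported in this thread; adjust as needed.
NVML_DIR=/usr/lib64/nvidia
export LDFLAGS="-L${NVML_DIR} ${LDFLAGS:-}"
echo "LDFLAGS=$LDFLAGS"
```

Whether the GROMACS easyconfig honors LDFLAGS at the CMake step depends on the easyblock, so treat this as a starting point rather than a guaranteed fix.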
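On the CMake-output question: EasyBuild can stream its full build log, including the configure step, to the terminal even when the build succeeds. A hedged sketch; --debug and --logtostdout are documented eb options (check `eb --help` on your version), and the easyconfig filename below is a placeholder, not the actual file from the thread:

```shell
#!/bin/sh
# Hypothetical sketch: show the full EasyBuild log (including CMake's
# configure output) on stdout even when the installation succeeds.
# The easyconfig name is a placeholder.
cmd="eb GROMACS-goolfc.eb --debug --logtostdout"
if command -v eb >/dev/null 2>&1; then
    $cmd
else
    echo "eb not found in PATH; would run: $cmd"
fi
```

The regular log file also remains on disk after a successful install (under the software's installation directory), so grepping that is an alternative to streaming.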