Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-18 Thread Szilárd Páll
Your suspicion might be valid, but I'd prefer if you could verify this
through more standard means too; if you confirm that requesting dynamic
cudart linking is not honored, then there might be an issue in the GROMACS
build system.

BTW, on my binary I built with -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF I get
this:

$ objdump -x lib/libgromacs.so | grep cudaGetDevice
   F *UND*   cudaGetDevice@@libcudart.so.9.2
   F *UND*   cudaGetDeviceCount@@libcudart.so.9.2
   F *UND*   cudaGetDeviceProperties@@libcudart.so.9.2

Which seems to be right AFAICT.
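
As a complementary check on your own build (just a sketch -- the paths below
are the install prefix quoted earlier in this thread, adjust as needed), you
can confirm that libgromacs both lists libcudart as a runtime dependency and
imports, rather than defines, the CUDA runtime entry points:

$ ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/lib64/libgromacs.so.3 | grep cudart
$ nm -D /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/lib64/libgromacs.so.3 | grep cudaGetDevice

With dynamic cudart linking, nm marks the cudaGetDevice* symbols as undefined
("U"); with static linking they tend to show up as defined ("T") and libcudart
does not appear in the ldd output.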

--
Szilárd


On Thu, Dec 13, 2018 at 7:08 PM Jaime Sierra  wrote:

> I suspect CUDA is not linked dynamically. I'm almost 100% sure.
>
> function cuGetExportTable not supported. Please, report this error to <
> supp...@rcuda.net> so that it is supported in future versions of rCUDA.
>
> This function is called when the CUDA runtime is linked statically.
>
> The ldd command is telling me that:
> libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
>
> and my environment variables are unset.
>
> Regards,
> Jaime.
>
>
> On Thu, Dec 13, 2018 at 6:27 PM Szilárd Páll () wrote:
>
> > On Thu, Dec 13, 2018 at 6:07 PM Jaime Sierra  wrote:
> > >
> > > My cmake config:
> > >
> > > ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> > > -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> > > -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> > > -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> > > -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> > > -DCUDA_NVCC_FLAGS=--cudart=shared
> >
> > Why pass that flag when the above cache variable should do the same?
> >
> > > -DGMX_PREFER_STATIC_LIBS=OFF
> > >
> > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
> > > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
> > > linux-vdso.so.1 =>  (0x7ffc6f6f4000)
> > > libgromacs.so.3 =>
> > >
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
> > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
> > > libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
> > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
> > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
> > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
> > > libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
> > > libcudart.so.8.0 =>
> > > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > (0x7fb587c37000)
> > > libcufft.so.8.0 =>
> > > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> > (0x7fb57ede9000)
> > > libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
> > > librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
> > > libmkl_intel_lp64.so =>
> > >
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> > > (0x7fb57e2b9000)
> > > libmkl_intel_thread.so =>
> > >
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> > > (0x7fb57d21e000)
> > > libmkl_core.so =>
> > >
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> > > (0x7fb57bcf)
> > > libiomp5.so =>
> > > /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> > > (0x7fb57b9d7000)
> > > libmkl_gf_lp64.so =>
> > >
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> > > (0x7fb57b2b4000)
> > > /lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000
> > >
> > >
> > >
> > > IDK what I'm doing wrong.
> >
> > You asked for dynamic linking against the CUDA runtime and you got
> > that. Please be more specific about what the problem is.
> >
> > --
> > Szilárd
> >
> > >
> > > Regards,
> > > Jaime.
> > >
> > > On Tue, Dec 11, 2018 at 10:14 PM Szilárd Páll (pall.szil...@gmail.com) wrote:
> > >
> > > > AFAIK the right way to control RPATH using cmake is:
> > > > https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> > > > no need to poke the binary.
> > > >
> > > > If you still need to turn off static cudart linking the way to do
> that
> > > > is also via a CMake feature:
> > > > https://cmake.org/cmake/help/latest/module/FindCUDA.html
> > > > The default is static.
> > > >
> > > > --
> > > > Szilárd
> > > > On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra 
> > wrote:
> > > > >
> > > > > I'm trying to rewrite the RPATH because shared libraries paths used
> > by
> > > > > GROMACS are hardcoded in the binary.
> > > > >
> > > > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > > > > 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Jaime Sierra
I suspect CUDA is not linked dynamically. I'm almost 100% sure.

function cuGetExportTable not supported. Please, report this error to <
supp...@rcuda.net> so that it is supported in future versions of rCUDA.

This function is called when the CUDA runtime is linked statically.

The ldd command is telling me that:
libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0

and my environment variables are unset.
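
A way to double-check the build configuration itself (just a sketch, assuming
the usual CMake build directory and the FindCUDA cache variables) is to grep
the cache for the runtime-related entries:

$ grep -E 'CUDA_USE_STATIC_CUDA_RUNTIME|CUDA_NVCC_FLAGS|CUDA_CUDART' CMakeCache.txt

If CUDA_USE_STATIC_CUDA_RUNTIME is OFF there but the installed binary still
behaves as if the runtime were static, that would point at the build system
not honoring the setting.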

Regards,
Jaime.


On Thu, Dec 13, 2018 at 6:27 PM Szilárd Páll () wrote:

> On Thu, Dec 13, 2018 at 6:07 PM Jaime Sierra  wrote:
> >
> > My cmake config:
> >
> > ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> > -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> > -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> > -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> > -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> > -DCUDA_NVCC_FLAGS=--cudart=shared
>
> Why pass that flag when the above cache variable should do the same?
>
> > -DGMX_PREFER_STATIC_LIBS=OFF
> >
> > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
> > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
> > linux-vdso.so.1 =>  (0x7ffc6f6f4000)
> > libgromacs.so.3 =>
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
> > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
> > libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
> > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
> > libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
> > libcudart.so.8.0 =>
> > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> (0x7fb587c37000)
> > libcufft.so.8.0 =>
> > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> (0x7fb57ede9000)
> > libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
> > librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
> > libmkl_intel_lp64.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> > (0x7fb57e2b9000)
> > libmkl_intel_thread.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> > (0x7fb57d21e000)
> > libmkl_core.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> > (0x7fb57bcf)
> > libiomp5.so =>
> > /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> > (0x7fb57b9d7000)
> > libmkl_gf_lp64.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> > (0x7fb57b2b4000)
> > /lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000
> >
> >
> >
> > IDK what I'm doing wrong.
>
> You asked for dynamic linking against the CUDA runtime and you got
> that. Please be more specific about what the problem is.
>
> --
> Szilárd
>
> >
> > Regards,
> > Jaime.
> >
> > On Tue, Dec 11, 2018 at 10:14 PM Szilárd Páll () wrote:
> >
> > > AFAIK the right way to control RPATH using cmake is:
> > > https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> > > no need to poke the binary.
> > >
> > > If you still need to turn off static cudart linking the way to do that
> > > is also via a CMake feature:
> > > https://cmake.org/cmake/help/latest/module/FindCUDA.html
> > > The default is static.
> > >
> > > --
> > > Szilárd
> > > On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra 
> wrote:
> > > >
> > > > I'm trying to rewrite the RPATH because shared libraries paths used
> by
> > > > GROMACS are hardcoded in the binary.
> > > >
> > > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > > > linux-vdso.so.1 =>  (0x7ffddf1d3000)
> > > > libgromacs.so.2 =>
> > > >
> > >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> > > > (0x7f0094b25000)
> > > > libcudart.so.8.0 => not found
> > > > libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> > > > libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> > > > libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> > > > librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> > > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> > > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> > > > libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> > > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> > > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> > > > libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> > > > libcudart.so.8.0 =>
> > > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > > > (0x7f0092c5)
> > > > 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Szilárd Páll
On Thu, Dec 13, 2018 at 6:07 PM Jaime Sierra  wrote:
>
> My cmake config:
>
> ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> -DCUDA_NVCC_FLAGS=--cudart=shared

Why pass that flag when the above cache variable should do the same?

> -DGMX_PREFER_STATIC_LIBS=OFF
>
> ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
> linux-vdso.so.1 =>  (0x7ffc6f6f4000)
> libgromacs.so.3 =>
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
> libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
> libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
> libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
> libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
> libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
> libcudart.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0 (0x7fb587c37000)
> libcufft.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0 (0x7fb57ede9000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
> librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
> libmkl_intel_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> (0x7fb57e2b9000)
> libmkl_intel_thread.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> (0x7fb57d21e000)
> libmkl_core.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> (0x7fb57bcf)
> libiomp5.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> (0x7fb57b9d7000)
> libmkl_gf_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> (0x7fb57b2b4000)
> /lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000
>
>
>
> IDK what I'm doing wrong.

You asked for dynamic linking against the CUDA runtime and you got
that. Please be more specific about what the problem is.

--
Szilárd

>
> Regards,
> Jaime.
>
> On Tue, Dec 11, 2018 at 10:14 PM Szilárd Páll () wrote:
>
> > AFAIK the right way to control RPATH using cmake is:
> > https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> > no need to poke the binary.
> >
> > If you still need to turn off static cudart linking the way to do that
> > is also via a CMake feature:
> > https://cmake.org/cmake/help/latest/module/FindCUDA.html
> > The default is static.
> >
> > --
> > Szilárd
> > On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra  wrote:
> > >
> > > I'm trying to rewrite the RPATH because shared libraries paths used by
> > > GROMACS are hardcoded in the binary.
> > >
> > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > > linux-vdso.so.1 =>  (0x7ffddf1d3000)
> > > libgromacs.so.2 =>
> > >
> > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> > > (0x7f0094b25000)
> > > libcudart.so.8.0 => not found
> > > libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> > > libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> > > libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> > > librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> > > libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> > > libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> > > libcudart.so.8.0 =>
> > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > > (0x7f0092c5)
> > > /lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)
> > >
> > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > > linux-vdso.so.1 =>  (0x7fff27b8d000)
> > > libgromacs.so.3 =>
> > >
> > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
> > > (0x7fcb4aa3e000)
> > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
> > > libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
> > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
> > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
> > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
> > > libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
> > > libcudart.so.8.0 =>
> > 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Jaime Sierra
My cmake config:

~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
-DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
-DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
-DCUDA_NVCC_FLAGS=--cudart=shared -DGMX_PREFER_STATIC_LIBS=OFF

ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
linux-vdso.so.1 =>  (0x7ffc6f6f4000)
libgromacs.so.3 =>
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
libcudart.so.8.0 =>
/nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0 (0x7fb587c37000)
libcufft.so.8.0 =>
/nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0 (0x7fb57ede9000)
libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
libmkl_intel_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
(0x7fb57e2b9000)
libmkl_intel_thread.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
(0x7fb57d21e000)
libmkl_core.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
(0x7fb57bcf)
libiomp5.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
(0x7fb57b9d7000)
libmkl_gf_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
(0x7fb57b2b4000)
/lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000



IDK what I'm doing wrong.

Regards,
Jaime.

On Tue, Dec 11, 2018 at 10:14 PM Szilárd Páll () wrote:

> AFAIK the right way to control RPATH using cmake is:
> https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> no need to poke the binary.
>
> If you still need to turn off static cudart linking the way to do that
> is also via a CMake feature:
> https://cmake.org/cmake/help/latest/module/FindCUDA.html
> The default is static.
>
> --
> Szilárd
> On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra  wrote:
> >
> > I'm trying to rewrite the RPATH because shared libraries paths used by
> > GROMACS are hardcoded in the binary.
> >
> > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > linux-vdso.so.1 =>  (0x7ffddf1d3000)
> > libgromacs.so.2 =>
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> > (0x7f0094b25000)
> > libcudart.so.8.0 => not found
> > libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> > libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> > libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> > librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> > libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> > libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> > libcudart.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > (0x7f0092c5)
> > /lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)
> >
> > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > linux-vdso.so.1 =>  (0x7fff27b8d000)
> > libgromacs.so.3 =>
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
> > (0x7fcb4aa3e000)
> > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
> > libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
> > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
> > libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
> > libcudart.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > (0x7fcb4979c000)
> > libcufft.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> > (0x7fcb4094e000)
> > libdl.so.2 => /lib64/libdl.so.2 (0x7fcb40749000)
> > librt.so.1 => /lib64/librt.so.1 (0x7fcb40541000)
> > libfftw3f.so.3 =>
> > /nfs2/LIBS/x86_64/LIBS/FFTW/3.3.3/SINGLE/lib/libfftw3f.so.3
> > (0x7fcb401c8000)
> > libmkl_intel_lp64.so =>
> >
> 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-11 Thread Szilárd Páll
AFAIK the right way to control RPATH using cmake is:
https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
no need to poke the binary.

If you still need to turn off static cudart linking the way to do that
is also via a CMake feature:
https://cmake.org/cmake/help/latest/module/FindCUDA.html
The default is static.
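
For instance, a configure-time sketch (the install prefix below is just a
placeholder; CMAKE_INSTALL_RPATH, CMAKE_SKIP_RPATH and
CUDA_USE_STATIC_CUDA_RUNTIME are standard CMake/FindCUDA cache variables, and
whether the GROMACS build overrides any of them is worth checking) that avoids
patching the installed binary afterwards:

$ cmake .. -DGMX_GPU=ON \
    -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2018 \
    -DBUILD_SHARED_LIBS=ON \
    -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF \
    -DCMAKE_INSTALL_RPATH='$ORIGIN/../lib64'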

--
Szilárd
On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra  wrote:
>
> I'm trying to rewrite the RPATH because shared libraries paths used by
> GROMACS are hardcoded in the binary.
>
> ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> linux-vdso.so.1 =>  (0x7ffddf1d3000)
> libgromacs.so.2 =>
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> (0x7f0094b25000)
> libcudart.so.8.0 => not found
> libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> (0x7f0092c5)
> /lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)
>
> ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> linux-vdso.so.1 =>  (0x7fff27b8d000)
> libgromacs.so.3 =>
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
> (0x7fcb4aa3e000)
> libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
> libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
> libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
> libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
> libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
> libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> (0x7fcb4979c000)
> libcufft.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> (0x7fcb4094e000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7fcb40749000)
> librt.so.1 => /lib64/librt.so.1 (0x7fcb40541000)
> libfftw3f.so.3 =>
> /nfs2/LIBS/x86_64/LIBS/FFTW/3.3.3/SINGLE/lib/libfftw3f.so.3
> (0x7fcb401c8000)
> libmkl_intel_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> (0x7fcb3faa4000)
> libmkl_intel_thread.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> (0x7fcb3ea0a000)
> libmkl_core.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> (0x7fcb3d4dc000)
> libiomp5.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> (0x7fcb3d1c2000)
> libmkl_gf_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> (0x7fcb3caa)
> /lib64/ld-linux-x86-64.so.2 (0x7fcb4d785000)
>
> > > See the differences between the 2016 & 2018 versions.
>
> > > I'm using CMake 3.13.1.
>
> ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> -DCUDA_NVCC_FLAGS=--cudart=shared -DGMX_PREFER_STATIC_LIBS=OFF
> -DEXTRA_NVCCFLAGS=--cudart=shared
>
> I think I've tried almost everything.
>
> Regards.
>
> On Mon, Dec 10, 2018 at 4:09 PM Szilárd Páll () wrote:
>
> > On Sat, Dec 8, 2018 at 10:00 PM Gmail  wrote:
> > >
> > > My mistake! It was a typo. Anyway, this is the result before executing
> > > the chrpath command:
> > >
> > > chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > > $APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64
> > >
> > > I'm suspicious that GROMACS 2018 is not being compiled using shared
> > > libraries, at least, for CUDA.
> >
> > First of all, what is the goal, why are you trying to manually rewrite
> > the binary RPATH?
> >
> > Well, if the binaries are not linked against libcudart.so then it clearly
> > isn't (and the ldd output is a better way to confirm that -- gmx can be
> > linked against a library even without an RPATH being set).
> >
> > I have a vague memory that this may have been the default in CMake or
> > perhaps it changed at some point. What's your CMake version, perhaps
> > you're using an old CMake?
> >
> > >
> > > Jaime.
> > >
> > >
> > > On 8/12/18 21:39, Mark Abraham wrote:
> > > > Hi,
> > > >
> > > > Your final line doesn't match your CMAKE_INSTALL_PREFIX
> > > >
> > > > Mark
> > > 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-11 Thread Jaime Sierra
I'm trying to rewrite the RPATH because shared libraries paths used by
GROMACS are hardcoded in the binary.

ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
linux-vdso.so.1 =>  (0x7ffddf1d3000)
libgromacs.so.2 =>
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
(0x7f0094b25000)
libcudart.so.8.0 => not found
libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
(0x7f0092c5)
/lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)

ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
linux-vdso.so.1 =>  (0x7fff27b8d000)
libgromacs.so.3 =>
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
(0x7fcb4aa3e000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
(0x7fcb4979c000)
libcufft.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
(0x7fcb4094e000)
libdl.so.2 => /lib64/libdl.so.2 (0x7fcb40749000)
librt.so.1 => /lib64/librt.so.1 (0x7fcb40541000)
libfftw3f.so.3 =>
/nfs2/LIBS/x86_64/LIBS/FFTW/3.3.3/SINGLE/lib/libfftw3f.so.3
(0x7fcb401c8000)
libmkl_intel_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
(0x7fcb3faa4000)
libmkl_intel_thread.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
(0x7fcb3ea0a000)
libmkl_core.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
(0x7fcb3d4dc000)
libiomp5.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
(0x7fcb3d1c2000)
libmkl_gf_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
(0x7fcb3caa)
/lib64/ld-linux-x86-64.so.2 (0x7fcb4d785000)

See the differences between the 2016 & 2018 versions.

I'm using CMake 3.13.1.

~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
-DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
-DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
-DCUDA_NVCC_FLAGS=--cudart=shared -DGMX_PREFER_STATIC_LIBS=OFF
-DEXTRA_NVCCFLAGS=--cudart=shared

I think I've tried almost everything.

Regards.

On Mon, Dec 10, 2018 at 4:09 PM Szilárd Páll () wrote:

> On Sat, Dec 8, 2018 at 10:00 PM Gmail  wrote:
> >
> > My mistake! It was a typo. Anyway, this is the result before executing
> > the chrpath command:
> >
> > chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > $APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64
> >
> > I'm suspicious that GROMACS 2018 is not being compiled using shared
> > libraries, at least, for CUDA.
>
> First of all, what is the goal, why are you trying to manually rewrite
> the binary RPATH?
>
> Well, if the binaries are not linked against libcudart.so then it clearly
> isn't (and the ldd output is a better way to confirm that -- gmx can be
> linked against a library even without an RPATH being set).
>
> I have a vague memory that this may have been the default in CMake or
> perhaps it changed at some point. What's your CMake version, perhaps
> you're using an old CMake?
>
> >
> > Jaime.
> >
> >
> > On 8/12/18 21:39, Mark Abraham wrote:
> > > Hi,
> > >
> > > Your final line doesn't match your CMAKE_INSTALL_PREFIX
> > >
> > > Mark
> > >
> > > On Sun., 9 Dec. 2018, 07:00 Jaime Sierra  wrote:
> > >> Hi Páll,
> > >>
> > >> thanks for your answer,
> > >> I have my own "HOW_TO_INSTALL" guide like:
> > >>
> > >> $ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
> > >> $ tar xzf gromacs-5.1.4.tar.gz
> > >> $ cd gromacs-5.1.4
> > >> $ mkdir build
> > >> $ cd build
> > >> $ export EXTRA_NVCCFLAGS=--cudart=shared
> > >> $ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
> > >> $ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
> > >> -DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
> > >> 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-10 Thread Szilárd Páll
On Sat, Dec 8, 2018 at 10:00 PM Gmail  wrote:
>
> My mistake! It was a typo. Anyway, this is the result before executing
> the chrpath command:
>
> chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> $APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64
>
> I'm suspicious that GROMACS 2018 is not being compiled using shared
> libraries, at least, for CUDA.

First of all, what is the goal, why are you trying to manually rewrite
the binary RPATH?

Well, if the binaries are not linked against libcudart.so then it clearly
isn't (and the ldd output is a better way to confirm that -- gmx can be
linked against a library even without an RPATH being set).
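
For instance, the two properties can be checked separately (a minimal sketch,
reusing the $APPS prefix from your message):

$ readelf -d $APPS/GROMACS/2018/CUDA/8.0/bin/gmx | grep -E 'RPATH|RUNPATH'
$ ldd $APPS/GROMACS/2018/CUDA/8.0/bin/gmx | grep cudart

The first only shows the search path baked into the binary; the second shows
what it is actually linked against.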

I have a vague memory that this may have been the default in CMake or
perhaps it changed at some point. What's your CMake version, perhaps
you're using an old CMake?

>
> Jaime.
>
>
> On 8/12/18 21:39, Mark Abraham wrote:
> > Hi,
> >
> > Your final line doesn't match your CMAKE_INSTALL_PREFIX
> >
> > Mark
> >
> > On Sun., 9 Dec. 2018, 07:00 Jaime Sierra  wrote:
> >> Hi Páll,
> >>
> >> thanks for your answer,
> >> I have my own "HOW_TO_INSTALL" guide like:
> >>
> >> $ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
> >> $ tar xzf gromacs-5.1.4.tar.gz
> >> $ cd gromacs-5.1.4
> >> $ mkdir build
> >> $ cd build
> >> $ export EXTRA_NVCCFLAGS=--cudart=shared
> >> $ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
> >> $ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
> >> -DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
> >> -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=$LIBS/CUDA/8.0/
> >> $ make -j $(nproc)
> >> $ make install
> >> $ chrpath -r '$ORIGIN/../lib64' $APPS/GROMACS/5.1.4/GPU/bin/gmx
> >>
> >> That works up to GROMACS 2016, but I couldn't make it work for GROMACS 2018.
> >>
> >> Regards,
> >>
> >> Jaime.
> >>
> >> On Fri, Dec 7, 2018 at 3:49 PM Szilárd Páll () wrote:
> >>
> >>> Hi Jaime,
> >>>
> >>> Have you tried passing that variable to nvcc? Does it not work?
> >>>
> >>> Note that GROMACS makes up to a dozen CUDA runtime calls (kernels
> >>> and transfers) per iteration, with iteration times of milliseconds at
> >>> the longest and, at peak, hundreds of nanoseconds, and the CPU needs
> >>> to sync up with the GPU every iteration. Hence, I suspect GROMACS may
> >>> be a challenging use case for rCUDA, but I'm very interested in your
> >>> observations and benchmark results when you have some.
> >>>
> >>> Cheers,
> >>> On Fri, Dec 7, 2018 at 10:39 AM Jaime Sierra  wrote:
>  Hi,
> 
>  my name is Jaime Sierra, a researcher from Polytechnic University of
>  Valencia, Spain. I would like to know how to compile & install GROMACS
>  2018 with CUDA features with the "--cudart=shared" compilation option
> >> to
>  use it with our rCUDA software.
> 
> 
>  We haven't had this problem in previous releases of GROMACS like 2016,
>  5.1.4 and so on.
> 
> 
>  Regards,
> 
>  Jaime.

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-08 Thread Gmail
My mistake! It was a typo. Anyway, this is the result before executing 
the chrpath command:


chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
$APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64

I'm suspicious that GROMACS 2018 is not being compiled using shared 
libraries, at least, for CUDA.


Jaime.


On 8/12/18 21:39, Mark Abraham wrote:

Hi,

Your final line doesn't match your CMAKE_INSTALL_PREFIX

Mark

On Sun., 9 Dec. 2018, 07:00 Jaime Sierra  wrote:
Hi Páll,

thanks for your answer,
I have my own "HOW_TO_INSTALL" guide like:

$ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
$ tar xzf gromacs-5.1.4.tar.gz
$ cd gromacs-5.1.4
$ mkdir build
$ cd build
$ export EXTRA_NVCCFLAGS=--cudart=shared
$ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
$ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
-DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
-DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=$LIBS/CUDA/8.0/
$ make -j $(nproc)
$ make install
$ chrpath -r '$ORIGIN/../lib64' $APPS/GROMACS/5.1.4/GPU/bin/gmx

That works up to GROMACS 2016, but I couldn't make it work for GROMACS 2018.

Regards,

Jaime.

On Fri, Dec 7, 2018 at 3:49 PM Szilárd Páll () wrote:


Hi Jaime,

Have you tried passing that variable to nvcc? Does it not work?

Note that GROMACS makes up to a dozen CUDA runtime calls (kernels
and transfers) per iteration, with iteration times of milliseconds at
the longest and, at peak, hundreds of nanoseconds, and the CPU needs
to sync up with the GPU every iteration. Hence, I suspect GROMACS may
be a challenging use case for rCUDA, but I'm very interested in your
observations and benchmark results when you have some.

Cheers,
On Fri, Dec 7, 2018 at 10:39 AM Jaime Sierra  wrote:

Hi,

my name is Jaime Sierra, a researcher from Polytechnic University of
Valencia, Spain. I would like to know how to compile & install GROMACS
2018 with CUDA features with the "--cudart=shared" compilation option

to

use it with our rCUDA software.


We haven't had this problem in previous releases of GROMACS like 2016,
5.1.4 and so on.


Regards,

Jaime.

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-08 Thread Mark Abraham
Hi,

Your final line doesn't match your CMAKE_INSTALL_PREFIX

Mark

On Sun., 9 Dec. 2018, 07:00 Jaime Sierra  wrote:
> Hi Páll,
>
> thanks for your answer,
> I have my own "HOW_TO_INSTALL" guide like:
>
> $ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
> $ tar xzf gromacs-5.1.4.tar.gz
> $ cd gromacs-5.1.4
> $ mkdir build
> $ cd build
> $ export EXTRA_NVCCFLAGS=--cudart=shared
> $ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
> $ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
> -DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
> -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=$LIBS/CUDA/8.0/
> $ make -j $(nproc)
> $ make install
> $ chrpath -r '$ORIGIN/../lib64' $APPS/GROMACS/5.1.4/GPU/bin/gmx
>
> That works up to GROMACS 2016, but I couldn't make it work for GROMACS 2018.
>
> Regards,
>
> Jaime.
>
> On Fri, Dec 7, 2018 at 3:49 PM Szilárd Páll () wrote:
>
> > Hi Jaime,
> >
> > Have you tried passing that variable to nvcc? Does it not work?
> >
> > Note that GROMACS makes up to a dozen CUDA runtime calls (kernels
> > and transfers) per iteration, with iteration times of milliseconds at
> > the longest and, at peak, hundreds of nanoseconds, and the CPU needs
> > to sync up with the GPU every iteration. Hence, I suspect GROMACS may
> > be a challenging use case for rCUDA, but I'm very interested in your
> > observations and benchmark results when you have some.
> >
> > Cheers,
> > On Fri, Dec 7, 2018 at 10:39 AM Jaime Sierra  wrote:
> > >
> > > Hi,
> > >
> > > my name is Jaime Sierra, a researcher from Polytechnic University of
> > > Valencia, Spain. I would like to know how to compile & install GROMACS
> > > 2018 with CUDA features with the "--cudart=shared" compilation option
> to
> > > use it with our rCUDA software.
> > >
> > >
> > > We haven't had this problem in previous releases of GROMACS like 2016,
> > > 5.1.4 and so on.
> > >
> > >
> > > Regards,
> > >
> > > Jaime.

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-08 Thread Jaime Sierra
Hi Páll,

thanks for your answer,
I have my own "HOW_TO_INSTALL" guide like:

$ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
$ tar xzf gromacs-5.1.4.tar.gz
$ cd gromacs-5.1.4
$ mkdir build
$ cd build
$ export EXTRA_NVCCFLAGS=--cudart=shared
$ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
$ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
-DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
-DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=$LIBS/CUDA/8.0/
$ make -j $(nproc)
$ make install
$ chrpath -r '$ORIGIN/../lib64' $APPS/GROMACS/5.1.4/GPU/bin/gmx
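
A quick way to verify the result of such a build (a sketch, using the same
$APPS install prefix as in the cmake line above) is to check both the RPATH
and the cudart linkage of the installed binary:

$ chrpath -l $APPS/GROMACS/5.1.4/CUDA8.0/GPU/bin/gmx
$ ldd $APPS/GROMACS/5.1.4/CUDA8.0/GPU/bin/gmx | grep cudart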

That works up to GROMACS 2016, but I couldn't make it work for GROMACS 2018.

Regards,

Jaime.

On Fri, Dec 7, 2018 at 3:49 PM Szilárd Páll () wrote:

> Hi Jaime,
>
> Have you tried passing that variable to nvcc? Does it not work?
>
> Note that GROMACS makes up to a dozen CUDA runtime calls (kernels
> and transfers) per iteration, with iteration times of milliseconds at
> the longest and, at peak, hundreds of nanoseconds, and the CPU needs
> to sync up with the GPU every iteration. Hence, I suspect GROMACS may
> be a challenging use case for rCUDA, but I'm very interested in your
> observations and benchmark results when you have some.
>
> Cheers,
> On Fri, Dec 7, 2018 at 10:39 AM Jaime Sierra  wrote:
> >
> > Hi,
> >
> > my name is Jaime Sierra, a researcher from Polytechnic University of
> > Valencia, Spain. I would like to know how to compile & install GROMACS
> > 2018 with CUDA features with the "--cudart=shared" compilation option to
> > use it with our rCUDA software.
> >
> >
> > We haven't had this problem in previous releases of GROMACS like 2016,
> > 5.1.4 and so on.
> >
> >
> > Regards,
> >
> > Jaime.

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-07 Thread Szilárd Páll
Hi Jaime,

Have you tried passing that variable to nvcc? Does it not work?

Note that GROMACS makes up to a dozen CUDA runtime calls (kernels
and transfers) per iteration, with iteration times of milliseconds at
the longest and, at peak, hundreds of nanoseconds, and the CPU needs
to sync up with the GPU every iteration. Hence, I suspect GROMACS may
be a challenging use case for rCUDA, but I'm very interested in your
observations and benchmark results when you have some.

Cheers,
On Fri, Dec 7, 2018 at 10:39 AM Jaime Sierra  wrote:
>
> Hi,
>
> my name is Jaime Sierra, a researcher from Polytechnic University of
> Valencia, Spain. I would like to know how to compile & install GROMACS
> 2018 with CUDA features with the "--cudart=shared" compilation option to
> use it with our rCUDA software.
>
>
> We haven't had this problem in previous releases of GROMACS like 2016,
> 5.1.4 and so on.
>
>
> Regards,
>
> Jaime.


[gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-07 Thread Jaime Sierra
Hi,

my name is Jaime Sierra, a researcher from Polytechnic University of
Valencia, Spain. I would like to know how to compile & install GROMACS
2018 with CUDA features with the "--cudart=shared" compilation option to
use it with our rCUDA software.


We haven't had this problem in previous releases of GROMACS like 2016,
5.1.4 and so on.


Regards,

Jaime.