Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Borchert, Christopher B ERDC-RDE-ITL-MS Contractor
Unfortunately, with your shortened cmake args I still get fPIC errors. The build 
does, however, complete statically with -DGMX_BUILD_SHARED_EXE=OFF

CC=cc CXX=CC cmake ../ -DGMX_SIMD=AVX2_256 -DGMX_MPI=ON -DGMX_GPU=ON 
-DCMAKE_PREFIX_PATH=${FFTW_DIR}/..

/usr/bin/ld: 
CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda.cu.o:
 relocation R_X86_64_32 against 
`_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_atomdata10cu_nbparam8cu_plistb'
 can not be used when making a shared object; recompile with -fPIC
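
For the record, the variant that did complete is just the same line with the
static-executable flag added (a sketch -- I have not re-verified every flag in
isolation):

CC=cc CXX=CC cmake ../ -DGMX_SIMD=AVX2_256 -DGMX_MPI=ON -DGMX_GPU=ON
-DCMAKE_PREFIX_PATH=${FFTW_DIR}/.. -DGMX_BUILD_SHARED_EXE=OFF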

Thanks,
Chris

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Szilárd 
Páll
Sent: Friday, April 06, 2018 2:40 PM
To: Discussion list for GROMACS users 
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Problem with CUDA

On CSCS Piz Daint I use the following command line (assuming
PrgEnv-gnu) where everything in "[]" is optional and compilation should work 
just fine without.

CC=cc CXX=CC cmake ../
-DGMX_SIMD=THE_RIGHT_SIMD_FLAVOR
-DGMX_MPI=ON
-DGMX_GPU=ON
-DCMAKE_PREFIX_PATH=${FFTW_DIR}/.. \
[ -DGMX_FFT_LIBRARY=fftw3
-DGMX_CUDA_TARGET_SM=60
-DGMX_PREFER_STATIC_LIBS=ON
-DBUILD_SHARED_LIBS=OFF
-DGMX_BUILD_MDRUN_ONLY=ON
-DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF ]

In fact, other than the "-DCMAKE_PREFIX_PATH=${FFTW_DIR}/.." and
setting the right compiler wrappers, the rest is usually unnecessary 
for a "vanilla" build.

Cheers,
--
Szilárd


On Fri, Apr 6, 2018 at 8:32 PM, Borchert, Christopher B ERDC-RDE-ITL-MS 
Contractor 
wrote:
> You are trying to give me a hint. :) My cmake args are taken from a 
> co-worker, and the statement syntax is from the CMakeCache.txt file. On a 
> Cray you force cc/CC and all the module libraries/headers should be 
> automatically found. Strangely it didn’t find fftw without help. Regardless I 
> get the fpic error. But you've given me a path to investigate. Thanks.
>
> cmake ..
> -DGMX_GPU=ON
> -DCMAKE_C_COMPILER:FILEPATH=`which cc` 
> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC`
> -DGMX_FFT_LIBRARY=fftw3
> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
>
> /usr/bin/ld: 
> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_
> cuda.cu.o: relocation R_X86_64_32 against 
> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_at
> omdata10cu_nbparam8cu_plistb' can not be used when making a shared 
> object; recompile with -fPIC
>
> Chris
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf 
> Of Szilárd Páll
> Sent: Friday, April 06, 2018 12:05 PM
> To: Discussion list for GROMACS users 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Problem with CUDA
>
> FYI: not even my vanilla (non-CRAY) local build, which otherwise works, 
> succeeds with cmake . -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
> so as I guessed, that's the culprit.
>
> Out of curiosity I wonder: what's the reason for the "inventive" use of CMake 
> options, none of which are needed?
>
> --
> Szilárd
>
>
> On Fri, Apr 6, 2018 at 6:57 PM, Szilárd Páll  wrote:
>> I think the fpic errors can't be caused by missing rdc=true because 
>> the latter refers to the GPU _device_ code, but GROMACS does not need 
>> relocatable device code, so that should not be necessary.
>> --
>> Szilárd
>>
>>
>> On Fri, Apr 6, 2018 at 6:33 PM, Borchert, Christopher B 
>> ERDC-RDE-ITL-MS Contractor 
>> wrote:
>>> Thanks Szilárd. My understanding is rdc is nvcc's equivalent of fpic. I get 
>>> fpic errors without it. In fact I get fpic errors without including fpic 
>>> explicitly in the C/CXX flags.
>>>
>>> /usr/bin/ld:
>>> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnx
>>> n
>>> _cuda.cu.o: relocation R_X86_64_32 against 
>>> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_
>>> a tomdata10cu_nbparam8cu_plistb' can not be used when making a 
>>> shared object; recompile with -fPIC
>>>
>>> So I removed boost, avx2, mpi, and dynamic but get the same result. What 
>>> else should I remove?
>>>
>>> cmake ..
>>> -DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE
>>> -DCMAKE_C_COMPILER:FILEPATH=`which cc` -DCMAKE_C_FLAGS:STRING=-fPIC 
>>> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC` 
>>> -DCMAKE_CXX_FLAGS:STRING=-fPIC -DCMAKE_INSTALL_PREFIX:PATH=$PREFIX
>>> -DGMX_FFT_LIBRARY=fftw3
>>> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
>>> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
>>> -DGMX_GPU=ON
>>> -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
>>>
>>> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -fPIC -std=c++11   
>>> -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast   
>>> 

[gmx-users] GROMACS 2018 MDRun: Multiple Ranks/GPU Issue

2018-04-06 Thread Hollingsworth, Bobby
Hello all,

I'm tuning mdrun on a node with 24 Intel Skylake cores (2x12) and two V100
GPUs. MPI-enabled (no thread-MPI) GROMACS 2018 ("2018.0") is compiled with
GCC, CUDA, OpenMPI, and OpenBLAS. I am trying to assign the two GPUs to
four ranks. My run command is:

mpirun -np 4 mdrun_mpi_s_g -ntomp 6 -pme cpu -nb gpu -gputasks 0011 -deffnm
test_2018

However, I'm getting the error:

Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
  PP:0,PP:0,PP:1,PP:1

NOTE: You assigned the same GPU ID(s) to multiple ranks, which is a good
idea if you have measured the performance of alternatives.

---
Program: mdrun_mpi_s_g, version 2018
Source file: src/gromacs/gpu_utils/gpu_utils.cu (line 127)
MPI rank:1 (out of 4)

Fatal error:
cudaFuncGetAttributes failed: all CUDA-capable devices are busy or
unavailable


The launch configuration works with -np 2 and -ntomp 12. Presumably, there
is an issue with a GPU being shared by multiple ranks. Any advice here? Thanks!
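
One thing I have not ruled out yet is the GPU compute mode -- my understanding
(an assumption on my part, not something confirmed here) is that cards set to
exclusive-process mode give exactly this "busy or unavailable" error when two
ranks share one GPU. The check I plan to run:

nvidia-smi --query-gpu=index,compute_mode --format=csv
# and, if they turn out to be exclusive (needs root):
sudo nvidia-smi -c DEFAULT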

Best,
Bobby
-- 
Louis "Bobby" Hollingsworth
Ph.D. Student, Biological and Biomedical Sciences, Harvard University
B.S. Chemical Engineering, B.S. Biochemistry, B.A. Chemistry, Virginia Tech
Honors College '17



Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
On CSCS Piz Daint I use the following command line (assuming
PrgEnv-gnu) where everything in "[]" is optional and compilation
should work just fine without it.

CC=cc CXX=CC cmake ../
-DGMX_SIMD=THE_RIGHT_SIMD_FLAVOR
-DGMX_MPI=ON
-DGMX_GPU=ON
-DCMAKE_PREFIX_PATH=${FFTW_DIR}/.. \
[ -DGMX_FFT_LIBRARY=fftw3
-DGMX_CUDA_TARGET_SM=60
-DGMX_PREFER_STATIC_LIBS=ON
-DBUILD_SHARED_LIBS=OFF
-DGMX_BUILD_MDRUN_ONLY=ON
-DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF ]

In fact, other than the "-DCMAKE_PREFIX_PATH=${FFTW_DIR}/.." and
setting the right compiler wrappers, the rest is usually
unnecessary for a "vanilla" build.

Cheers,
--
Szilárd


On Fri, Apr 6, 2018 at 8:32 PM, Borchert, Christopher B
ERDC-RDE-ITL-MS Contractor 
wrote:
> You are trying to give me a hint. :) My cmake args are taken from a 
> co-worker, and the statement syntax is from the CMakeCache.txt file. On a 
> Cray you force cc/CC and all the module libraries/headers should be 
> automatically found. Strangely it didn’t find fftw without help. Regardless I 
> get the fpic error. But you've given me a path to investigate. Thanks.
>
> cmake ..
> -DGMX_GPU=ON
> -DCMAKE_C_COMPILER:FILEPATH=`which cc`
> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC`
> -DGMX_FFT_LIBRARY=fftw3
> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
>
> /usr/bin/ld: 
> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda.cu.o:
>  relocation R_X86_64_32 against 
> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_atomdata10cu_nbparam8cu_plistb'
>  can not be used when making a shared object; recompile with -fPIC
>
> Chris
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of 
> Szilárd Páll
> Sent: Friday, April 06, 2018 12:05 PM
> To: Discussion list for GROMACS users 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Problem with CUDA
>
> FYI: not even my vanilla (non-CRAY) local build, which otherwise works, 
> succeeds with cmake . -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
> so as I guessed, that's the culprit.
>
> Out of curiosity I wonder: what's the reason for the "inventive" use of CMake 
> options, none of which are needed?
>
> --
> Szilárd
>
>
> On Fri, Apr 6, 2018 at 6:57 PM, Szilárd Páll  wrote:
>> I think the fpic errors can't be caused by missing rdc=true because
>> the latter refers to the GPU _device_ code, but GROMACS does not need
>> relocatable device code, so that should not be necessary.
>> --
>> Szilárd
>>
>>
>> On Fri, Apr 6, 2018 at 6:33 PM, Borchert, Christopher B
>> ERDC-RDE-ITL-MS Contractor 
>> wrote:
>>> Thanks Szilárd. My understanding is rdc is nvcc's equivalent of fpic. I get 
>>> fpic errors without it. In fact I get fpic errors without including fpic 
>>> explicitly in the C/CXX flags.
>>>
>>> /usr/bin/ld:
>>> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn
>>> _cuda.cu.o: relocation R_X86_64_32 against
>>> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_a
>>> tomdata10cu_nbparam8cu_plistb' can not be used when making a shared
>>> object; recompile with -fPIC
>>>
>>> So I removed boost, avx2, mpi, and dynamic but get the same result. What 
>>> else should I remove?
>>>
>>> cmake ..
>>> -DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE
>>> -DCMAKE_C_COMPILER:FILEPATH=`which cc` -DCMAKE_C_FLAGS:STRING=-fPIC
>>> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC`
>>> -DCMAKE_CXX_FLAGS:STRING=-fPIC -DCMAKE_INSTALL_PREFIX:PATH=$PREFIX
>>> -DGMX_FFT_LIBRARY=fftw3
>>> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
>>> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
>>> -DGMX_GPU=ON
>>> -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
>>>
>>> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -fPIC -std=c++11   
>>> -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast   
>>> CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
>>> -Wl,-rpath,/p/work/borchert/gromacs-2018.1/build/lib 
>>> ../../lib/libgromacs.so.3.1.0 -fopenmp -lm
>>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>>> `__cudaRegisterLinkedBinary_59_tmpxft_01e3__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
>>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>>> `__cudaRegisterLinkedBinary_57_tmpxft_a64f__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
>>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>>> `__cudaRegisterLinkedBinary_71_tmpxft_03a4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
>>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>>> `__cudaRegisterLinkedBinary_58_tmpxft_a80b__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
>>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>>> 

[gmx-users] gmx select and gmx trjconv or gmx density

2018-04-06 Thread Dan Gil
Hi,

I am trying to select particles that are z < 20 and z > 10 as a function of
time.

I think I used gmx select correctly and generated an index file with group
names like "(z_<_.8_f2882_t23056.000)"

Now I am unsure how to use this index file with gmx trjconv or gmx density.
Does anybody have some advice?
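
For concreteness, the kind of invocation I have in mind (file names, the group
selection and the nm values are just placeholders, and I am not sure whether
these tools honour the per-frame dynamic groups this way):

gmx select -s topol.tpr -f traj.xtc -select 'z > 1.0 and z < 2.0' -on zslice.ndx
gmx density -s topol.tpr -f traj.xtc -n zslice.ndx -o density.xvg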

Thanks in advance.

Best Regards,

Dan Gil
PhD Student
Case Western Reserve University
Department of Chemical and Biomolecular Engineering


Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Borchert, Christopher B ERDC-RDE-ITL-MS Contractor
You are trying to give me a hint. :) My cmake args are taken from a co-worker, 
and the statement syntax is from the CMakeCache.txt file. On a Cray you force 
cc/CC, and all the module libraries/headers should be found automatically. 
Strangely, it didn’t find fftw without help. Regardless, I get the fPIC error. 
But you've given me a path to investigate. Thanks.

cmake .. 
-DGMX_GPU=ON  
-DCMAKE_C_COMPILER:FILEPATH=`which cc` 
-DCMAKE_CXX_COMPILER:FILEPATH=`which CC`  
-DGMX_FFT_LIBRARY=fftw3  
-DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so  
-DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC

/usr/bin/ld: 
CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda.cu.o:
 relocation R_X86_64_32 against 
`_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_atomdata10cu_nbparam8cu_plistb'
 can not be used when making a shared object; recompile with -fPIC

Chris

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Szilárd 
Páll
Sent: Friday, April 06, 2018 12:05 PM
To: Discussion list for GROMACS users 
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Problem with CUDA

FYI: not even my vanilla (non-CRAY) local build, which otherwise works, 
succeeds with cmake . -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
so as I guessed, that's the culprit.

Out of curiosity I wonder: what's the reason for the "inventive" use of CMake 
options, none of which are needed?

--
Szilárd


On Fri, Apr 6, 2018 at 6:57 PM, Szilárd Páll  wrote:
> I think the fpic errors can't be caused by missing rdc=true because 
> the latter refers to the GPU _device_ code, but GROMACS does not need 
> relocatable device code, so that should not be necessary.
> --
> Szilárd
>
>
> On Fri, Apr 6, 2018 at 6:33 PM, Borchert, Christopher B 
> ERDC-RDE-ITL-MS Contractor 
> wrote:
>> Thanks Szilárd. My understanding is rdc is nvcc's equivalent of fpic. I get 
>> fpic errors without it. In fact I get fpic errors without including fpic 
>> explicitly in the C/CXX flags.
>>
>> /usr/bin/ld: 
>> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn
>> _cuda.cu.o: relocation R_X86_64_32 against 
>> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_a
>> tomdata10cu_nbparam8cu_plistb' can not be used when making a shared 
>> object; recompile with -fPIC
>>
>> So I removed boost, avx2, mpi, and dynamic but get the same result. What 
>> else should I remove?
>>
>> cmake ..
>> -DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE
>> -DCMAKE_C_COMPILER:FILEPATH=`which cc` -DCMAKE_C_FLAGS:STRING=-fPIC 
>> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC` 
>> -DCMAKE_CXX_FLAGS:STRING=-fPIC -DCMAKE_INSTALL_PREFIX:PATH=$PREFIX
>> -DGMX_FFT_LIBRARY=fftw3
>> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
>> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
>> -DGMX_GPU=ON
>> -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
>>
>> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -fPIC -std=c++11   
>> -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast   
>> CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
>> -Wl,-rpath,/p/work/borchert/gromacs-2018.1/build/lib 
>> ../../lib/libgromacs.so.3.1.0 -fopenmp -lm
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_59_tmpxft_01e3__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_57_tmpxft_a64f__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_71_tmpxft_03a4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_58_tmpxft_a80b__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_a10b__21_pme_3dfft_compute_61_cpp1_ii_79dff388'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_57_tmpxft_9bd7__21_nbnxn_cuda_compute_61_cpp1_ii_f147f02c'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_50_tmpxft_a9c8__21_pme_compute_61_cpp1_ii_6dbf966c'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_a490__21_pme_solve_compute_61_cpp1_ii_06051a94'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_ab85__21_cudautils_compute_61_cpp1_ii_25933dd5'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_54_tmpxft_aefc__21_pinning_compute_61_cpp1_ii_5d0f4aae'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> 

[gmx-users] using constant velocities during simulation

2018-04-06 Thread Qasim Pars
Dear users,

I would like to use constant velocities for all the atoms of both the protein 
and the ligand during all the simulation steps (EM, NVT, NPT and MD). Do you know 
how I can do that with GROMACS? In this case, the COM, the linear momentum and 
the angular momentum of both groups wouldn't change during the simulation, and the 
internal motions would still be retained. However, using constant velocities won't 
alter the statistical consistency of the simulations...

Thanks.



Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
FYI: not even my vanilla (non-CRAY) local build, which otherwise works,
succeeds with
cmake . -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
so as I guessed, that's the culprit.

Out of curiosity I wonder: what's the reason for the "inventive" use
of CMake options, none of which are needed?

--
Szilárd


On Fri, Apr 6, 2018 at 6:57 PM, Szilárd Páll  wrote:
> I think the fpic errors can't be caused by missing rdc=true because
> the latter refers to the GPU _device_ code, but GROMACS does not need
> relocatable device code, so that should not be necessary.
> --
> Szilárd
>
>
> On Fri, Apr 6, 2018 at 6:33 PM, Borchert, Christopher B
> ERDC-RDE-ITL-MS Contractor 
> wrote:
>> Thanks Szilárd. My understanding is rdc is nvcc's equivalent of fpic. I get 
>> fpic errors without it. In fact I get fpic errors without including fpic 
>> explicitly in the C/CXX flags.
>>
>> /usr/bin/ld: 
>> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda.cu.o:
>>  relocation R_X86_64_32 against 
>> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_atomdata10cu_nbparam8cu_plistb'
>>  can not be used when making a shared object; recompile with -fPIC
>>
>> So I removed boost, avx2, mpi, and dynamic but get the same result. What 
>> else should I remove?
>>
>> cmake ..
>> -DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE
>> -DCMAKE_C_COMPILER:FILEPATH=`which cc`
>> -DCMAKE_C_FLAGS:STRING=-fPIC
>> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC`
>> -DCMAKE_CXX_FLAGS:STRING=-fPIC
>> -DCMAKE_INSTALL_PREFIX:PATH=$PREFIX
>> -DGMX_FFT_LIBRARY=fftw3
>> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
>> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
>> -DGMX_GPU=ON
>> -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
>>
>> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -fPIC -std=c++11   
>> -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast   
>> CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
>> -Wl,-rpath,/p/work/borchert/gromacs-2018.1/build/lib 
>> ../../lib/libgromacs.so.3.1.0 -fopenmp -lm
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_59_tmpxft_01e3__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_57_tmpxft_a64f__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_71_tmpxft_03a4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_58_tmpxft_a80b__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_a10b__21_pme_3dfft_compute_61_cpp1_ii_79dff388'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_57_tmpxft_9bd7__21_nbnxn_cuda_compute_61_cpp1_ii_f147f02c'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_50_tmpxft_a9c8__21_pme_compute_61_cpp1_ii_6dbf966c'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_a490__21_pme_solve_compute_61_cpp1_ii_06051a94'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_ab85__21_cudautils_compute_61_cpp1_ii_25933dd5'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_54_tmpxft_aefc__21_pinning_compute_61_cpp1_ii_5d0f4aae'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_56_tmpxft_ad42__21_gpu_utils_compute_61_cpp1_ii_70828085'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_57_tmpxft_a2d0__21_pme_gather_compute_61_cpp1_ii_a7a2f9c7'
>> ../../lib/libgromacs.so.3.1.0: undefined reference to 
>> `__cudaRegisterLinkedBinary_67_tmpxft_9f4e__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'
>>
>> Chris
>>
>> -Original Message-
>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of 
>> Szilárd Páll
>> Sent: Friday, April 06, 2018 10:17 AM
>> To: Discussion list for GROMACS users 
>> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
>> Subject: Re: [gmx-users] Problem with CUDA
>>
>> Hi,
>>
>> What is the reason for using the custom CMake options? What's the -rdc=true 
>> for -- I don't think it's needed and it can very well be causing the issue. 
>> Have you tried to actually do an as-vanilla-as-possible build?
>>
>> --
>> Szilárd
>>
>>
>> On Thu, Apr 5, 2018 at 6:52 PM, Borchert, Christopher B ERDC-RDE-ITL-MS 
>> Contractor 
>> wrote:
>>> Hello. I'm taking a 

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
I think the fpic errors can't be caused by missing rdc=true because
the latter refers to the GPU _device_ code, but GROMACS does not need
relocatable device code, so that should not be necessary.
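
(To spell out the distinction as I understand it: -rdc=true only concerns
relocatable _device_ code, whereas the fPIC error in question is about
position-independent _host_ code, which nvcc would take via something like

nvcc -Xcompiler -fPIC ...

-- but that is normally injected by the build system and should not need to be
set by hand.)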
--
Szilárd


On Fri, Apr 6, 2018 at 6:33 PM, Borchert, Christopher B
ERDC-RDE-ITL-MS Contractor 
wrote:
> Thanks Szilárd. My understanding is rdc is nvcc's equivalent of fpic. I get 
> fpic errors without it. In fact I get fpic errors without including fpic 
> explicitly in the C/CXX flags.
>
> /usr/bin/ld: 
> CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda.cu.o:
>  relocation R_X86_64_32 against 
> `_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_atomdata10cu_nbparam8cu_plistb'
>  can not be used when making a shared object; recompile with -fPIC
>
> So I removed boost, avx2, mpi, and dynamic but get the same result. What else 
> should I remove?
>
> cmake ..
> -DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE
> -DCMAKE_C_COMPILER:FILEPATH=`which cc`
> -DCMAKE_C_FLAGS:STRING=-fPIC
> -DCMAKE_CXX_COMPILER:FILEPATH=`which CC`
> -DCMAKE_CXX_FLAGS:STRING=-fPIC
> -DCMAKE_INSTALL_PREFIX:PATH=$PREFIX
> -DGMX_FFT_LIBRARY=fftw3
> -DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so
> -DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC
> -DGMX_GPU=ON
> -DCUDA_NVCC_FLAGS:STRING="-rdc=true"
>
> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -fPIC -std=c++11   
> -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast   
> CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
> -Wl,-rpath,/p/work/borchert/gromacs-2018.1/build/lib 
> ../../lib/libgromacs.so.3.1.0 -fopenmp -lm
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_59_tmpxft_01e3__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_a64f__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_71_tmpxft_03a4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_58_tmpxft_a80b__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_a10b__21_pme_3dfft_compute_61_cpp1_ii_79dff388'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_9bd7__21_nbnxn_cuda_compute_61_cpp1_ii_f147f02c'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_50_tmpxft_a9c8__21_pme_compute_61_cpp1_ii_6dbf966c'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_a490__21_pme_solve_compute_61_cpp1_ii_06051a94'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_ab85__21_cudautils_compute_61_cpp1_ii_25933dd5'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_54_tmpxft_aefc__21_pinning_compute_61_cpp1_ii_5d0f4aae'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_ad42__21_gpu_utils_compute_61_cpp1_ii_70828085'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_a2d0__21_pme_gather_compute_61_cpp1_ii_a7a2f9c7'
> ../../lib/libgromacs.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_67_tmpxft_9f4e__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'
>
> Chris
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of 
> Szilárd Páll
> Sent: Friday, April 06, 2018 10:17 AM
> To: Discussion list for GROMACS users 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Problem with CUDA
>
> Hi,
>
> What is the reason for using the custom CMake options? What's the -rdc=true 
> for -- I don't think it's needed and it can very well be causing the issue. 
> Have you tried to actually do an as-vanilla-as-possible build?
>
> --
> Szilárd
>
>
> On Thu, Apr 5, 2018 at 6:52 PM, Borchert, Christopher B ERDC-RDE-ITL-MS 
> Contractor 
> wrote:
>> Hello. I'm taking a working build from a co-worker and trying to add GPU 
>> support on a Cray XC. CMake works but make fails. Both 2016 and 2018 die at 
> the same point -- can't find GROMACS's own routines.
>>
>> 2016.5:
>> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -O2 -fPIC -dynamic 
>> -std=c++0x   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast  
>> -dynamic CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
>> 

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Borchert, Christopher B ERDC-RDE-ITL-MS Contractor
Thanks Szilárd. My understanding is that rdc is nvcc's equivalent of fPIC. I get 
fPIC errors without it. In fact, I get fPIC errors even without including fPIC 
explicitly in the C/CXX flags.

/usr/bin/ld: 
CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda.cu.o:
 relocation R_X86_64_32 against 
`_Z58nbnxn_kernel_ElecEwQSTabTwinCut_VdwLJEwCombLB_F_prune_cuda11cu_atomdata10cu_nbparam8cu_plistb'
 can not be used when making a shared object; recompile with -fPIC

So I removed boost, avx2, mpi, and dynamic but get the same result. What else 
should I remove?

cmake ..   
-DCMAKE_VERBOSE_MAKEFILE:BOOL=TRUE  
-DCMAKE_C_COMPILER:FILEPATH=`which cc` 
-DCMAKE_C_FLAGS:STRING=-fPIC  
-DCMAKE_CXX_COMPILER:FILEPATH=`which CC` 
-DCMAKE_CXX_FLAGS:STRING=-fPIC  
-DCMAKE_INSTALL_PREFIX:PATH=$PREFIX  
-DGMX_FFT_LIBRARY=fftw3 
-DFFTWF_LIBRARY:FILEPATH=${FFTW_DIR}/libfftw3f.so  
-DFFTWF_INCLUDE_DIR:PATH=$FFTW_INC  
-DGMX_GPU=ON  
-DCUDA_NVCC_FLAGS:STRING="-rdc=true"

/opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -fPIC -std=c++11   -O3 
-DNDEBUG -funroll-all-loops -fexcess-precision=fast   
CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
-Wl,-rpath,/p/work/borchert/gromacs-2018.1/build/lib 
../../lib/libgromacs.so.3.1.0 -fopenmp -lm 
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_59_tmpxft_01e3__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_a64f__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_71_tmpxft_03a4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_58_tmpxft_a80b__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_a10b__21_pme_3dfft_compute_61_cpp1_ii_79dff388'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_9bd7__21_nbnxn_cuda_compute_61_cpp1_ii_f147f02c'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_50_tmpxft_a9c8__21_pme_compute_61_cpp1_ii_6dbf966c'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_a490__21_pme_solve_compute_61_cpp1_ii_06051a94'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_ab85__21_cudautils_compute_61_cpp1_ii_25933dd5'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_54_tmpxft_aefc__21_pinning_compute_61_cpp1_ii_5d0f4aae'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_56_tmpxft_ad42__21_gpu_utils_compute_61_cpp1_ii_70828085'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_57_tmpxft_a2d0__21_pme_gather_compute_61_cpp1_ii_a7a2f9c7'
../../lib/libgromacs.so.3.1.0: undefined reference to 
`__cudaRegisterLinkedBinary_67_tmpxft_9f4e__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'

Chris

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Szilárd 
Páll
Sent: Friday, April 06, 2018 10:17 AM
To: Discussion list for GROMACS users 
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Problem with CUDA

Hi,

What is the reason for using the custom CMake options? What's the -rdc=true for 
-- I don't think it's needed and it can very well be causing the issue. Have 
you tried to actually do an as-vanilla-as-possible build?

--
Szilárd


On Thu, Apr 5, 2018 at 6:52 PM, Borchert, Christopher B ERDC-RDE-ITL-MS 
Contractor 
wrote:
> Hello. I'm taking a working build from a co-worker and trying to add GPU 
> support on a Cray XC. CMake works but make fails. Both 2016 and 2018 die at 
> the same point -- can't find GROMACS's own routines.
>
> 2016.5:
> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -O2 -fPIC -dynamic 
> -std=c++0x   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast  
> -dynamic CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
> -Wl,-rpath,/p/work/cots/gromacs-2016.5/build/lib:/opt/nvidia/cudatoolkit8.0/8.0.54_2.3.12_g180d272-2.2/lib64/stubs
>  -dynamic ../../lib/libgromacs_mpi.so.2.5.0 -fopenmp -lcudart 
> /opt/nvidia/cudatoolkit8.0/8.0.54_2.3.12_g180d272-2.2/lib64/stubs/libnvidia-ml.so
>  -lhwloc -lz -ldl -lrt -lm -lfftw3f
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_59_tmpxft_0001bc78__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
> ../../lib/libgromacs_mpi.so.2.5.0: undefined 

Re: [gmx-users] chain separator issue for the "gmx do_dssp": do NOT need to provide a "ss.map" file

2018-04-06 Thread ZHANG Cheng
Also, can the "map file format" page be updated with "chain separator"?
http://manual.gromacs.org/online/map.html




9
~  Coil   1   1   1
E   B-Sheet   1   0   0
B  B-Bridge   0   0   0
S  Bend   0 0.5   0
T  Turn   1   1   0
H   A-Helix   0   0   1
I   5-Helix 0.5   0 0.5
G   3-Helix 0.5 0.5 0.5
=   Chain_Separator 0.9 0.9 0.9





-- Original --
From:  "ZHANG Cheng"<272699...@qq.com>;
Date:  Fri, Apr 6, 2018 10:13 PM
To:  "gromacs.org_gmx-users";"ZHANG 
Cheng"<272699...@qq.com>;

Subject:  chain separator issue for the "gmx do_dssp": do NOT need to provide a 
"ss.map" file



I would like to share my answer to the chain separator issue for "gmx 
do_dssp". Millions of thanks to Carsten!


The "gmx do_dssp" will output an additional line as chain separator between two 
chains. We do NOT need to provide a "ss.map" file in our working directory, and 
the command will find the default "ss.map" file automatically, and the ss.xpm 
file will have a line of "" as the chain separator.


I created my own ss.map file based on 
http://manual.gromacs.org/online/map.html, and got "~~" as the chain 
separator, which is the same as coils. So do not do this.
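
In other words, simply running something like (file names are placeholders)

gmx do_dssp -s topol.tpr -f traj.xtc -o ss.xpm

without any -map argument picks up the default ss.map and gives the proper
chain separator.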


Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-06 Thread Szilárd Páll
I went on to refresh my memory a bit and I thought I'd share a summary
for those curious.

The actual clocks a CPU will be running at are determined by a number of
factors: base and boost clocks (by spec) and the actual boost clocks
achievable (depending on cores used, workload, TDP); this is not
hugely different across architectures.

When it comes to Intel, things are somewhat complicated. Since Haswell
the base clocks depend on the vector instructions used: there is a
"non-AVX" base, which is what the official specs show, and a lower one
for AVX, generally a few hundred MHz less (and one that only in-depth
searching will reveal). More recently, on Skylake, a further clock reduction
is applied when AVX-512 code is executed; more info on WikiChip [0].

Intel has been (sadly getting away with) omitting this data from their
specs for quite a long time, e.g. [1], and details are hard to find. For
Broadwell CPUs, here's the data sheet that shows "turbo bin" vs cores
used: [2], but that's missing the AVX clocks, and I have not come
across any official info on those; there is, however, a decent guide that
has a lot of data nicely visualized [3]. With Skylake they've been
somewhat more open about the specs, and similar data is available on
spec sheets [4] or on the previously linked site [5].

[0] https://en.wikichip.org/wiki/intel/frequency_behavior
[1] 
https://ark.intel.com/products/64593/Intel-Xeon-Processor-E5-2630-15M-Cache-2_30-GHz-7_20-GTs-Intel-QPI
[2] 
https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf
[3] 
https://www.microway.com/knowledge-center-articles/detailed-specifications-of-the-intel-xeon-e5-2600v4-broadwell-ep-processors/
[4] 
https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html
[5] 
https://www.microway.com/knowledge-center-articles/detailed-specifications-of-the-skylake-sp-intel-xeon-processor-scalable-family-cpus/
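
(A practical aside, not from any of the links above: to see the clocks actually
sustained under an AVX-heavy mdrun rather than the spec numbers, something as
simple as

watch -n 1 'grep "cpu MHz" /proc/cpuinfo'

on the compute node is usually enough; tools like turbostat give more detail.)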
--
Szilárd


On Thu, Apr 5, 2018 at 10:22 AM, Jochen Hub  wrote:
>
>
> Am 03.04.18 um 19:03 schrieb Szilárd Páll:
>
>> On Tue, Apr 3, 2018 at 5:10 PM, Jochen Hub  wrote:
>>>
>>>
>>>
>>> Am 03.04.18 um 16:26 schrieb Szilárd Páll:
>>>
 On Tue, Apr 3, 2018 at 3:41 PM, Jochen Hub  wrote:
>
>
>
> Am 29.03.18 um 20:57 schrieb Szilárd Páll:
>
>> Hi Jochen,
>>
>> For that particular benchmark I only measured performance with
>> 1,2,4,8,16 cores with a few different kinds of GPUs. It would be easy
>> to do the runs on all possible core counts with increments of 1, but
>> that won't tell a whole lot more than what the performance is of a run
>> using a E5-2620 v4 CPU (with some GPUs) on a certain core count. Even
>> extrapolating from that 2620 to a E5-2630 v4 and expecting to get a
>> good estimate is tricky (given that the latter has 25% more cores for
>> the same TDP!), let alone to any 26xxv4 CPU or the current-gen Skylake
>> chips which have different performance characteristics.
>>
>> As Mark notes, there are some mdp option as well as some system
>> charateristics that will have a strong influence on performance -- if
>> tens of % is something you consider "strong" (some users are fine to
>> be within a 2x ballpark :).
>>
>> What's worth considering is to try to avoid ending up strongly CPU or
>> GPU bound from start. That may admittedly could be a difficult task
>> you would run e.g. both biased MD with large pull groups and all-bonds
>> constraints with Amber FF on large-ish (>100k) systems as well as
>> vanilla MD with CHARMM FF with small-ish (<25k) systems. On the same
>> hardware the former will be more prone to be CPU-bound while the
>> latter will have relatively more GPU-heavy workload.
>>
>> There are many factors that influence the performance of a run and
>> therefore giving the one right answer to your question is not really
>> possible. What one can say is that 7-10 "core-GHz" per fast Pascal GPU is
>> generally sufficient for "typical" protein simulations to run at >=85%
>> of peak.
>
>
>
>
> Hi Szilárd,
>
> many thanks, this alrady helps me a lot. Just to get it 100% clear what
> you
> mean with core-GHz: A 10-core E5-2630v4 with 2.2 GHz would have 22
> core-GHz,
> right?



 Yes, that's what I was referring to; note that a 2630v4 won't be
 running at a 2.2 GHz base clock if you run AVX code ;)
>>>
>>>
>>>
>>> Okay, I didn't know this. What would be the base clock instead with AVX
>>> code?
>>
>>
>>
>> Short version: It's not easy to find out details as Intel conveniently
>> omits it from the specs, but it's AFAIK 3-400 MHz lower; also note
>> that "turbo bins" change as a function of cores used (so you can't
>> just benchmark on a few cores leaving the rest idle). Also, the actual
>> clock speed (and overall performance) depend on other factors too so

[gmx-users] Speed up simulations with GROMACS with virtual interaction sites

2018-04-06 Thread ABEL Stephane
I know the paper, but not the webpage.

Thank you, Viveca

--

Message: 2
Date: Fri, 6 Apr 2018 16:45:45 +0200
From: Viveca Lindahl 
To: gmx-us...@gromacs.org
Cc: "gromacs.org_gmx-users@maillist.sys.kth.se"

Subject: Re: [gmx-users] Speed up simulations with GROMACS with
virtual interaction sites
Message-ID:

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
Hi,

What is the reason for using the custom CMake options? What's the
-rdc=true for -- I don't think it's needed and it can very well be
causing the issue. Have you tried to actually do an
as-vanilla-as-possible build?
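
(By "as vanilla as possible" I mean essentially nothing beyond the compiler
wrappers and the GPU/MPI switches, e.g.

CC=cc CXX=CC cmake .. -DGMX_MPI=ON -DGMX_GPU=ON

with everything else left at its default.)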

--
Szilárd


On Thu, Apr 5, 2018 at 6:52 PM, Borchert, Christopher B
ERDC-RDE-ITL-MS Contractor 
wrote:
> Hello. I'm taking a working build from a co-worker and trying to add GPU 
> support on a Cray XC. CMake works but make fails. Both 2016 and 2018 die at 
> the same point -- can't find GROMACS's own routines.
>
> 2016.5:
> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -O2 -fPIC -dynamic 
> -std=c++0x   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast  
> -dynamic CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
> -Wl,-rpath,/p/work/cots/gromacs-2016.5/build/lib:/opt/nvidia/cudatoolkit8.0/8.0.54_2.3.12_g180d272-2.2/lib64/stubs
>  -dynamic ../../lib/libgromacs_mpi.so.2.5.0 -fopenmp -lcudart 
> /opt/nvidia/cudatoolkit8.0/8.0.54_2.3.12_g180d272-2.2/lib64/stubs/libnvidia-ml.so
>  -lhwloc -lz -ldl -lrt -lm -lfftw3f
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_59_tmpxft_0001bc78__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_0001bac2__21_gpu_utils_compute_61_cpp1_ii_d70ebee0'
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_0001b90b__21_cudautils_compute_61_cpp1_ii_24d20763'
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_71_tmpxft_0001c016__21_cuda_version_information_compute_61_cpp1_ii_e35285be'
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_0001b592__21_nbnxn_cuda_compute_61_cpp1_ii_6e47f057'
> ../../lib/libgromacs_mpi.so.2.5.0: undefined reference to 
> `__cudaRegisterLinkedBinary_67_tmpxft_0001b754__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'
> collect2: error: ld returned 1 exit status
>
> 2018.1:
> /opt/cray/pe/craype/2.5.13/bin/CC-march=core-avx2   -O2 -fPIC -dynamic 
> -std=c++11   -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast  
> -dynamic CMakeFiles/template.dir/template.cpp.o  -o ../../bin/template 
> -Wl,-rpath,/p/work/cots/gromacs-2018.1/build/lib 
> ../../lib/libgromacs_mpi.so.3.1.0 -fopenmp -lm
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_68a5__21_pme_3dfft_compute_61_cpp1_ii_79dff388'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_67_tmpxft_6621__21_nbnxn_cuda_data_mgmt_compute_61_cpp1_ii_a1eafeba'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_6f47__21_pme_spread_compute_61_cpp1_ii_d982d3ad'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_6d70__21_pme_solve_compute_61_cpp1_ii_06051a94'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_59_tmpxft_8da7__21_pmalloc_cuda_compute_61_cpp1_ii_63d60154'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_50_tmpxft_7930__21_pme_compute_61_cpp1_ii_6dbf966c'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_58_tmpxft_7382__21_pme_timings_compute_61_cpp1_ii_75ae0e44'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_6b11__21_pme_gather_compute_61_cpp1_ii_a7a2f9c7'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_7f9f__21_cudautils_compute_61_cpp1_ii_25933dd5'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_54_tmpxft_88f9__21_pinning_compute_61_cpp1_ii_5d0f4aae'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_57_tmpxft_39b7__21_nbnxn_cuda_compute_61_cpp1_ii_f147f02c'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_71_tmpxft_91d4__21_cuda_version_information_compute_61_cpp1_ii_8ab8dc1d'
> ../../lib/libgromacs_mpi.so.3.1.0: undefined reference to 
> `__cudaRegisterLinkedBinary_56_tmpxft_8407__21_gpu_utils_compute_61_cpp1_ii_70828085'
> collect2: error: ld returned 1 exit status
>
> BUILD INSTRUCTIONS:
> module swap PrgEnv-cray PrgEnv-gnu
> module swap gcc gcc/5.3.0
> export CRAYPE_LINK_TYPE=dynamic
>
> module load cudatoolkit/8.0.54_2.3.12_g180d272-2.2
> module load cmake/gcc-6.3.0/3.7.2
> module load fftw/3.3.4.11
> export BOOST_DIR=/app/unsupported/boost/1.64.0-gcc-6.3.0
>
> export 

Re: [gmx-users] Speed up simulations with GROMACS with virtual interaction sites

2018-04-06 Thread Viveca Lindahl
Hi,

There is this ancient GROMACS page, which I guess you already found:
http://www.gromacs.org/Documentation/How-tos/Removing_fastest_degrees_of_freedom

Here are a couple of references, the second for using virtual sites for
CHARMM lipids:

Bjelkmar, P., Larsson, P., Cuendet, M. A., Hess, B. & Lindahl, E.
Implementation of the CHARMM force field in GROMACS: Analysis of protein
stability effects from correction maps, virtual interaction sites, and
water models. Journal of Chemical Theory and Computation 6, 459–466 (2010).
Loubet, B., Kopec, W. & Khandelia, H. Accelerating all-atom MD simulations
of lipids using a modified virtual-sites technique. Journal of Chemical
Theory and Computation 10, 5690–5695 (2014).
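
In practice (a rough sketch from memory -- check the page and the papers above
for the details, and note that the force-field name assumes you have a CHARMM36
port installed), the GROMACS route is to generate the virtual sites when
building the topology and then raise the time step:

gmx pdb2gmx -f protein.pdb -ff charmm36 -water tip3p -vsite hydrogens

and in the mdp file something like

dt          = 0.004    ; 4 fs, enabled by the hydrogen virtual sites
constraints = h-bonds  ; or all-bonds, depending on the setup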


--
Viveca


On Fri, Apr 6, 2018 at 9:51 AM, ABEL Stephane  wrote:

> Hi gmx users,
>
> I know that it is possible to speed up  the simulations by a factor 2 (by
> using a larger timestep) in GROMACS with virtual interaction sites. By I do
> not find a clear procedure on the web in particular if I use CHARMM. Do you
> have any pointers or procedures and examples of mdp files to share with me
>
> Thanks
>
> Stéphane
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

[gmx-users] chain separator issue for the "gmx do_dssp": do NOT need to provide a "ss.map" file

2018-04-06 Thread ZHANG Cheng
I would like to share my answer to the chain separator issue for "gmx 
do_dssp". Millions of thanks to Carsten!


The "gmx do_dssp" will output an additional line as chain separator between two 
chains. We do NOT need to provide a "ss.map" file in our working directory, and 
the command will find the default "ss.map" file automatically, and the ss.xpm 
file will have a line of "" as the chain separator.


I created my own ss.map file based on 
http://manual.gromacs.org/online/map.html, and got "~~" as the chain 
separator, which is the same as coils. So do not do this.


Re: [gmx-users] How to search answers for previous posts?

2018-04-06 Thread Szilárd Páll
Use Google; the "site:" keyword is ideal for that.
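
For example, something along these lines (adjust the search terms, of course):

site:mailman-1.sys.kth.se gmx-users do_dssp chain separator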
--
Szilárd


On Fri, Apr 6, 2018 at 3:51 PM, ZHANG Cheng <272699...@qq.com> wrote:
> Dear Gromacs,
> I know I can see all the post from
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/
>
>
> but can I search from this link? I do not want to download all of them to my 
> PC.
>
>
> Thank you.
>
>
> Yours sincerely
> Cheng
> --
> Gromacs Users mailing list
>
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

[gmx-users] How to search answers for previous posts?

2018-04-06 Thread ZHANG Cheng
Dear Gromacs,
I know I can see all the post from
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/


but can I search from this link? I do not want to download all of them to my PC.


Thank you.


Yours sincerely
Cheng


Re: [gmx-users] Speed up simulations with GROMACS with virtual interaction sites

2018-04-06 Thread Anthony Nash
Hi Stephane,

I hope it is useful. I am afraid I have very little CHARMM experience; I used 
the posted approach with an Amber force field. I hope someone can help with the 
CHARMM aspect.

Kind regards
Anthony Nash PhD MRSC
Department of Physiology, Anatomy, and Genetics
University of Oxford

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of ABEL Stephane 
[stephane.a...@cea.fr]
Sent: 06 April 2018 11:20
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Speed up simulations with GROMACS with virtual 
interaction sites

Many thanks, Anthony.

I will read your blog post. I have another quick question: does the approach 
work by default (without modifications) for other biomolecules such as surfactants, 
or is it necessary to construct a virtual-site table like the one provided for 
proteins in the charmm*.ff distribution?

Stéphane



--

Message: 3
Date: Fri, 6 Apr 2018 09:33:46 +
From: Anthony Nash 
To: "gmx-us...@gromacs.org" 
Subject: Re: [gmx-users] Speed up simulations with GROMACS with
virtual interaction sites
Message-ID:
<84462751076e544cbb9e12bcca3e2d49389...@mbx12.ad.oak.ox.ac.uk>
Content-Type: text/plain; charset="iso-8859-1"


Hi,

I tried something like this about a year ago and I put an instructional blog 
post together. Warning: it was pulled together from a paper I found and not my 
own efforts although I did get it to work. Any questions I might not be able to 
respond, I'm on vacation!

https://distributedscience.wordpress.com/2017/06/19/speeding-up-md-simulations-in-explicit-solvent/

Kind regards
Anthony Nash PhD MRSC
Department of Physiology, Anatomy, and Genetics
University of Oxford

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of ABEL Stephane 
[stephane.a...@cea.fr]
Sent: 06 April 2018 08:51
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Speed up simulations with GROMACS with virtual 
interaction sites

Hi gmx users,

I know that it is possible to speed up  the simulations by a factor 2 (by using 
a larger timestep) in GROMACS with virtual interaction sites. By I do not find 
a clear procedure on the web in particular if I use CHARMM. Do you have any 
pointers or procedures and examples of mdp files to share with me

Thanks

St?phane

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


--

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

End of gromacs.org_gmx-users Digest, Vol 168, Issue 24
**
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Speed up simulations with GROMACS with virtual interaction sites

2018-04-06 Thread ABEL Stephane
Many thanks, Anthony.

I will read your blog post. I have another quick question: does the approach 
work by default (without modifications) for other biomolecules such as surfactants, 
or is it necessary to construct a virtual-site table like the one provided for 
proteins in the charmm*.ff distribution?

Stéphane



--

Message: 3
Date: Fri, 6 Apr 2018 09:33:46 +
From: Anthony Nash 
To: "gmx-us...@gromacs.org" 
Subject: Re: [gmx-users] Speed up simulations with GROMACS with
virtual interaction sites
Message-ID:
<84462751076e544cbb9e12bcca3e2d49389...@mbx12.ad.oak.ox.ac.uk>
Content-Type: text/plain; charset="iso-8859-1"


Hi,

I tried something like this about a year ago and I put an instructional blog 
post together. Warning: it was pulled together from a paper I found and not my 
own efforts although I did get it to work. Any questions I might not be able to 
respond, I'm on vacation!

https://distributedscience.wordpress.com/2017/06/19/speeding-up-md-simulations-in-explicit-solvent/

Kind regards
Anthony Nash PhD MRSC
Department of Physiology, Anatomy, and Genetics
University of Oxford

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of ABEL Stephane 
[stephane.a...@cea.fr]
Sent: 06 April 2018 08:51
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Speed up simulations with GROMACS with virtual 
interaction sites

Hi gmx users,

I know that it is possible to speed up  the simulations by a factor 2 (by using 
a larger timestep) in GROMACS with virtual interaction sites. By I do not find 
a clear procedure on the web in particular if I use CHARMM. Do you have any 
pointers or procedures and examples of mdp files to share with me

Thanks

St?phane

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


--

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

End of gromacs.org_gmx-users Digest, Vol 168, Issue 24
**


Re: [gmx-users] Speed up simulations with GROMACS with virtual interaction sites

2018-04-06 Thread Anthony Nash

Hi,

I tried something like this about a year ago and I put an instructional blog 
post together. Warning: it was pulled together from a paper I found rather than my 
own efforts, although I did get it to work. I might not be able to respond to 
questions; I'm on vacation!

https://distributedscience.wordpress.com/2017/06/19/speeding-up-md-simulations-in-explicit-solvent/
 

Kind regards
Anthony Nash PhD MRSC
Department of Physiology, Anatomy, and Genetics
University of Oxford

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of ABEL Stephane 
[stephane.a...@cea.fr]
Sent: 06 April 2018 08:51
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Speed up simulations with GROMACS with virtual 
interaction sites

Hi gmx users,

I know that it is possible to speed up  the simulations by a factor 2 (by using 
a larger timestep) in GROMACS with virtual interaction sites. By I do not find 
a clear procedure on the web in particular if I use CHARMM. Do you have any 
pointers or procedures and examples of mdp files to share with me

Thanks

Stéphane

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Speed up simulations with GROMACS with virtual interaction sites

2018-04-06 Thread ABEL Stephane
Hi gmx users, 

I know that it is possible to speed up simulations by a factor of 2 (by using 
a larger timestep) in GROMACS with virtual interaction sites. But I do not find 
a clear procedure on the web, in particular if I use CHARMM. Do you have any 
pointers, procedures or example mdp files to share with me?

Thanks

Stéphane 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.