Re: [gmx-users] Error while re-compiling Gromacs 5.1.2 with GPU and gcc 5.3.1
On 31-05-2016 17:25, Szilárd Páll wrote:
> Hi,
>
> Just because gcc 5.3 and CUDA/nvcc 7.5 are in some Ubuntu repos (partner,
> AFAIR), it does not mean they're automatically compatible. The NVIDIA
> documentation clearly indicates they're not.

Surely the Ubuntu CUDA maintainers patched 'include/host_config.h'. This is
from the stock NVIDIA 7.5 packages:

    #if defined(__GNUC__)
    #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9)
    #error -- unsupported GNU version! gcc versions later than 4.9 are not supported!

IMHO there may be a good reason for NVIDIA to enforce this (please correct me
if I'm wrong):
https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html

--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.
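The preprocessor gate quoted above is easy to reproduce outside nvcc. Here is a minimal shell sketch of the same check; the 4.9 cutoff is taken from the CUDA 7.5 host_config.h excerpt, and the function name is made up for illustration:

```shell
# Re-implements the CUDA 7.5 host_config.h gcc version gate in shell.
# The 4.9 cutoff mirrors the #error condition quoted above; the function
# name check_gcc_for_cuda75 is hypothetical.
check_gcc_for_cuda75() {  # args: gcc major version, gcc minor version
    if [ "$1" -gt 4 ] || { [ "$1" -eq 4 ] && [ "$2" -gt 9 ]; }; then
        echo "rejected"   # nvcc would emit: unsupported GNU version!
    else
        echo "accepted"
    fi
}

check_gcc_for_cuda75 5 3   # Ubuntu's gcc 5.3 -> rejected
check_gcc_for_cuda75 4 9   # gcc 4.9 -> accepted
```

A patched Ubuntu host_config.h would effectively relax this condition, which is exactly what makes the packaged combination build even though upstream NVIDIA rejects it.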
Re: [gmx-users] Bash scripting and Gromacs
On 26-04-2016 10:33, James Starlight wrote:
> No, in my case it recognizes ? as a literal ? in the script. I have
>
>     for sim in ${HOME}/${tit}* ; do
>         if [[ -d $sim ]]; then
>             simulation=$(basename "$sim")
>             cd ${sim}
>             rm dd_dump_err*.pdb
>             trjconv -s md_${tit}?.tpr -f md_${tit}?.trr \
>                 -o ${HOME}/output/${simulation}.xtc -n \
>                 -pbc mol -ur compact -fit trans < ${HOME}/enter.txt
>
> and gromacs ran
>
>     trjconv -s md_resp_complex?.tpr -f md_resp_complex?.trr \
>         -o /nfs_homes/clouddyn/MD_bench/Resp_cyt_cluster/output/resp_complex_conf7.xtc \
>         -n -pbc mol -ur compact -fit trans
>
> Is it possible to specify something more flexible, e.g. one of any
> character, or two of any characters, within the file name, like REGEX
> syntax? Thanks!

Brace expansion can be useful in such cases too:
http://www.thegeekstuff.com/2010/06/bash-shell-brace-expansion/
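To make the answer concrete, here is a small bash demonstration of the glob forms asked about ('?' matches exactly one character, '??' exactly two) next to brace expansion. The file names are made up for illustration and created in a throwaway temp directory:

```shell
# '?' matches exactly one character, '??' exactly two; brace expansion
# enumerates explicit alternatives without consulting the filesystem.
# Demo file names are hypothetical. Requires bash for the brace line.
tmp=$(mktemp -d) && cd "$tmp"
touch md_complex1.trr md_complex12.trr

ls md_complex?.trr          # matches only md_complex1.trr
ls md_complex??.trr         # matches only md_complex12.trr
echo md_complex{1,12}.trr   # expands to both names, files need not exist
```

Unlike globs, brace expansion happens before filename matching, so it is useful when the run numbers are known in advance rather than discovered on disk.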
Re: [gmx-users] grid cell out of range
On 08-12-2015 16:56, Szilárd Páll wrote:
> Hi,
>
> First obvious thing: v4.6.1 is severely outdated, and so is the compiler
> used, gcc 4.4. Does 4.6.7 compiled with something more recent reproduce
> the issue?

No, it doesn't; even with gcc 4.4 it runs fine. But the user requested this
specific version (4.6.1). :/

I've sent her your response. Thanks a lot, Szilárd!

[ ]'s
Re: [gmx-users] Testing gromacs on IBM POWER8
On 27-08-2015 04:17, Mark Abraham wrote:
> Hi,
>
> I have no idea what you're trying to show with these graphs. Your vertical
> axis of time makes it look like a 2.5-year-old AMD chip is walking all over
> POWER8? Other points:
>
> 1) GROMACS doesn't use configure or fortran, so don't mention that

This is part of a report in which we're testing Quantum Espresso as well;
that's why configure and fortran appear there.

> 2) these simulations do not use lapack or blas, so don't mention that

We were not aware of this. Can you point us to benchmarks that do use
blas/lapack, or tell us how to check whether a run uses them?

> 3) you need to clarify what a "CPU" is... core, hardware thread?

We've edited the graphics to make this clearer.

> 4) when you're using fewer cores than the whole node, then you need to
> report how you are managing thread affinity

We're setting one MPI process per core, then n OpenMP threads on that core's
hardware threads, using OpenMP's OMP_PROC_BIND and mpirun's bind-to-core.

> 5) the splines on the graphs are misleading when reporting discrete
> quantities

Would it be clearer using solid bars?

> 6) you need to report times that are measured after auto-tuning completes

Should we discard the time spent before auto-tuning completes?

> 7) you need to report whether you are using MPI, thread-MPI or OpenMP to
> distribute work to threads.

We only used MPI and OpenMP.

TIA,
Fabricio
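For point 4, the launch scheme described (one MPI rank per core, n OpenMP threads pinned to that core's hardware threads) can be sketched as below. The 20-core node, the thread count, and the gmx_mpi binary name are assumptions, and echo stands in for the real launch so the command is visible without an MPI installation:

```shell
# Dry-run sketch of the described binding: one MPI rank per physical core,
# OpenMP threads bound within the rank's core. Node size and binary name
# are hypothetical; echo prints the command instead of executing it.
RANKS=20        # assumed: one rank per physical core on a 20-core node
THREADS=4       # hardware threads used per core
CMD="mpirun -np $RANKS --bind-to core gmx_mpi mdrun -ntomp $THREADS"
echo "OMP_NUM_THREADS=$THREADS OMP_PROC_BIND=true $CMD"
```

Reporting exactly this line in the benchmark write-up would answer Mark's affinity question unambiguously.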
Re: [gmx-users] Testing gromacs on IBM POWER8
On 27-08-2015 10:03, Szilárd Páll wrote:
> A few more things to add:
>
> - we don't have CPU SIMD kernels for Power8, so comparing the plain C
> kernels against AVX kernels on AMD is not exactly fair;

That was our hypothesis after seeing the results. One of our goals was to
assess the readiness of some software packages that are popular among our
clients, to see whether it is too early for them to consider purchasing
POWER8 hardware.

> - as Mark pointed out, to use resources efficiently, you'll have to manage
> thread affinity on Power8 (by default you'll need a stride of 8 to use one
> thread per core, a stride of 4 to use two, etc.);
> - for a realistic performance measurement use ~100-1000 atoms/core; the
> ".96" benchmark is a 960-atom input, and running that across 192 threads
> makes little sense.

Are any of the inputs in the 'water' benchmark useful for this purpose?

> What motivates your choice of compiler flags?

We had reports of good performance using these flags on a POWER7 machine.

TIA,
Fabricio
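The stride rule Szilárd describes follows directly from POWER8's SMT8 layout: stride = 8 / (hardware threads used per core). A tiny sketch, with a made-up function name; the resulting value is the kind of stride you would pass to e.g. mdrun's -pinstride option:

```shell
# Computes the pinning stride for a POWER8 core with 8 hardware threads
# (SMT8), per the rule quoted above: stride 8 for 1 thread/core, 4 for 2,
# and so on. Function name is hypothetical.
stride_for() {  # arg: hardware threads used per core
    echo $(( 8 / $1 ))
}

stride_for 1   # -> 8
stride_for 2   # -> 4
stride_for 4   # -> 2
```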
[gmx-users] Testing gromacs on IBM POWER8
Hello there,

We've been testing gromacs-5.1-beta1 on an IBM POWER8. The input used is
'pme.mdp' within the .96 directory of the following benchmark:
ftp://ftp.gromacs.org/pub/benchmarks/water_GMX50_bare.tar.gz

The following link shows our results:
http://suporte.versatushpc.com.br/power/gromacs.php

These are our first results. We're now running more tests using the IBM XL
compilers, the just-released 5.1.0 version, and the OpenBLAS port to POWER8.

Please feel free to ask questions, suggest improvements, and point out
errors.
Re: [gmx-users] Compiling GMX for GPU
On 30-06-2015 18:45, Alex wrote:
> Hey Mark,
>
> Let me first try
> http://www.r-tutor.com/gpu-computing/cuda-installation/cuda7.0-ubuntu
>
> Somehow a few tutorials assumed that apt-get update/upgrade after
> installing that .deb package would automatically install CUDA, which does
> not at all seem to be the case. Let's see if a manual install gives me
> something to write home about.
>
> Thanks,
> Alex

Hi Alex,

The 'cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb' only installs the
package repository information. You now need to install the CUDA SDK
packages themselves. There's no need to install CUDA from the .run file.

[ ]'s
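To spell out the full sequence: the .deb only registers a local apt repository, and the toolkit itself is then pulled in through apt. A dry-run sketch, where echo stands in for sudo so the commands are visible without root; the 'cuda' meta-package name follows NVIDIA's Ubuntu install instructions for CUDA 7.0:

```shell
# The repo .deb registers the local package repository; the 'cuda'
# meta-package then installs the actual toolkit. Replace the echo in
# run() with sudo to execute for real.
run() { echo "sudo $*"; }
run dpkg -i cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb
run apt-get update
run apt-get install cuda
```

After this, nvcc should be under /usr/local/cuda, with no need for the .run installer.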