Hi Mark,
I've checked with Valeria, and the problem is actually in the setup of the system (poor overlap of the temperature distributions). So I think the loss of efficiency in her case should be a factor of 2 or 3, not more.
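For illustration only (not part of Valeria's actual setup): a minimal sketch of a geometric temperature ladder, which is the usual way to get roughly uniform overlap between neighbouring replicas. The function name, the 300-400 K range and the replica count are placeholders.

# Minimal sketch: geometric spacing gives neighbouring replicas roughly
# uniform overlap of their energy distributions. Range and count are
# placeholders, not the values used in this thread.
def geometric_ladder(t_min, t_max, n_replicas):
    """Return n_replicas temperatures spaced geometrically between t_min and t_max."""
    ratio = (t_max / t_min) ** (1.0 / (n_replicas - 1))
    return [t_min * ratio ** i for i in range(n_replicas)]

for i, temp in enumerate(geometric_ladder(300.0, 400.0, 8)):
    print("replica %d: %.2f K" % (i, temp))

Each temperature then goes into the ref_t of the corresponding replica's .mdp file.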
Bye,
Fabio

On 02/23/2011 10:31 AM, Mark Abraham wrote:
>
> On 02/23/11, Valeria Losasso <[email protected]> wrote:
>>
>> Thank you Mark. I found one message from this month concerning this
>> topic, and there are some small suggestions. I don't think that such
>> changes can restore a factor of 26, but it could be worth trying, to
>> see what happens. I will let you know.
>
> They won't. The problem is that every 10 (or so) MD steps every
> processor does global communication to check nothing's gone wrong. That
> resulted from some unrelated bits of code trying to share the same
> machinery for efficiency, and treading on each other's toes.
>
> Mark
>
>> Valeria
>>
>> On Wed, 23 Feb 2011, Mark Abraham wrote:
>>
>> > On 02/23/11, Valeria Losasso <[email protected]> wrote:
>> >
>> >   Dear all,
>> >   I am making some tests to start using replica exchange molecular
>> >   dynamics on my system in water. The setup is OK (i.e. one replica
>> >   alone runs correctly), but I am not able to parallelize the REMD.
>> >   Details follow:
>> >
>> >   - the test is on 8 temperatures, so 8 replicas
>> >   - Gromacs version 4.5.3
>> >   - one replica alone, in 30 minutes with 256 processors, makes
>> >     52500 steps; 8 replicas with 256x8 = 2048 processors make
>> >     300 (!!) steps each = 2400 in total (I arrived at these numbers
>> >     just to see some update of the log file: since I am running on
>> >     a big cluster, I cannot use more than half an hour for tests
>> >     with fewer than 512 processors)
>> >   - I am using mpirun with options -np 256 -s md_.tpr -multi 8
>> >     -replex 1000
>> >
>> > There have been two threads on this topic in the last month or so,
>> > please check the archives. The implementation of multi-simulations
>> > scales poorly. The scaling of replica-exchange itself is not great
>> > either. I have a working version under final development that
>> > scales much better. Watch this space.
>> >
>> > Mark

--
*********************************************
Fabio Affinito, PhD
CINECA
SuperComputing Applications and Innovation Department - SCAI
Via Magnanelli, 6/3
40033 Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)

