You can try to figure out what is happening by looking at:

- the way the processors are split among the various parallelization levels (npool, nband, ntask, ndiag); this is written at the beginning of the output. OpenMP parallelization can also be enabled, but it does not always help.
- the dimensions of your system (number of bands, number of plane waves, FFT grid dimensions).
- the time spent in the different routines, including the parallel communication time; this is given at the end of your output and depends on the speed and latency of the interconnect between processors.
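As a sketch of how the levels above are controlled (the flag names are the ones documented for pw.x; the specific values 2 and 4 here are hypothetical and must be tuned to your system, e.g. npool should divide the number of k-points):

```shell
# Hypothetical example: 8 MPI tasks split into 2 k-point pools
# (4 tasks per pool), with a 4-process diagonalization group.
# pw.x prints the resulting process grid at the top of its output.
mpirun -np 8 pw.x -npool 2 -ndiag 4 -inp Siliceous-SOD.in > Siliceous_SOD8.out 2>&1
```

Comparing the per-routine timings at the end of the output for a few such combinations is usually the quickest way to find the sweet spot.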

A concern in the calculation might be the available RAM. If the code starts swapping, it will get very slow.

Another concern is disk I/O, which is generally slow and even slower in parallel. Always use local scratch areas; never write to a remote disk.
If possible, don't write at all.
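For instance, a minimal sketch of the relevant CONTROL-namelist settings (the outdir path is illustrative; disk_io is a documented pw.x keyword whose 'low' setting reduces what is written to disk):

```fortran
&CONTROL
  calculation = 'scf'
  outdir      = '/scratch/local/myjob'  ! hypothetical local scratch path, not a network filesystem
  disk_io     = 'low'                   ! write as little as possible to disk
/
```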

stefano

On 06/11/2015 22:18, Mofrad, Amir Mehdi (MU-Student) wrote:

Dear all QE users and developers,


I have done an scf calculation on 1 processor which took 11h37m. When I ran it on 4 processors it took 5h29m. I'm running the same calculation on 8 processors and it has already been running for 5h17m. Isn't it supposed to take less than 5 hours on 8 processors instead of 4?

I used the following command for parallelization: "mpirun -np 8 pw.x -inp Siliceous-SOD.in Siliceous_SOD8out &> Siliceous_SOD8.screen </dev/null &"

I used to use "mpirun -np 4 pw.x <inputfile> output" to parallelize before; however, it took forever (as if it were idle).

At this stage I really need to do my calculations in parallel and I don't know what the problem is. One thing I'm sure of is that OpenMP and MPI are completely and properly installed on my system.


Any help would be thoroughly appreciated.


Amir M. Mofrad

University of Missouri


_______________________________________________
Pw_forum mailing list
[email protected]
http://pwscf.org/mailman/listinfo/pw_forum

