Hi all,

I'm running jobs with a relatively high number of k-points with QE 6.2. For a 
while I was using the dynamical RAM estimate printed by the pw.x executable 
as a guidepost for requesting memory on a cluster, and this worked well when I 
had a relatively small number of k-points. But it turns out that for these dense 
grids I need significantly more than the estimate indicates: the estimate says 
a few hundred MB, but the actual resources used are more on the order of 
several tens of GB.

In the user guide I find an estimate of the number of double-precision complex 
floating-point numbers that would be needed,

O = m*M*N + P*N + p*N1*N2*N3 + q*Nr1*Nr2*Nr3,

and this seems to give an estimate on the order of what the output file reports.
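For concreteness, here is the rough Python sketch I've been using to plug
numbers into that formula. The band count, plane-wave count, and grid
dimensions below are made-up placeholders (not from my actual run), and I'm
assuming the prefactors m, P, p, q are of order one:

    # Back-of-the-envelope evaluation of the user-guide formula
    # O = m*M*N + P*N + p*N1*N2*N3 + q*Nr1*Nr2*Nr3
    # counted in double-precision complex numbers.

    COMPLEX_BYTES = 16  # one double-precision complex number = 16 bytes

    def qe_memory_estimate_gb(M, N, fft_dense, fft_smooth,
                              m=1.0, P=1.0, p=1.0, q=1.0):
        """Rough per-process memory estimate in GB from the formula above.

        M          -- number of Kohn-Sham states (bands)
        N          -- number of plane waves
        fft_dense  -- (N1, N2, N3) dense FFT grid dimensions
        fft_smooth -- (Nr1, Nr2, Nr3) smooth FFT grid dimensions
        m, P, p, q -- small prefactors, assumed here to be of order one
        """
        N1, N2, N3 = fft_dense
        Nr1, Nr2, Nr3 = fft_smooth
        n_complex = m * M * N + P * N + p * N1 * N2 * N3 + q * Nr1 * Nr2 * Nr3
        return n_complex * COMPLEX_BYTES / 1024**3

    # Illustrative values only; real runs would take these from the pw.x output.
    print(qe_memory_estimate_gb(M=200, N=50000,
                                fft_dense=(96, 96, 96),
                                fft_smooth=(72, 72, 72)))

With placeholder numbers like these it comes out at a few hundred MB, which is
consistent with the estimate pw.x prints but not with what my dense-grid jobs
actually use.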

Is there something I'm missing that goes into determining how much memory a job 
will take? Mostly I'd like to know whether there's a good way to predict how 
much memory future jobs will need, so I can make a more educated guess when I 
request memory.

Any insights you can offer are greatly appreciated.

Best regards,

Eric Suter

----------------------------

PhD Candidate, Dept. of Physics and Astronomy

Center for Simulational Physics

University of Georgia

----------------------------

email: [email protected]

phone: 912-856-3071