Re: [Wien] xxmr2d:out of memory / estimate memory consumption

2023-07-04 Thread Peter Blaha
x lapw1 -nmat_only can be run quickly after init_lapw. It runs in fractions of a second on the frontend and just determines the matrix size and prints it into case.nmat_only. From the matrix size and the knowledge of whether you have inversion or not (case.in1 or in1c) you can estimate the
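For illustration only, a rough walk-through of that estimate (the NMAT value and the arithmetic below are assumptions for this sketch, not numbers from the message):

x lapw1 -nmat_only                    # fast; writes the matrix size NMAT into case.nmat_only
cat case.nmat_only                    # suppose it reports NMAT = 50000
# one full matrix then takes roughly NMAT*NMAT*8 bytes (real, with inversion, case.in1)
# or NMAT*NMAT*16 bytes (complex, without inversion, case.in1c); H and S together double that
echo "50000*50000*16/1024^3" | bc     # ~37 GB for one complex 50000 x 50000 matrix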

Re: [Wien] WARNING elpa_setup during lapw1_mpi

2023-07-04 Thread Peter Blaha
When elpa is installed, it needs an extra flag, --enable-openmp; otherwise multiple threads are not supported. In the tarball of elpa there is a file, INSTALL.md, which explains clearly how to configure, compile and install elpa. --- Anyway, if you do not have a
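A minimal configure sketch, assuming generic MPI compiler wrappers and an install path chosen here only for illustration (the authoritative options are in ELPA's INSTALL.md):

./configure FC=mpif90 CC=mpicc --enable-openmp --prefix=$HOME/elpa
make -j 4 && make install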

Re: [Wien] WARNING elpa_setup during lapw1_mpi

2023-07-04 Thread Gavin Abo
That might be an issue worth asking about over in the Spack Community [1]. Not sure if I'm interpreting the message correctly, but if I am, I believe the "@1.5.4:2" in the spack info output indicates that thread_multiple will be disabled unless an OpenMPI version from 1.5.4 [2] to 2 [3] is
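As a hedged cross-check (assuming the OpenMPI in question was installed through Spack), the variants of the installed spec can be compared against what ompi_info reports at run time:

spack info openmpi     # lists the variants and the version ranges they apply to
spack find -v openmpi  # shows which variants the installed spec was actually built with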

Re: [Wien] xxmr2d:out of memory / estimate memory consumption

2023-07-04 Thread Ilias, Miroslav
Thanks for your answer; so counting all these sizes (H, S, hpanel, spanel, spanelus, ...) is a good way to estimate the memory per thread. Regarding "This guess indicates that you should be OK, but do your nodes really have 10Gb/core? That would be unusually large.": good point, there is some
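An illustrative back-of-the-envelope version of that count, assuming H and S are block-cyclically distributed over the MPI ranks (NMAT, the rank count and the complex element size are example values, not taken from this run):

echo "scale=1; 100000*100000*16/(36*1024^3)" | bc   # ~4.1 GB per rank for a distributed complex H; the same again for S
# the panel buffers (hpanel, spanel, spanelus, ...) add a further, smaller per-rank contribution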

Re: [Wien] WARNING elpa_setup during lapw1_mpi

2023-07-04 Thread Ilias, Miroslav
Hello, sorry for the hidden link; the point is that "ompi_info -a | grep THREAD" says MPI_THREAD_MULTIPLE: yes, but "spack info openmpi@4.1.5" gives "thread_multiple [off] [@1.5.4:2] on, off Enable MPI_THREAD_MULTIPLE support". Maybe this is the cause of the ELPA "WARNING elpa_setup:

Re: [Wien] xxmr2d:out of memory / estimate memory consumption

2023-07-04 Thread Laurence Marks
If you look at the output you provided, each local copy of H and S is 3Gb. Adding another 3Gb for luck suggests that you would need about 360 Gb, assuming that you are running only one k-point with 36 cores. This guess indicates that you should be OK, but do your nodes really have 10Gb/core? That
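Spelled out, that estimate works out roughly as follows (treating the 3 Gb figures as per-process values, which is this reader's interpretation of the message):

echo "(3 + 3 + 3) * 36" | bc   # H + S + ~3 GB headroom per process, times 36 processes: ~324 GB
# i.e. on the order of the quoted 360 GB total, or about 10 GB per core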

Re: [Wien] WARNING elpa_setup during lapw1_mpi

2023-07-04 Thread Laurence Marks
That is a private site, so I cannot read anything. All I can suggest is doing a Google search on "missing MPI_THREAD_MULTIPLE". It looks as if you have to enable this in the openmpi configure, although there might be some bugs. There could also be some environment variables that need to be set.
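A hedged sketch of that configure-time switch (it applies to older OpenMPI releases; from roughly the 3.x series on, MPI_THREAD_MULTIPLE support is built by default, and the install path below is a placeholder):

./configure --enable-mpi-thread-multiple --prefix=$HOME/openmpi
make -j 4 && make install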

[Wien] xxmr2d:out of memory / estimate memory consumption

2023-07-04 Thread Ilias, Miroslav
Greetings, I have the big system from https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22609.html . After fixing the proper openmpi compilation of wien2k I proceeded further to the lapw1_mpi module. But here I got the error "xxmr2d:out of memory" for SBATCH parameters N=1,
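For context, an illustrative SLURM header only; the task count and memory values below are assumptions, not the settings of the failing job:

#SBATCH -N 1
#SBATCH --ntasks-per-node=36
#SBATCH --mem-per-cpu=10G    # making the per-core memory limit explicit helps compare it against the per-process estimate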

Re: [Wien] WARNING elpa_setup during lapw1_mpi

2023-07-04 Thread Ilias, Miroslav
Dear Professor Marks, concerning https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22621.html : I am trying to find out why the ELPA module is complaining about missing MPI_THREAD_MULTIPLE. We have a debate on this at https://git.gsi.de/SDEGroup/SIR/-/issues/85#note_55392 If you