Hello all,
just a short final note on running with the "-quota 8" option on 8
nodes. (From Peter: "PPS: -quota 8 (or 24) might help and still
utilizing all cores, but I'm not sure if it would save enough memory in
the current steps.")
I ran the NMR calculation with "x_nmr_lapw -p
This is not an error; it is done this way simply for convenience: when the
task starts, it writes those words into the .error file, and the file is
changed only when (and if) the task completes successfully.
Best wishes,
Lyudmila Dobysheva
--
13 May 2024, 22:24 +04:00 from Straus, Daniel B
Everything is fine.
It is the default behavior of WIEN2k that a dummy error message is written
while a step is running. When the step completes, the error file is truncated to zero size.
Suppose the job step is killed by the OS (e.g. out of memory) or by
the user; then the starting shell script (run_lapw)
Hi,
I am trying to run WIEN2k 23.2 on a Slurm cluster using a modified version of
the example scripts to make the .machines file.
The jobs seem to be running okay, but there are nondescript messages in the
.error files that I am trying to figure out.
For instance, when running a 4 node job with
Dear Laurence,
I used 40 k-points.
The integration part (-mode integ) causes no problems; the memory-
consuming part is the current part (-mode current).
Your hint for lapw1 further suggests that it would be safer to use 4
parallel calculations instead of eight without losing much
For my own curiosity, is it 40,000 k-points or 40 k-points?
N.B., as Peter suggested, did you try using MPI, which would be four of
nmr_integ:localhost:2
I suspect (but might be wrong) that this will reduce your memory usage by a
factor of 2, and will only be slightly slower than what you have.
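If I read the suggestion correctly, the .machines file would then contain four copies of that line, each launching nmr_integ with 2 MPI cores; a minimal sketch (the localhost entry is illustrative, real node names would come from the scheduler):

```
nmr_integ:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2
nmr_integ:localhost:2
```

With 4 jobs of 2 cores each, all 8 cores stay busy while only 4 partial NMR vectors are held on scratch and in RAM, which would match the factor-of-2 saving mentioned above.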
Hello all,
as far as I can see, a job with 8 cores may be faster, but it uses
double the scratch space (8 partial NMR vectors, whose size depends
on the k-mesh per direction, e.g. nmr_mqx, instead of 4 partial
vectors), and that also doubles the RAM usage of the NMR current
calculation.