Hi,
There are some keywords that reduce memory usage in parallel
calculations, such as ON.LowerMemory and a couple of others; check the
manual. Besides these, you can increase the number of nodes
(pretty obvious), shrink the basis set (obvious, too), or use basis
orbitals with a ...
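To make that concrete, here is a minimal sketch of the kind of fdf options meant above. Keyword availability and defaults depend on your SIESTA version, so verify each one against the manual before relying on it:

    # memory-related fdf options (check your SIESTA manual)
    ON.LowerMemory   .true.   # lower-memory variant of the order-N routines
    DirectPhi        .true.   # recompute orbital values on the grid on the fly
                              # instead of storing them

Both trade extra CPU time for a smaller memory footprint.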
I also found something weird. I was optimizing a
nanostructure, and if I do a grep for "max":
siesta: iscf   Eharris(eV)   E_KS(eV)   FreeEng(eV)   dDmax   Ef(eV)
Max    0.077525
Max    0.077525    constrained
* Maximum dynamic memory allocated = 188 MB
siesta: iscf   Eharris(eV)   E_KS(eV) ...
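Incidentally, a listing like the above needs a case-insensitive grep, e.g. (the output file name here is just a placeholder):

    grep -i max si_wire.out

since "dDmax", "Max", and "Maximum dynamic memory" only all match with -i.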
Dear SIESTA developers,
I tested the memory usage of the SZP basis
against the SZ basis. For bulk Si, the SZP
calculation requires twice as much memory as the SZ
calculation. However, for the nanowire (2600 atoms),
the memory usage jumps from 3.4 GB to more than 14 GB!
Is this normal?
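A rough back-of-envelope (my assumptions: the standard orbital counts for Si, and that dense distributed matrices for diagonalization dominate the memory at this system size):

    SZ : 4 orbitals/atom (3s + 3p)       -> 2600 atoms = 10400 orbitals
    SZP: 9 orbitals/atom (3s + 3p + 3d)  -> 2600 atoms = 23400 orbitals
    dense-matrix memory ratio: (23400 / 10400)^2 ~ 5.1

So a 3.4 GB -> 14 GB jump (about 4x) would be in the expected range once matrix storage dominates, whereas for a small bulk Si cell fixed overheads such as the real-space grid dominate instead, which is why only ~2x shows up there.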
Dear SIESTA developers,
I am running SIESTA on some silicon nanowires; each
node in my cluster has 4 GB of memory. When the number
of atoms is less than 1000, vmem and mem are roughly
the same, for example:
resources_used.mem = 2213804kb
resources_used.vmem = 2258976kb
Then after that, the ...
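For reference, these figures come from the batch system; on a PBS/Torque scheduler they can be read back while the job runs with something like (the job ID is a placeholder):

    qstat -f 12345 | grep resources_used

Here mem is the resident set actually held in RAM and vmem is the job's total virtual address space.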