The example you showed us was a k-parallel job on only one node.

To fix this, just set USE_REMOTE to zero (either permanently in $WIENROOT, or temporarily in your submitted job script).
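In WIEN2k this switch lives in $WIENROOT/parallel_options, which the csh-based parallel scripts source. A minimal sketch of both options (the variable name and file are from the standard WIEN2k setup; treat the exact surrounding lines of your parallel_options as they are on your installation):

```shell
# Temporary override: put this near the top of your submitted
# (t)csh job script, before run_lapw -p is called.
# It disables ssh to remote nodes, so all k-parallel
# processes are spawned locally on the allocated node.
setenv USE_REMOTE 0

# Permanent alternative: edit $WIENROOT/parallel_options
# and change
#   setenv USE_REMOTE 1
# to
#   setenv USE_REMOTE 0
```

The temporary override is usually preferable on a shared installation, since it only affects your own jobs.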


Another test would be to make a new WIEN2k installation using "ifort+slurm" in siteconfig. It may work out of the box, in particular when using mpi-parallel. It uses srun, but I'm not sure whether all slurm configurations are identical to your cluster's.
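For such an installation, a submitted job might look roughly like the sketch below. This is only an illustration: the partition name, node and task counts are placeholders you must adapt to your cluster, while `run_lapw -p` and the `.machines` file are standard WIEN2k; the details of how the "ifort+slurm" scripts invoke srun depend on your siteconfig choices.

```shell
#!/bin/csh
#SBATCH --job-name=wien2k          # placeholder job name
#SBATCH --nodes=2                  # adjust to your allocation
#SBATCH --ntasks-per-node=16       # adjust to your nodes
#SBATCH --time=02:00:00
#SBATCH --partition=main           # placeholder partition name

# Run from the case directory (shared disk space, as on your cluster).
cd $SLURM_SUBMIT_DIR

# A .machines file describing the parallel layout must exist here;
# it is typically generated from $SLURM_JOB_NODELIST by a small
# helper script before starting the SCF cycle.
run_lapw -p
```

With the "ifort+slurm" setup the remote spawning is done via srun instead of ssh, which is why passwordless ssh between nodes is not needed.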


Am 21.06.2023 um 22:58 schrieb Ilias Miroslav, doc. RNDr., PhD.:
Dear all,

ad: https://www.mail-archive.com/[email protected]/msg22588.html " In order to use multiple nodes, you need to be able to do passwordless ssh to the allocated nodes (or any other command substituting ssh). "

According to our cluster admin, one can use (maybe) 'srun' to allocate and connect to a batch node. https://hpc.gsi.de/virgo/slurm/resource_allocation.html

Would it be possible to use "srun" within WIEN2k scripts to run parallel jobs, please? We are using a common disk space on that cluster.

Best, Miro

_______________________________________________
Wien mailing list
[email protected]
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:
http://www.mail-archive.com/[email protected]/index.html

--
-----------------------------------------------------------------------
Peter Blaha,  Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-158801165300
Email: [email protected]  WWW: http://www.imc.tuwien.ac.at  WIEN2k: http://www.wien2k.at
-------------------------------------------------------------------------
