Dear Prof. Blaha,
Thank you for the explanation.
The general idea was hard to get from the UG. 
 
Sincerely 
Mikhail
 
 
  
>Friday, 15 October 2021, 19:37 +03:00 from Peter Blaha 
><pbl...@theochem.tuwien.ac.at>:
> 
>I think (hope) the parallelization of this script is well described in
>the UG or simply in the "online help" using the -h switch. It has
>multiple options and levels of parallelization:
>
>psi11:/psi11/scratch> optimize_abc -h
>USAGE: optimize_abc [-h -t 2/3 -sp -p -n X -FC X -d X -ctest X Y Z
>-ana X -j "run_lapw -p ..." ]
>optimizes a,(b),c lattice parameters
>-p requires the presence of .machines (single jobstep) and
>         .machines_1...4 (9) for 4 (9) parallel jobsteps in the 2D (3D) case
>
>The script first makes an scf calculation for the present lattice
>parameters in the case directory. This calculation uses the standard
>.machines file when "run_lapw -p" is specified as the job.
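>
>For illustration only, a minimal .machines file for a k-point parallel
>"run_lapw -p" on two hosts could look roughly like this (node1/node2
>are placeholder hostnames; see the UG for the full syntax):
>
>granularity:1
>1:node1
>1:node2
>extrafine:1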
>
>However, it then has to make changes in 4 (or 9 for the 3D case)
>directions. This can be done in serial or in parallel (using the -p
>switch of optimize_abc). So with -p it will spawn 4 (9) run_lapw jobs
>in parallel.
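>
>For example, combining both levels from the usage above (the job string
>after -j is just a typical choice):
>
>optimize_abc -p -j "run_lapw -p"
>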
>If you still have more cores available, you can in addition supply
>.machines_1, .machines_2, ...4 (9) files.
>
>So suppose you have 4 nodes with 16 cores each: you could put 16
>different cores into each of these .machines_X files (e.g. in mpi), but
>run 4 mpi jobs in parallel.
>In addition you create a .machines file with all 64 cores for the
>"starting job" (at least if that is still efficient for your example).
>Remember: a very small cell will run MUCH LONGER in mpi with 64 cores
>(or even crash) than on fewer cores.
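>
>For this 4 x 16 example the files could look roughly as sketched below
>(node1...node4 are placeholder hostnames; check the UG for the exact
>.machines syntax):
>
>.machines (starting job, one 64-core mpi run):
>  lapw0:node1:16 node2:16 node3:16 node4:16
>  1:node1:16 node2:16 node3:16 node4:16
>  granularity:1
>
>.machines_1 (one of the 4 parallel jobsteps, 16 mpi cores on node1):
>  lapw0:node1:16
>  1:node1:16
>  granularity:1
>
>.machines_2 ... .machines_4 analogous, with node2 ... node4.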
>
>The "task" parallelization is MUCH more efficient then heavy mpi
>parallelization.
>
>
>
>On 15.10.2021 at 17:28, Mikhail Nestoklon wrote:
>> Dear wien2k community,
>> I am trying to use the new script optimize_abc_lapw on a cluster.
>> Something about its behavior in terms of compute resource usage
>> confused me, so I am checking how it actually works. I realized that
>> at some point (at least when ‘doing x-zchange’) it runs lapw0 and
>> lapw1c and not lapw0_mpi, etc. The strangest part is that when it
>> starts, it correctly uses the mpi versions of the programs.
>> Is this correct behavior?
>> I run the script as ‘optimize_abc_lapw -p’ at the end of a slurm
>> script which prepares the .machines file (a rough sketch is below).
>> The structure is hexagonal.
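>>
>> For reference, the end of that slurm script looks roughly like this
>> (a single node and a fixed core count are shown for simplicity):
>>
>>   # write a minimal mpi .machines file for the allocated node
>>   echo "lapw0:$(hostname):16"  > .machines
>>   echo "1:$(hostname):16"     >> .machines
>>   echo "granularity:1"        >> .machines
>>   # then start the optimization with parallel jobsteps
>>   optimize_abc_lapw -p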
>>
>> Thank you in advance.
>> Sincerely yours,
>> Mikhail Nestoklon
>>
>
>--
>--------------------------------------------------------------------------
>Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
>Phone:  +43-1-58801-165300 FAX:  +43-1-58801-165982
>Email:  bl...@theochem.tuwien.ac.at WIEN2k:  http://www.wien2k.at
>WWW:  http://www.imc.tuwien.ac.at
>-------------------------------------------------------------------------
 