In the Siesta 4.1 manual there is a clear description (Sec. 2.3.2) of how
to run hybrid MPI+OpenMP with OpenMPI.

For your architecture/cluster it may be different; in that case I would
advise you to contact your local HPC admin.
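As a rough sketch of such a hybrid launch (the exact flags depend on your OpenMPI version, scheduler, and machine; the core counts, input name `RUN.fdf`, and placement choices below are placeholders, not a recommendation for any specific cluster):

```shell
# Hybrid MPI+OpenMP sketch: 4 MPI processes x 4 OpenMP threads = 16 cores,
# following the "# of MPI-procs TIMES # of threads per proc ~= # of atoms"
# rule of thumb for a roughly 16-atom system. Adjust for your own machine.
export OMP_NUM_THREADS=4       # OpenMP threads per MPI process
export OMP_PROC_BIND=spread    # pin threads; affinity strongly affects scaling
export OMP_PLACES=cores        # one thread per physical core
# OpenMPI placement: 1 process per NUMA domain, 4 cores (pe) per process
mpirun -np 4 --map-by ppr:1:numa:pe=4 siesta < RUN.fdf > RUN.out
```

Benchmarking a few combinations of `-np` and `OMP_NUM_THREADS` on your own hardware is the only reliable way to find the best split.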



Den tir. 5. feb. 2019 kl. 22.03 skrev 郑仁慧 <z...@ustc.edu>:

> Dear Nick:
>      Thank you very much for your reply. I can not understand your
> explanation well. Would you like to give me an example using openmp or mpi?
> Thank you again for your kind help.
> Sincerely
> Ren-hui Zheng
>
>
>
>
>
>
> From: Nick Papior <nickpap...@gmail.com>
> Date: 2019-02-04 15:07:10
> To: siesta-l <siesta-l@uam.es>
> Subject: Re: [SIESTA-L] openmp
>
> Dear Ren-hui Zheng,
>
> Siesta can benefit from MPI, OpenMP or both.
> The benefit may be briefly summarized as follows:
> MPI distributes the work across orbitals, hence a basic rule of thumb is
> that you get performance scaling up to # of cores == # of atoms.
> OpenMP also distributes across orbitals, but at a finer level. Here you
> *may* get better scaling with more threads than atoms; however, generally I
> would still expect the best performance scaling up to # of cores == # of
> atoms (rule of thumb).
>
> For the hybrid (MPI + OpenMP) the same thing applies, # of MPI-procs TIMES
> # of threads per MPI-proc == # of atoms should give you reasonable scaling.
>
> Again, these are rules of thumb, as performance also depends on your basis etc.
> Note that it may be difficult to get correct scaling with OpenMP, since thread
> affinity plays a big role. How you should place threads depends on your
> architecture, and I would advise you to run some benchmarks on your
> architecture to figure out the best OpenMP settings.
>
> Den søn. 3. feb. 2019 kl. 22.01 skrev 郑仁慧 <z...@ustc.edu>:
>
>> Dear all:
>>        Can the siesta improves calculation speed when it optimizes
>> molecular structure using openmp or mpi. When I  optimize molecular
>> structure with openmp or mpi, the calculation speed cannot be improved. I
>> don not know the reason. Would someone like to explain it? Thank you very
>> much for your help in advance.
>>
>> Sincerely
>> Ren-hui Zheng
>>
>>
>
> --
> Kind regards Nick
>
>
>

-- 
Kind regards Nick
