Hi,
It seems to be working. I've tested it with
$WIENROOT/example_struct_files/cd16te15sb.struct on 1 node with 8 CPUs.
Regards,
Sergiu Arapan
Robert Laskowski wrote:
Hi,
I agree that iatm_proc_pack is the "color" argument for MPI_Comm_split and,
according to the specification, it should be non-negative.
The idea behind the code around line 319 is:
   if (jatom.le.nat.and.myid_vec.eq.0) then
      iatm_proc_pack=jatom_pe
   else if (jatom.le.nat) then
      iatm_proc_pack=nat
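For reference, here is a minimal sketch of how the "color" argument of
MPI_Comm_split behaves (generic MPI Fortran, not WIEN2k code; the names are
made up). Every rank must pass a color that is either non-negative or
MPI_UNDEFINED; any other negative value is invalid:

   program split_demo
   implicit none
   include 'mpif.h'
   integer :: ierr, myid, color, newcomm
   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
!  color must be >= 0 (or MPI_UNDEFINED for ranks that should
!  not belong to any of the resulting communicators)
   color = mod(myid, 2)
   call MPI_Comm_split(MPI_COMM_WORLD, color, myid, newcomm, ierr)
!  all ranks that passed the same color now share newcomm
   call MPI_Comm_free(newcomm, ierr)
   call MPI_Finalize(ierr)
   end program split_demo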
Dear wien users:
I can also confirm that there is a problem with lapw2_mpi and openmpi for
some specific combinations of the number of atoms and the number of
processors. The same calculations with mpich or hpmpi work fine.
Looking at the source files, I concluded that the problem was related to u
This is a real bug, and the current belief is that using
   iatm_proc_pack=0
for the second condition is OK -- the main author of the MPI aspects is on
holiday (the last I heard). Oddly enough, this bug does not matter with
mpich, mvapich, or sgimpi; no idea why.
N.B., I will warn you that openmp
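To make the suggested change concrete: applied to the excerpt quoted above,
the workaround would read roughly as follows (a sketch only; the surrounding
code and the closing branch are not shown in the thread):

   if (jatom.le.nat.and.myid_vec.eq.0) then
      iatm_proc_pack=jatom_pe
   else if (jatom.le.nat) then
      iatm_proc_pack=0
   end if

so that every process passes a non-negative color to MPI_Comm_split.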
I can confirm that there is a problem with lapw2_mpi under openmpi with the
latest version of Intel MKL -- the problem does not appear to be there with
mpich. I will send Peter/Robert a small example case directly.
On Wed, Dec 23, 2009 at 6:08 AM, Sergiu Arapan wrote:
> Dear wien2k users and developers,
I tested the cd16te15sb case (1 k-point) on CentOS 5 64-bit using WIEN2k_09.2
(Release 29/9/2009) + ifort 11.0.074 + Intel MKL 10.1.0.015 + MVAPICH2.
I had no problems. Here is my .machines file:
$ cat .machines
granularity:1
1:node26 node26 node26 node26 node26 node26 node26 node26
lapw0:node26:8
Oleg
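In case it helps anyone reproducing this: if I read the .machines file
correctly, the "1:node26 ..." line assigns one k-point job parallelized over
eight MPI processes on node26, and "lapw0:node26:8" runs lapw0 with eight
MPI processes on the same node. The cycle is then started in the usual
parallel mode, e.g.

   run_lapw -p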
I have not yet had a chance to install the very latest version of
Wien2k, but I can say, unconditionally, that with the version before
the current one there is NO problem running lapw2_mpi.
I would worry a bit about OpenMPI -- have you tried mvapich?
On Wed, Dec 23, 2009 at 6:08 AM, Sergiu Arapan wrote:
> Dear wien2k users and developers,
Thank you. Those are interesting findings, especially the first one. I
haven't dug that deep into the code after finding the alternative way.
Merry Xmas.
--
Duy Le
PhD Student
Department of Physics
University of Central Florida.
"Men don't need hand t
Dear wien2k users and developers,
I would like to post a few comments on running the parallel version of
wien2k on a distributed-memory cluster. I'm using the most recent version of
wien2k (09.2) on a Linux-based cluster with 805 HP ProLiant DL140 G3 nodes,
each node consisting of Intel Xeon E5345 Quad-Core